Nowhere is the saying
"beauty is in the eye of the beholder" more true than in the area of
software. Perhaps because software is not a tangible product, each beholder is
likely to have a different eye for what constitutes a quality system. This can
prove a challenge for projects that deliver software systems. However, there
are some measures the project manager can take to improve the chances of
delivering a working system that everyone can agree measures up to the
organization's need for quality. This is not intended
to be a comprehensive article on the art of software testing, or a "how
to" on the subject. The business of testing software is best left to the
professionals in your Quality Assurance group. However, there are some tips that
I can offer the project manager that, when combined with the QA group's
expertise, will improve the chances of meeting your organization's need for
quality.
1.
Make sure that quality
targets are set for the system. The question is: "What does a quality
system look like?" You need to have your stakeholders describe this to you
so you can communicate these goals and objectives to the team. Targets should
be measurable and the project should have access to measurement tools. An
example of a measurable target: the system cannot pass QA testing with any
severity 1 defect. The project should have access to a problem reporting tool
capable of recording and tracking defects.
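A measurable target like "no open severity 1 defects" can be checked mechanically against an export from your problem reporting tool. The sketch below is illustrative; the record fields and function name are assumptions, not tied to any particular tool.

```python
# Sketch of a release quality gate over defect records exported from an
# issue tracker. Severity 1 is the most severe; the gate fails if any
# defect more severe than the allowed threshold is still open.

def passes_quality_gate(defects, max_severity_allowed=2):
    """Return True only if no open defect is more severe than allowed."""
    open_defects = [d for d in defects if d["status"] != "closed"]
    return all(d["severity"] >= max_severity_allowed for d in open_defects)

defects = [
    {"id": 101, "severity": 1, "status": "closed"},
    {"id": 102, "severity": 2, "status": "open"},
]
print(passes_quality_gate(defects))  # True: the severity 1 defect is closed
```

The point is not the code itself but that the target is unambiguous: anyone running the same check against the same data gets the same pass/fail answer.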
2.
Targets should be set
for all aspects of quality. The obvious targets are those that have to do with
functionality: does the system do what it is supposed to do? But there are
other facets of quality, such as performance: does the system perform each task
as quickly as it should? And load: does the system accommodate as many
simultaneous tasks as it should? If the system is expected to serve a user
community of 1,000 users, can all 1,000 log on at once? 500? Can the users
perform their tasks once they're logged on to the system and experience the
desired level of performance? Finally, does the system fail as it should? A
system that is designed to accommodate 500 users at one time should provide the
501st user with an appropriate error message telling them of the system
limitation.
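"Failing as it should" can itself be expressed as a testable behaviour. The sketch below is a minimal illustration of the 500-user example above; the class and method names are hypothetical.

```python
# Illustrative sketch of graceful failure at a designed capacity limit:
# the 501st user gets a clear message rather than a crash or a hang.

class SessionManager:
    def __init__(self, capacity=500):
        self.capacity = capacity
        self.active = set()

    def log_on(self, user_id):
        if len(self.active) >= self.capacity:
            # Fail as designed: an appropriate error message.
            return (False, f"System limit of {self.capacity} concurrent "
                           f"users reached; please try again later.")
        self.active.add(user_id)
        return (True, "logged on")

mgr = SessionManager(capacity=500)
results = [mgr.log_on(f"user{i}") for i in range(501)]
print(results[499][0], results[500][0])  # True False
```

A load test can then assert both halves of the target: the 500th user succeeds, and the 501st is refused with the designed message.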
3.
Quality goals and
objectives are an important deliverable and should be part of the project plans
that get approved by the stakeholders in the gate meeting that marks the
transition from planning to implementation. Approval by the recognized
stakeholders will show the organization that your team has a clear idea of what
quality looks like and is capable of demonstrating that it has been achieved.
4.
Quality activities
should be governed by a plan and it's your responsibility to author the plan.
The plan should identify all the different types of tests and all the
activities required to perform them. Activities should include writing the test
plans, writing test scripts, setting up the test environments, developing test
data, populating databases with test data, and initiating the project in the
issue reporting tool. Don't forget to include any activities necessary to
implement automated test tools, such as license purchases.
The plan shouldn't include the individual test plans. These are usually
developed in parallel with the software by the QA group. The quality plan is a
component of the project plans and should be approved with the rest of the
plans at the gate review meeting marking the transition from planning to
implementation. The identification of quality goals and objectives shows the
organization that your team knows what quality looks like; the approval of the
quality plan shows that your team knows how to achieve those goals and
objectives.
5.
Plan to eliminate as
many defects from the software as early in the development cycle as possible.
Remember that the cost of fixing defects increases from development, to system
testing, to QA testing, to beta testing, to production. This increase in cost
is not linear; in some cases it can be exponential, so planning to eliminate as
many defects as early as possible will deliver your quality targets at the
lowest possible price. Some of the cheapest testing comes from reviews of
design documents and code. These reviews or walkthroughs use the experience on
your team to identify mistakes before time and effort are expended on testing. The
second cheapest form of testing is unit testing. Make sure that your
programmers have the tools they need to unit test. There are many excellent
automated test tools out there that save considerable time building the
envelopes, stubs, and drivers that are otherwise necessary for unit testing.
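A stub is simply a stand-in for a real dependency, so the unit can be tested in isolation. The sketch below uses only Python's standard library; the billing function and its rate-lookup dependency are hypothetical examples, not anyone's real API.

```python
# Minimal sketch of unit testing with a stub. The stub replaces a real
# external rate service with a fixed, known answer, so the unit under
# test can be exercised without the rest of the system.

from unittest import mock

def invoice_total(hours, rate_lookup):
    """Unit under test: total = hours times the rate from a lookup service."""
    return hours * rate_lookup("standard")

stub_lookup = mock.Mock(return_value=50.0)   # the stub
total = invoice_total(8, stub_lookup)
print(total)  # 400.0
stub_lookup.assert_called_once_with("standard")  # verify the interaction
```

Tools like this are what save the time otherwise spent hand-building stubs and drivers, which is exactly the argument for giving programmers proper unit-testing tooling.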
6.
Most software systems
will have a lifecycle that includes upgrades or new releases. New functionality
and features in a new release are the primary focus of attention but some form
of regression testing should be done to ensure that no existing features or
functionality has been broken in the new release. The best source of these
regression tests is the set of tests produced for your project, so don't
discard or lose track of them; store them in a location easily accessible by
the next project - you could be that project's manager, so the time you save
here may directly benefit you! Regression testing can be labour intensive and
time consuming, which is why software vendors such as Hewlett Packard offer automated
testing tools. The time and cost invested in the initial project to introduce
an automated testing tool can be a powerful weapon for reducing maintenance
costs of the system. Your project's sponsor may be unwilling to spend the money
on automated tools at this point, but at least make the business case so they
can make an informed decision.
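The value of keeping the original tests is that they can be rerun unchanged on every release. As a hedged sketch, using only Python's standard-library unittest (a commercial tool would serve the same role), with a hypothetical discount function standing in for existing behaviour:

```python
# Sketch of a reusable regression suite: the original project's tests,
# kept and rerun against each new release to confirm nothing broke.

import unittest

def apply_discount(price, percent):
    """Existing behaviour the new release must not break."""
    return round(price * (1 - percent / 100), 2)

class RegressionTests(unittest.TestCase):
    def test_standard_discount(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(100.0, 0), 100.0)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(RegressionTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

Checked into the same repository as the system, a suite like this is the "easily accessible location" the next project needs.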
7.
Review the test cases to
ensure that each one has a set of pre-conditions for running the test and criteria
for passing. Anyone capable of using the system should be able to take the test
case and execute it without having to resort to consultation with the author.
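One way to enforce that review is to make the pre-conditions and pass criteria explicit fields of the test case itself. The structure below is purely illustrative; the field names are assumptions, not a standard.

```python
# Sketch of a self-contained test case: anyone should be able to check
# the pre-conditions, follow the steps, and judge pass/fail without
# consulting the author.

from dataclasses import dataclass

@dataclass
class TestCase:
    case_id: str
    preconditions: list  # must all hold before the test is run
    steps: list          # executable by anyone capable of using the system
    pass_criteria: str   # unambiguous definition of "passed"

tc = TestCase(
    case_id="TC-042",
    preconditions=["Test database loaded with baseline data",
                   "Tester has a standard user account"],
    steps=["Log on", "Open the monthly report", "Export to PDF"],
    pass_criteria="PDF downloads and matches the on-screen report",
)
print(tc.case_id, len(tc.preconditions))  # TC-042 2
```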
8.
Ensure that everyone is
trained on the use of the issue tracking tool your organization uses. It's up
to each member of the QA test team to create a report when a test case fails
and it's up to the development team to resolve the issue. The process usually
works fairly smoothly right up to the point of closing the ticket. Closing the
ticket tends to be viewed as a lower priority than resolving the ticket so
metrics can become skewed as the number of resolved but unclosed tickets
builds.
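The skew is easy to see if you count tickets by state: resolved-but-unclosed work piles up and distorts any "open vs. done" metric. A minimal sketch, with illustrative ticket fields not tied to any particular tracking tool:

```python
# Counting tickets by status exposes the resolved-but-unclosed backlog
# that skews quality metrics when closing is treated as low priority.

from collections import Counter

tickets = [
    {"id": 1, "status": "closed"},
    {"id": 2, "status": "resolved"},
    {"id": 3, "status": "resolved"},
    {"id": 4, "status": "open"},
]

counts = Counter(t["status"] for t in tickets)
resolved_not_closed = counts["resolved"]
print(dict(counts), resolved_not_closed)
```

Reporting that "resolved" count alongside "open" and "closed" keeps the metric honest until the tickets are actually closed.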
9.
Include testing metrics
in your project progress or status reports and make certain that the metrics
include the quality goals and objectives described in your charter, SOW, or
scope statement. Your progress reports should also form the base for your Gate
Review meeting materials; the metrics are the means by which you will prove
you've met the gating criteria for quality.
Article Source: http://EzineArticles.com/?expert=Dave_Nielsen
Article Source: http://EzineArticles.com/7189549