The ISTQB defines seven testing principles; two of them are:
Defect clustering, which means that a small number of modules
usually contain most of the defects discovered during
pre-release testing, or are responsible for most of the
operational failures. This suggests focusing test effort on
the most defect-prone areas.
Pesticide paradox, which means you need to update your tests
after repeating them over and over again, because eventually
the same set of test cases will no longer find any new defects.
Pesticide paradox describes a common problem in
exterminating bugs -- both the six-legged kind and the
software kind. Insects that survive the use of a pesticide
are those that are more immune to the poison than others.
These bugs' offspring are then also likely to be immune to
the pesticide. This forces pesticide companies to
continually develop new poisons that will kill whatever
survived their previous products.
The same occurs in software. If we create bug prevention or
testing processes that tend to prevent and find particular
types of bugs, other types of bugs will thrive because our
attention is focused elsewhere. Also, designers,
developers, and testers are likely to learn from their
mistakes and not make the same mistakes again -- they will
make new ones. This causes scripted testing -- whether
manual or automated -- to become less effective over time.
Good testing requires continual development of new testing
ideas. Old regression scripts can be useful but testing
should not stop there. Regression tests will not find the
bugs that are lurking off the beaten path.
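One way to keep tests off the beaten path is to check general properties of the code against varied, freshly generated inputs instead of one frozen input. Here is a minimal Python sketch of that idea; the `slugify` function and the properties checked are invented purely for illustration, not taken from any real project:

```python
import random

def slugify(title: str) -> str:
    # Toy function under test: lowercase, spaces become hyphens.
    return title.lower().replace(" ", "-")

# A scripted regression test exercises one frozen input forever:
assert slugify("Hello World") == "hello-world"

# A property-style test varies its inputs on every run, probing
# inputs a stale script would never reach.
letters = "abcdefghij"
for _ in range(100):
    words = ["".join(random.choices(letters, k=random.randint(1, 8)))
             for _ in range(random.randint(1, 4))]
    title = " ".join(words)
    slug = slugify(title)
    assert " " not in slug          # property: no spaces survive
    assert slug == slug.lower()     # property: output is lowercase
```

Property-based testing libraries such as Hypothesis take this further, but even a hand-rolled loop like the one above resists the pesticide paradox better than a fixed script.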
Joe made the point clear: density has nothing to do
with clustering. Yes, yuva could have meant "higher
density", but even then that is not what is important.
Many things in the world reflect the "principle of factor
sparsity", and software testing is no exception. For
example, the two-dimensional arrays used in most programs
turn out to be sparse. Let me make it simple here:
long ago, an Italian economist named Pareto observed
that 80% of Italy's wealth was owned by 20% of the
population. Since then it has become a general rule of
thumb, applied to many things in the world we live in.
There exists a hypothesis (with a reasonable amount of
reality in it) that bugs are typically clustered in one
area: something like 80% of the defects being present in
20% of the overall codebase.
So, when you hear that an application has 1000 known bugs
and look at the details of their distribution, many of them
perhaps originate from 20% of the modules, therefore forming
a "cluster of defects".
If you were to buy this principle, then you have to start
thinking about where you can apply it. One example would be
to suspend testing if you sniff a cluster: instead of
finding all the rest of the defects, an upfront alert from
the testers could make the developers seriously review that
portion of the app again, in the hope of fixing the
found + about_to_be_found defects. This requires
speculative intelligence and is risky, though there is an
engineering endorsement of the principle. Another example
(in case the project is over and the test summary
information is available for a post-mortem) would be to do
a thorough root-cause analysis on the cluster.
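A post-mortem root-cause analysis on a cluster can start as a simple tally of causes per module. The sketch below assumes a defect log where each record carries a module and a root-cause tag; the records, module names, and cause labels are all hypothetical:

```python
from collections import Counter

# Hypothetical post-mortem records (invented for illustration).
defect_log = [
    {"id": 101, "module": "payments", "root_cause": "missing requirement"},
    {"id": 102, "module": "payments", "root_cause": "boundary error"},
    {"id": 103, "module": "payments", "root_cause": "missing requirement"},
    {"id": 104, "module": "auth",     "root_cause": "race condition"},
]

# Count root causes only within the clustered module.
causes = Counter(d["root_cause"] for d in defect_log
                 if d["module"] == "payments")
print(causes.most_common())
# → [('missing requirement', 2), ('boundary error', 1)]
```

A tally like this turns a vague "payments is buggy" into an actionable finding -- here, that most cluster defects trace back to requirements rather than coding mistakes.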
Defect Clustering: finding more defects in one particular
area or in one functionality. The reasons might be (1) lack
of developer experience, (2) poor requirements, (3) several
defects fixed by many developers in the same functionality,
(4) fixing one defect introducing a new defect in the same area.
Pesticide Paradox: we need to update our test cases, as
executing the same old test cases will no longer produce new
defects.