What is smoke and sanity testing, and when is it performed?

Answer Posted / sharada

What is the difference between sanity testing and smoke
testing, and when do we conduct these tests on our application?



There's no scientific definition for sanity testing and
smoke testing, and I'm sure someone will take issue with
this answer no matter how I phrase it. Regardless, I use
these terms regularly in my test management. Smoke testing,
to me, is testing performed daily after a new build has
been created. Sanity testing, on the other hand, probes
around an application after a fix has been made. This is an
extension to, not a replacement for, regression testing.

I've always envisioned the phrase 'smoke testing' getting
started back in the early twentieth century with the first
few car manufacturers. Once the car had been assembled,
someone oiled the lifters, poured some gas in the tank, and
fired it up. As the car ran, they looked for smoke where it
didn't belong. In IT, smoke testing is pretty much the
same. Grab the most recent build, fire it up, and run a
series of very high-level tests, looking for major
failures. My test organizations have all taken the same
approach -- our smoke testing was broad and shallow. We
probed each significant feature area, making sure each was
functional and accessible. If the smoke tests passed, testers
(or the infrastructure team) could invest time in deploying
the latest build. Testing then continued, with each tester
pushing deeper into their feature area.
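
As an illustration, a broad-and-shallow smoke suite might look
something like the sketch below. The web application, base URL, and
pytest/requests tooling are all assumptions for the example, not from
the answer above; the point is one shallow probe per feature area.

# A minimal smoke-suite sketch: broad and shallow, one quick probe per
# significant feature area. Assumes pytest and requests are installed.
import pytest
import requests

BASE_URL = "http://test-env.example.com"  # hypothetical deployed build

# One entry per major feature area -- breadth, not depth.
FEATURE_AREAS = ["/login", "/search", "/reports", "/admin"]

@pytest.mark.parametrize("path", FEATURE_AREAS)
def test_feature_area_smoke(path):
    # Fail only on major breakage: is the area functional and reachable?
    resp = requests.get(BASE_URL + path, timeout=10)
    assert resp.status_code == 200, f"Smoke failure in {path}"

Anything deeper than this belongs in the full test pass, not the
smoke suite.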

Another use of smoke testing is to probe a configuration
before running a long test pass. For instance, in my
performance test work, I will have a build deployed, set up
my tools, and run a quick (30 seconds, 5 minutes...it
depends on the size of the test) pass against everything at
a low load/transaction rate. This is just to prove
everything works fine. There's nothing like getting a 5-
hour performance pass set up and kicked off, only to find
the database is non-responsive or there's a problem with
the VLAN somewhere!
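
A pre-flight check like that can be a short script. The sketch below
is hypothetical -- the endpoints, duration, and delay are placeholders
-- but it shows the idea: hit everything at a low transaction rate and
fail fast before committing to the long run.

# Quick low-load pass before a long performance run: abort immediately
# if the database, network, or any endpoint is unresponsive.
# All names and values here are illustrative assumptions.
import time
import requests

BASE_URL = "http://perf-env.example.com"
ENDPOINTS = ["/api/orders", "/api/customers", "/api/reports"]
DURATION_SECONDS = 30  # 30 seconds to 5 minutes, depending on the test
LOW_LOAD_DELAY = 0.5   # pause between passes to keep the load low

deadline = time.time() + DURATION_SECONDS
while time.time() < deadline:
    for path in ENDPOINTS:
        # raise_for_status() stops the pre-flight on any HTTP error
        requests.get(BASE_URL + path, timeout=5).raise_for_status()
    time.sleep(LOW_LOAD_DELAY)
print("Pre-flight passed; safe to kick off the long performance pass.")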

The key to good smoke tests is that they are broad and
shallow, and their goal is simply to ensure basic
functionality. They're an 'all clear' shout to the rest of
the organization that they can jump in on the new build.

Sanity testing, on the other hand, is the testing I do
after regressing a major fix right before release. I had a
test manager who frequently referred to the things we do in
test as 'healthy paranoia' and sanity testing is a perfect
example. When a project is winding down to the finish, we
start cutting release candidates. Test goes to work on
those RCs -- it's funny, but no matter how hard we test
during the development/stabilization cycle, we always seem
to find bugs in RC mode. When a bug is found and accepted
for fix, it's up to the test organization to regress that
fix. Regression testing is a well-understood concept: it's
the act of making sure a recent fix 1) fixed the problem as
intended and 2) didn't cause new bugs in affected or
related areas.
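
Expressed as tests, those two halves look something like this sketch.
The discount-cap bug and calculate_total() are invented purely for
illustration; they stand in for whatever code was actually fixed.

# Hypothetical regression tests for an invented fix: order totals used
# to go negative when a discount over 100% was applied.
def calculate_total(price, discount_pct):
    discount_pct = min(discount_pct, 100)  # the (hypothetical) fix
    return price * (1 - discount_pct / 100)

def test_fix_works_as_intended():
    # 1) The reported bug no longer reproduces.
    assert calculate_total(100, 150) == 0

def test_no_new_bugs_in_related_areas():
    # 2) Nearby behavior still works after the fix.
    assert calculate_total(100, 0) == 100
    assert calculate_total(100, 50) == 50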

Sanity testing is the next stage in that process. After
regressing a fix, that healthy paranoia kicks in and it's
time for testers to probe the rest of the release looking
for any potential downstream impacts. It also means making
sure that all dependencies built appropriately (i.e., if your
application is split between an .exe and a few .dlls, the bug
may have been fixed in the .exe, but it's still important to
fire up each .dll and ensure it built appropriately).
Whereas smoke testing is generally scripted, focuses only
on high-priority cases, and is not intended to find
low-priority bugs, sanity testing is generally ad hoc
(unscripted), broad yet deep, and can find either high- or
low-priority bugs. This is where experience, and a little
paranoia, pays off. I have personally seen the strangest
issues come up during my sanity testing, after deep
regression yielded nothing.
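
Part of that dependency check can be mechanized. As a sketch of the
idea -- the file names are invented, and ctypes.WinDLL is Windows-only
-- one might confirm that every binary in the release is present and
actually loads:

# Verify each release binary exists and each .dll loads cleanly.
# Paths are hypothetical; this assumes a Windows release layout.
import ctypes
import os

RELEASE_BINARIES = [
    r"C:\release\app.exe",
    r"C:\release\sync_engine.dll",
    r"C:\release\addressbook.dll",
]

for path in RELEASE_BINARIES:
    assert os.path.exists(path), f"{path} is missing from the build"
    if path.endswith(".dll"):
        ctypes.WinDLL(path)  # raises OSError if the dll didn't build right
    print(f"{path}: OK")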

Another definition of the term 'sanity testing' is somewhat
related. When a new operating system or other core
dependency shipped, my teams would run some form of testing.
If our dependency on the change was low, we'd talk about these
tests as 'quick sanity checks.' For instance, I used to
work in Mobile Devices at Microsoft, on the ActiveSync
team. There are two components to ActiveSync -- there's the
desktop (or server) component, and there is the device
component. If the PocketPC team made a change to, for
instance, Pocket Outlook, we would be sure to run a test
pass -- if the change had little or nothing to do with
actual inbound and outbound mail (say it was a fix to
address book integration), we'd run 'a quick sanity pass'
with feature owners validating their features. Rather than
running through each and every test case, or picking a
certain set of cases by priority, feature owners would
simply carve out a chunk of the day and spend a few hours
in focused, ad-hoc testing. The goal was to be comfortable
that the changes made didn't affect our features. Sanity
testing was only a viable option, however, when changes
hadn't been made in our core code. If fixes were made
within the Sync code, we would run a formal regression test
pass -- and then sanity check other areas of our product.
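
That decision rule is simple enough to write down. The sketch below is
a hypothetical framing of it, not the team's actual process:

# Hypothetical encoding of the rule above: changes in core sync code
# get a formal regression pass (plus sanity checks elsewhere); changes
# outside it get a focused, ad-hoc sanity pass by feature owners.
def plan_test_pass(change_touches_core_code: bool) -> list[str]:
    if change_touches_core_code:
        return ["formal regression pass", "sanity check other areas"]
    return ["quick sanity pass by feature owners"]

print(plan_test_pass(change_touches_core_code=False))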
