What is smoke and sanity testing, and when is it performed?

Answer Posted / sharada

What is the difference between sanity testing and smoke
testing, and when do we conduct these tests on our
application?



There's no scientific definition for sanity testing and
smoke testing, and I'm sure someone will take issue with
this answer no matter how I phrase it. Regardless, I use
these terms regularly in my test management. Smoke testing,
to me, is testing performed daily after a new build has
been created. Sanity testing, on the other hand, probes
around an application after a fix has been made. This is an
extension to, not a replacement for, regression testing.

I've always envisioned the phrase 'smoke testing' getting
started back in the early twentieth century with the first
few car manufacturers. Once the car had been assembled,
someone oiled the lifters, poured some gas in the tank, and
fired it up. As the car ran, they looked for smoke where it
didn't belong. In IT, smoke testing is pretty much the
same. Grab the most recent build, fire it up, and run a
series of very high-level tests, looking for major
failures. My test organizations have all taken the same
approach -- our smoke testing was broad and shallow. We
probed each significant feature area, making sure each was
at least functional and accessible. If smoke tests passed,
testers (or the infrastructure team) could invest time in
deploying this latest build. Testing then continued on,
with each tester pushing deeper into their feature area.
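
To make 'broad and shallow' concrete, here is a minimal
sketch of what such a smoke suite might look like, assuming
a hypothetical web application with a login page, a reports
area, and a search feature (the URL and paths are made up
for the example):

    # smoke_test.py -- a hypothetical broad-and-shallow smoke suite.
    # One quick check per major feature area, asserting only that
    # each area is up and reachable -- no deep functional coverage.
    import requests

    BASE_URL = "http://test-server.example.com"  # assumed test deployment

    def check(path):
        """Hit one feature area; fail loudly on any major breakage."""
        response = requests.get(BASE_URL + path, timeout=10)
        assert response.status_code == 200, f"{path} returned {response.status_code}"

    def test_login_page_loads():
        check("/login")

    def test_reports_area_reachable():
        check("/reports")

    def test_search_feature_responds():
        check("/search?q=smoke")

Run with pytest after each daily build; if anything fails,
the build never reaches the rest of the team.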

Another use of smoke testing is to probe a configuration
before running a long test pass. For instance, in my
performance test work, I will have a build deployed, set up
my tools, and run a quick (30 seconds, 5 minutes...it
depends on the size of the test) pass against everything at
a low load/transaction rate. This is just to prove
everything works fine. There's nothing like getting a 5-
hour performance pass set up and kicked off, only to find
the database is non-responsive or there's a problem with
the VLAN somewhere!
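
Sketched as a script, that kind of pre-flight pass might
look like the following (the URL, duration, and request
rate are placeholders, not any real setup):

    # preflight.py -- hypothetical low-load check before a long
    # performance pass. Sends about one request per second for 30
    # seconds; any failure aborts before the expensive run starts.
    import sys
    import time
    import requests

    TARGET = "http://test-server.example.com/health"  # assumed endpoint
    DURATION_SECONDS = 30
    REQUEST_INTERVAL = 1.0  # low load: one request per second

    deadline = time.time() + DURATION_SECONDS
    while time.time() < deadline:
        try:
            response = requests.get(TARGET, timeout=5)
            response.raise_for_status()
        except requests.RequestException as error:
            # Unresponsive database, VLAN trouble, whatever --
            # better to stop now than 5 hours in.
            sys.exit(f"Pre-flight failed: {error}")
        time.sleep(REQUEST_INTERVAL)

    print("Pre-flight passed; safe to start the full performance pass.")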

The key to good smoke tests is that they are broad and
shallow, and their goal is to just ensure basic
functionality. They're an 'all clear' shout to the rest of
the organization that they can jump in on the new build.

Sanity testing, on the other hand, is the testing I do
after regressing a major fix right before release. I had a
test manager who frequently referred to the things we do in
test as 'healthy paranoia' and sanity testing is a perfect
example. When a project is winding down to the finish, we
start cutting release candidates. Test goes to work on
those RCs -- it's funny, but no matter how hard we test
during the development/stabilization cycle, we always seem
to find bugs in RC mode. When a bug is found and accepted
for fix, it's up to the test organization to regress that
fix. Regression testing is a well-understood concept: it's
the act of making sure a recent fix 1) fixed the problem as
intended and 2) didn't cause new bugs in affected or
related areas.
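
To make those two goals concrete, a regression pass for a
fix often pairs a direct repro of the original bug with
checks on adjacent, previously-working behavior. A
hypothetical example, assuming the bug was that a discount
over 100% crashed a pricing function:

    # test_regression_discount.py -- hypothetical regression tests.
    import pytest
    from pricing import apply_discount  # assumed module under test

    def test_fix_works_as_intended():
        # 1) The original bug: discounts over 100% used to crash;
        # the fix is expected to reject them cleanly instead.
        with pytest.raises(ValueError):
            apply_discount(price=100.0, percent=150.0)

    def test_related_behavior_unbroken():
        # 2) The fix didn't break nearby cases that already worked.
        assert apply_discount(price=100.0, percent=0.0) == 100.0
        assert apply_discount(price=100.0, percent=25.0) == 75.0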

Sanity testing is the next stage in that process. After
regressing a fix, that healthy paranoia kicks in and it's
time for testers to probe the rest of the release looking
for any potential downstream impacts. It also means making
sure that any dependencies built appropriately (i.e., if
your application is split between an .exe and a few .dlls,
the bug may have been fixed in the .exe, but it's still
important to load each .dll and ensure it built
appropriately).
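
For that dependency check, even something as simple as
attempting to load each shipped binary can catch a bad
build. A rough sketch on Windows using Python's ctypes (the
DLL names are placeholders):

    # check_build.py -- hypothetical post-fix build sanity check
    # (Windows). Even if the fix was in the .exe, make sure each
    # shipped DLL still loads; this catches DLLs that failed to
    # build or link correctly.
    import ctypes
    import sys

    SHIPPED_DLLS = ["appcore.dll", "reporting.dll", "sync.dll"]  # placeholders

    failures = []
    for name in SHIPPED_DLLS:
        try:
            ctypes.WinDLL(name)  # raises OSError if missing or corrupt
        except OSError as error:
            failures.append(f"{name}: {error}")

    if failures:
        sys.exit("Build sanity check failed:\n" + "\n".join(failures))
    print("All DLLs loaded cleanly.")
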
Whereas smoke testing is generally scripted, focuses only
on high-priority cases, and is not intended to find
low-priority bugs, sanity testing is generally ad hoc
(unscripted), broad yet deep, and can find either high- or
low-priority bugs. This is where experience, and a little
paranoia, pays off. I have personally seen the strangest
issues come up during my sanity testing, after deep
regression yielded nothing.



Another definition of the term 'sanity testing' is somewhat
related. When a new operating system or other core
dependency shipped, my teams in the past have run some form
of testing. If our dependency on it was low, we'd talk
about these tests as 'quick sanity checks.' For instance, I
used to work in Mobile Devices at Microsoft, on the
ActiveSync team. There are two components to ActiveSync --
there's the desktop (or server) component, and there is the
device component. If the PocketPC team made a change to,
for instance, Pocket Outlook, we would be sure to run a
test pass -- if the change had little or nothing to do with
actual inbound and outbound mail (say it was a fix to
address book integration), we'd run 'a quick sanity pass'
with feature owners validating their features. Rather than
running through each and every test case, or picking a
certain set of cases by priority, feature owners would
simply carve out a chunk of the day and spend a few hours
in focused, ad-hoc testing. The goal was to be comfortable
that the changes didn't affect our features. Sanity testing
was only a viable option, however, when changes hadn't been
made in our core code. If fixes were made within the Sync
code, we would run a formal regression test pass -- and
then sanity-check other areas of our product.
