Explain all kinds of testing?

Answer Posted / ramyab.mca@gmail.com

Hi,

• Black box testing - not based on any knowledge of
internal design or code. Tests are based on requirements
and functionality.
• White box testing - based on knowledge of the internal
logic of an application's code. Tests are based on coverage
of code statements, branches, paths, conditions.
• unit testing - the most 'micro' scale of testing; to test
particular functions or code modules. Typically done by the
programmer and not by testers, as it requires detailed
knowledge of the internal program design and code. Not
always easily done unless the application has a well-
designed architecture with tight code; may require
developing test driver modules or test harnesses.
• incremental integration testing - continuous testing of
an application as new functionality is added; requires that
various aspects of an application's functionality be
independent enough to work separately before all parts of
the program are completed, or that test drivers be
developed as needed; done by programmers or by testers.
• integration testing - testing of combined parts of an
application to determine if they function together
correctly. The 'parts' can be code modules, individual
applications, client and server applications on a network,
etc. This type of testing is especially relevant to
client/server and distributed systems.
• functional testing - black-box type testing geared to
functional requirements of an application; this type of
testing should be done by testers. This doesn't mean that
the programmers shouldn't check that their code works
before releasing it (which of course applies to any stage
of testing).
• system testing - black-box type testing that is based on
overall requirements specifications; covers all combined
parts of a system.
• end-to-end testing - similar to system testing;
the 'macro' end of the test scale; involves testing of a
complete application environment in a situation that mimics
real-world use, such as interacting with a database, using
network communications, or interacting with other hardware,
applications, or systems if appropriate.
• sanity testing or smoke testing - typically an initial
testing effort to determine if a new software version is
performing well enough to accept it for a major testing
effort. For example, if the new software is crashing
systems every 5 minutes, bogging down systems to a crawl,
or corrupting databases, the software may not be in
a 'sane' enough condition to warrant further testing in its
current state.
• regression testing - re-testing after fixes or
modifications of the software or its environment. It can be
difficult to determine how much re-testing is needed,
especially near the end of the development cycle. Automated
testing tools can be especially useful for this type of
testing.
• acceptance testing - final testing based on
specifications of the end-user or customer, or based on use
by end-users/customers over some limited period of time.
• load testing - testing an application under heavy loads,
such as testing of a web site under a range of loads to
determine at what point the system's response time degrades
or fails.
• stress testing - term often used interchangeably
with 'load' and 'performance' testing. Also used to
describe such tests as system functional testing while
under unusually heavy loads, heavy repetition of certain
actions or inputs, input of large numerical values, large
complex queries to a database system, etc.
• performance testing - term often used interchangeably
with 'stress' and 'load' testing. Ideally 'performance'
testing (and any other 'type' of testing) is defined in
requirements documentation or QA or Test Plans.
• usability testing - testing for 'user-friendliness'.
Clearly this is subjective, and will depend on the targeted
end-user or customer. User interviews, surveys, video
recording of user sessions, and other techniques can be
used. Programmers and testers are usually not appropriate
as usability testers.
• install/uninstall testing - testing of full, partial, or
upgrade install/uninstall processes.
• recovery testing - testing how well a system recovers
from crashes, hardware failures, or other catastrophic
problems.
• security testing - testing how well the system protects
against unauthorized internal or external access, willful
damage, etc; may require sophisticated testing techniques.
• compatibility testing - testing how well software
performs in a particular hardware/software/operating
system/network/etc. environment.
• exploratory testing - often taken to mean a creative,
informal software test that is not based on formal test
plans or test cases; testers may be learning the software
as they test it.
• ad-hoc testing - similar to exploratory testing, but
often taken to mean that the testers have significant
understanding of the software before testing it.
• user acceptance testing - determining if software is
satisfactory to an end-user or customer.
• comparison testing - comparing software weaknesses and
strengths to competing products.
• alpha testing - testing of an application when
development is nearing completion; minor design changes may
still be made as a result of such testing. Typically done
by end-users or others, not by programmers or testers.
• beta testing - testing when development and testing are
essentially completed and final bugs and problems need to
be found before final release. Typically done by end-users
or others, not by programmers or testers.
• mutation testing - a method for determining if a set of
test data or test cases is useful, by deliberately
introducing various code changes ('bugs') and retesting
with the original test data/cases to determine if
the 'bugs' are detected. Proper implementation requires
large computational resources.
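To illustrate the unit-testing entry above, here is a minimal sketch using Python's standard `unittest` framework; the `add` function and its tests are hypothetical examples, not taken from any real project. The last two lines act as the kind of simple test driver the entry mentions.

```python
import unittest

# Hypothetical unit under test: a standalone function, not from any
# real project.
def add(a, b):
    """Return the sum of two numbers."""
    return a + b

class TestAdd(unittest.TestCase):
    # Each test exercises one behaviour of the unit in isolation.
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

# A tiny test driver: load and run the suite programmatically.
suite = unittest.TestLoader().loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```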
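The mutation-testing entry can be sketched the same way. Below is a hand-rolled toy version (real tools automate the code changes): we "mutate" a function by swapping an operator, then re-run the original test data to see whether any case fails, i.e. whether the mutant is "killed". All names here are hypothetical.

```python
def original(a, b):
    return a + b

def mutant(a, b):
    # Deliberately introduced bug: '+' mutated to '-'.
    return a - b

# The original test data: (arguments, expected result) pairs.
test_cases = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]

def kills_mutant(fn, cases):
    """Return True if at least one test case fails against fn,
    i.e. the test data is good enough to detect the change."""
    return any(fn(*args) != expected for args, expected in cases)

assert not kills_mutant(original, test_cases)  # original passes all cases
assert kills_mutant(mutant, test_cases)        # the mutant is detected
```

Note that the `((0, 0), 0)` case alone would not catch this mutant; mutation testing is precisely a way of exposing such weak spots in a test set.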
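Finally, a rough sketch of the load-testing idea: issue many requests at increasing concurrency and measure how response time changes. The `handle_request` function below is a stand-in for a real endpoint (a fixed 10 ms sleep simulating processing time), not an actual server.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(_):
    # Stand-in for a real request handler (e.g., an HTTP endpoint);
    # the 10 ms sleep simulates fixed per-request processing time.
    time.sleep(0.01)
    return "ok"

def measure(concurrency, requests=50):
    """Issue `requests` calls with `concurrency` workers;
    return (elapsed seconds, list of results)."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(handle_request, range(requests)))
    return time.perf_counter() - start, results

for workers in (1, 5, 25):
    elapsed, results = measure(workers)
    print(f"{workers:>2} workers: {len(results)} requests in {elapsed:.2f}s")
```

A real load test would drive an actual deployed system (e.g., with a dedicated tool) and record the point at which latency degrades or requests start failing.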

Thanks & Regards
B.Ramyasri
