lakshmanaraj bg


City # Mumbai
Country # India
Profession #
User No # 89319
Total Questions Posted # 0
Total Answers Posted # 65

Total Answers Posted for My Questions # 0
Total Views for My Questions # 0

Users Marked my Answers as Correct # 124
Users Marked my Answers as Wrong # 35
Questions / { lakshmanaraj bg }
Questions Answers Category Views Company eMail




Answers / { lakshmanaraj bg }

Question { Microsoft, 45023 }

What is automation Framework?


Answer

An automation framework is a set of assumptions, concepts and tools that provides support for automated software testing.

The main advantage of such a framework is the low cost for maintenance.

Automated testing is the process of running part or whole of the software testing activity using automation tools.

The frameworks are divided into five basic types:

1. Test Script Modularity Framework
2. Test Library Architecture Framework
3. Keyword-Driven or Table-Driven Testing Framework
4. Data-Driven Testing Framework
5. Hybrid Test Automation Framework

Thus, these are the basic types of automation framework.
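As an illustration, here is a minimal sketch of the data-driven type (number 4 in the list above). The system under test and the field names are hypothetical stand-ins, not from any specific tool:

```python
# Minimal data-driven testing sketch: the test logic is written once
# and driven by a table of (input, expected) pairs kept separately.

def is_valid_password(password):
    # Hypothetical system under test: passwords must be 8+ characters.
    return len(password) >= 8

# Test data is kept separate from the test logic, which is the
# defining idea of a data-driven framework.
TEST_DATA = [
    ("longenough", True),
    ("short", False),
    ("exactly8!", True),
]

def run_data_driven_tests(data):
    """Run the same test logic against every row of test data."""
    results = []
    for password, expected in data:
        results.append(is_valid_password(password) == expected)
    return results
```

Adding a new test case then means adding a row of data, not writing new code, which is where the low maintenance cost mentioned above comes from.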

Is This Answer Correct ?    9 Yes 0 No

Question { 10669 }

what is Software testing?


Answer

Software testing:

Software testing is an essential part of software development, used to identify the correctness, completeness, security and quality of the developed software.

Its main objective is to detect errors in the software.

Unit testing: (Dynamic)

In unit testing, individual modules are tested against the specifications produced during design for those modules.

Unit testing is the first level of dynamic testing and is first the responsibility of the developers and then of the testers.

Unit testing is complete when the expected test results are met or any differences are explainable/acceptable.
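A minimal unit-test sketch using Python's built-in unittest module; the discount function standing in for a "module" is hypothetical:

```python
# Unit-test sketch: a single module is tested against the
# specification produced during design for that module.
import unittest

def apply_discount(price, percent):
    """Hypothetical module under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

class TestApplyDiscount(unittest.TestCase):
    def test_normal_discount(self):
        # Expected result comes from the design specification.
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percent_rejected(self):
        # The specification says out-of-range percentages are errors.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Running the suite and comparing actual against expected results is exactly the "results are met or differences are explainable" criterion described above.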

INTEGRATION TESTING: (Dynamic)

Many Unit Tested Modules are combined into subsystems, which are then tested.

The goal is to see if the modules can be integrated properly.

1. BIG BANG APPROACH: (Non-incremental)

- Combines all programs/modules/subsystems for execution at once.

Disadvantage:

Tracing a defect down to its source is not easy.

2. TOP DOWN APPROACH:

In this approach testing is conducted from the main module down to the sub-modules.

If a sub-module is not yet developed, a temporary program called a STUB is used to simulate it.

3. BOTTOM UP APPROACH:

In this approach testing is conducted from the sub-modules up to the main module.

If the main module is not yet developed, a temporary program called a DRIVER is used to simulate it.
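The stub and driver ideas can be sketched as follows; all module names are hypothetical stand-ins:

```python
# STUB: simulates a not-yet-developed sub-module (top-down integration).
# DRIVER: simulates a not-yet-developed main module (bottom-up integration).

def tax_rate_stub(region):
    """STUB: stands in for the real tax sub-module with a canned value."""
    return 0.10  # fixed dummy rate until the real sub-module exists

def compute_total(amount, rate_lookup, region="IN"):
    """Module under test: normally calls the real tax sub-module,
    but here receives the stub instead."""
    return amount * (1 + rate_lookup(region))

def driver_for_compute_total():
    """DRIVER: exercises compute_total before any real caller exists."""
    return compute_total(100.0, tax_rate_stub)
```

In top-down integration the stub substitutes for the callee; in bottom-up integration the driver substitutes for the caller.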

SYSTEM TESTING:

Here testing is conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements.

A complete software build is made and tested to show that all requirements are met.

TYPES OF SYSTEM TESTING:

-VOLUME TESTING:

To find weaknesses in the system with respect to its handling of large amounts of data during a short time period. (Focus is the amount of data.)

- STRESS TESTING:

The purpose of stress testing is to test the system's capacity: whether it can handle a large number of transactions during peak periods. (Focus is the moment.)

- CONCURRENCY TESTING:

It is similar to stress testing; here we are checking the system's capacity to handle a large number of transactions in an INSTANT.

-PERFORMANCE TESTING:

Performance testing can be accomplished in parallel with volume and stress testing, because system performance is assessed under all conditions.

System performance is generally assessed in terms of response time and throughput rates under different processing and configuration conditions.

REGRESSION TESTING:

It is the re-execution of some subset of test cases that have already been executed, to ensure that changes (after a defect fix) have not propagated unintended side effects.

Regression Testing is the activity that helps to ensure that changes do not introduce unintended behavior or additional bugs.

SECURITY TESTING:

Attempts to verify that the protection mechanisms built into a system will in fact protect it from improper penetration.

The system is protected in accordance with its importance to the organization, with respect to security levels.

RECOVERY TESTING:

Forcing the system to fail in different ways and checking how fast it recovers from the failure.

COMPATIBILITY TESTING:

Checking whether the system is functionally consistent across all platforms.

SERVER TESTING:

Here we have to check volume, stress, performance, data recovery, backup and restore, error trapping and data security, as a whole.

Here we have to check the PAIN (e-business concept).

WEB TESTING:

In web testing we have to do compatibility testing, browser compatibility, video testing (pixel testing on fonts and alignment), modem speed, web security testing and directory set-up.

Web testing is real-time and highly tedious.

An automated tool is a must for web testing.

ACCEPTANCE TESTING:

Acceptance testing (also known as user acceptance testing) is a type of testing carried out to verify that the product is developed as per the standards and specified criteria and meets all the requirements specified by the customer.

ALPHA TESTING:
Alpha testing is conducted at the developer's site, by the customer. The software is tested in a natural setting with the developer 'looking over the shoulder' of the user (i.e. the customer) and recording errors and usage problems.

Alpha tests are conducted in a controlled environment.


BETA TESTING:

Beta Testing is conducted at one or more customer sites by the end user of the software.

Here the developer is not present during testing.

Here the client tests the software or system at his site, recording defects and sending comments to the development team.

So the above is the detailed description about the System Testing.

static testing:

During static testing (verification) you have a checklist
to check whether the work you are doing is going as
per the set standards of the organization.

These standards can be for Coding, Integrating and Deployment.

Dynamic Testing:

Dynamic Testing (validation) involves working with
the software, giving input values and checking if
the output is as expected.

Random testing:

Random testing as the name suggests has no particular approach to test.

It is an ad hoc way of testing. The tester randomly picks modules to test by inputting random values.

E.g. an output is produced by a particular combination of inputs. Hence, different and random inputs are used.


Monkey testing:

Monkey testing is a type of random testing with no specific test case written.

It has no fixed perspective for testing.

E.g. input random and garbage values in an input box.
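A minimal monkey-testing sketch along those lines, assuming a hypothetical input handler: random printable garbage is thrown at it with no specific test case, and only crashes (not graceful rejections) count against it:

```python
# Monkey-testing sketch: feed random garbage values into an input
# handler and watch only for crashes, not for specific behaviours.
import random
import string

def parse_quantity(text):
    """Hypothetical input box under test: accepts integers 1-999."""
    value = int(text)  # raises ValueError on garbage
    if not 1 <= value <= 999:
        raise ValueError("out of range")
    return value

def monkey_test(runs=200, seed=42):
    """Throw random strings at the input; count unexpected crashes."""
    rng = random.Random(seed)
    unexpected_errors = 0
    for _ in range(runs):
        garbage = "".join(rng.choice(string.printable)
                          for _ in range(rng.randint(0, 8)))
        try:
            parse_quantity(garbage)
        except ValueError:
            pass  # graceful rejection is acceptable behaviour
        except Exception:
            unexpected_errors += 1  # a crash the monkey test exposed
    return unexpected_errors
```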

Non Functional Testing:

-Performance Testing:

Performance Testing is done to determine the software characteristics like response time, throughput or MIPS (Millions of instructions per second) at which the system/software operates.

-Load Testing:

Load testing tests the software or component with increasing load: the number of concurrent users or transactions is increased, and the behavior of the system is examined to check what load the software can handle.

The main objective of load testing is to determine the response time of the software for critical transactions and make sure that they are within the specified limit.
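A rough load-test sketch under stated assumptions: the "transaction" is a trivial stand-in for a real critical transaction, and the response-time limit is invented purely for illustration:

```python
# Load-testing sketch: run simulated concurrent transactions and
# check every response time stays within the specified limit.
import time
from concurrent.futures import ThreadPoolExecutor

RESPONSE_TIME_LIMIT = 0.5  # seconds; an assumed specified limit

def transaction():
    """Stand-in for a critical transaction; returns its response time."""
    start = time.perf_counter()
    sum(range(10_000))  # simulated work
    return time.perf_counter() - start

def load_test(concurrent_users):
    """Run transactions concurrently; True if all meet the limit."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        times = list(pool.map(lambda _: transaction(),
                              range(concurrent_users)))
    return all(t < RESPONSE_TIME_LIMIT for t in times)
```

In a real load test the user count would be stepped up until response times exceed the limit, which identifies the load the software can handle.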

-Stress Testing:

Stress testing tests the software with a focus on checking that the software does not crash if hardware resources (like memory, CPU, disk space) are not sufficient.

Examples:
A stress test of the CPU can be done by running the software application at 100% load for some days, which ensures that the software will run properly under normal usage conditions.


Suppose you have software with a minimum memory requirement of 512 MB RAM; the application is then tested on a machine that has 512 MB of memory, under extensive load, to find out the system/software behavior.

-Usability Testing:

Usability means the software's capability to be learned and understood easily and how attractive it looks to the end user.

Compatibility Testing:

Testing whether the system is compatible with other systems with which it should communicate.

Compatibility testing measures how well pages display on different clients.

For example: browsers, different browser version, different operating systems, and different machines.

At issue are the different implementations of HTML by the various browser manufacturers and the different machine platform display and rendering characteristics.

Also called browser compatibility testing and cross-browser testing.

Compatibility is a product feature and can have different levels of compliance.

White box testing:

1. Basis path testing.
2. Flow graph notation.
3. Cyclomatic complexity.
4. Deriving test cases.
5. Graph matrices.

Control structure testing:

1. Conditions testing.
2. Dataflow testing.
3. Loop testing.

structural and behavioral Testing:

Structural testing is white box testing, whereas behavioral testing is black box testing.


migration testing:

1. Testing conducted when an application, or its version, is changed is migration testing.

2. Testing of programs or procedures used to convert data from existing systems for use in replacement systems.


Localization testing:

Localization testing is nothing but testing the languages: nowadays applications are developed in many languages, so here we check the application in different languages.

Development Testing:

Development testing denotes the aspects of test design and implementation most appropriate for the team of developers to undertake.

In most cases test execution initially occurs with the developer testing group that designed and implemented the tests, but it is good practice for developers to create their tests in such a way as to make them available to independent testing groups for execution.

Independent testing:

Independent testing denotes the test design and implementation most appropriately performed by someone
who is independent from the team of developers.

In most cases test execution initially occurs with the independent testing group that designs and implements the tests, but independent testers should create their tests so as to make them available to the developer testing groups for execution.

Goal of Software Testing:

* Demonstrate That Faults Are Not Present.
* Find Errors.
* Ensure That All the Functionality Is Implemented.
* Ensure the Customer Will Be Able To Get His Work Done.

IT TAKES ME ONE HOUR TO FINISH THIS..

THANK YOU.

Is This Answer Correct ?    4 Yes 0 No


Question { 8051 }

what is Software Audit?


Answer

Software audits are distinct from software peer reviews and software management reviews in that they are conducted by personnel external to, and independent of, the software development organization, and are concerned with compliance of products or processes, rather than with their technical content, technical quality, or managerial implications.

"The purpose of a software audit is to provide an independent evaluation of conformance of software products and processes to applicable regulations, standards, guidelines, plans, and procedures".

Software audit can mean:

1. A software licensing audit, where a user of software is audited for licence compliance.

2. software quality assurance, where a piece of software is audited for quality.

3. A software audit review, where a group of people external to a software development organisation examines a software product.

4. A physical configuration audit.

5. A functional configuration audit.

Is This Answer Correct ?    4 Yes 0 No


Question { Logica CMG, 8337 }

PET model architecture..?


Answer

Everyone knows that PET is a refined form of the V-model.

PET means,
"PROCESS INVOLVES EXPERTS, TOOLS AND TECHNIQUES".

The V-model is a software development model which can be seen as an extension of the waterfall model. Instead of moving down in a linear way, the process steps are bent upwards after the coding phase, to form the typical V shape. The V-model demonstrates the relationships between each phase of the development life cycle and its associated phase of testing.

PET model architecture plays an important role in SOFTWARE TESTING.

Is This Answer Correct ?    1 Yes 0 No

Question { AZTEC, 10750 }

what are test metrics?


Answer

Test metrics help in analyzing the current level of maturity in testing and give a projection of how to go about testing activities, by allowing us to set goals and predict future trends.

Metrics are measurements. It is as simple as that. We use them all the time in our everyday lives.

There are two basic types of metrics.

The first type is the elemental or basic measurement, such as weight, length, time, volume, or in this example, cost.

The second type is derived, normally from the elemental measurements. At the meat counter, the derived metric is dollars/weight (viz. $7.49/kg). This is called a normalized metric.
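In code, the elemental vs. derived distinction might look like this (all values are illustrative only; the defects/KLOC pair is a common testing analogue of the meat-counter example, not from the original answer):

```python
# Two metric types in code: elemental measurements, and a derived
# (normalized) metric computed by dividing one elemental by another.

def normalized(numerator, denominator):
    """Derived metric: one elemental measurement per unit of another."""
    return numerator / denominator

# Elemental (basic) measurements.
cost_dollars, weight_kg = 7.49, 1.0   # the meat-counter example
defects_found, kloc = 42, 12.0        # a common testing pair

price_per_kg = normalized(cost_dollars, weight_kg)   # $/kg
defect_density = normalized(defects_found, kloc)     # defects per KLOC
```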

Is This Answer Correct ?    1 Yes 0 No

Question { IBM, 22882 }

What is the difference between bottom-up and top-down
integration? Which is effective?


Answer

Integration testing:

Once the units have been written, the next stage would be to put them together to create the system. This is called integration.

It involves building something large from a number of smaller pieces.

The purpose of integration testing is to expose defects in the interfaces and in the interactions between integrated components or systems.

The test bases for integration testing can include:

1) The software and system design.

2) A diagram of the system architecture.

3) Workflows and use-cases.

The test objects would essentially be the interface code.

This can include subsystems' database implementations.

Before integration testing can be planned, an integration strategy is required.

This involves making decisions on how the system will be put together prior to testing.

There are three commonly quoted integration strategies, namely:

1) Big-Bang Integration.

2) Top-Down Integration.

3) Bottom-Up Integration.

Big-Bang Integration:

This is where all units are linked at once, resulting in a complete system.

When testing of this system is conducted, it is difficult to isolate any errors found, because attention is not paid to verifying the interfaces across individual units.

Top-Down Integration:

This is where the system is built in stages, starting with components that call other components.

Bottom-up Integration:

This is the opposite of top-down integration and the components are integrated in a bottom-up order.

There may be more than one level of integration testing.

For example:

Component integration testing focuses on the interactions between software components and is done after component (unit) testing.

Developers usually carry out this type of integration testing.

System integration testing focuses on the interactions between different systems and may be done after system testing of each individual system.

For example, a trading system in an investment bank will interact with the stock exchange to get the latest prices for its stocks and shares on the international market.

Testers usually carry out this type of integration testing.

It should be noted that testing at system integration level carries extra elements of risk.

These can include: at a technical level, cross-platform issues; at an operational level, business workflow issues; and at a business level, risks associated with ownership of regression issues, where a change in one system may have a knock-on effect on other systems.

Is This Answer Correct ?    3 Yes 0 No

Question { 22186 }

What are the different FUNCTIONAL testing techniques?


Answer

Functionality Testing:

Test all the links in the web pages, the database connection, the forms used in the web pages for submitting or getting information from users, and cookies.

Check all the links:

•Test the outgoing links from all the pages from specific domain under test.

•Test all internal links.

•Test links jumping on the same pages.

•Test links used to send the email to admin or other users from web pages.

•Test to check if there are any orphan pages.

•Lastly in link checking, check for broken links in all above-mentioned links.

Test forms in all pages:

Forms are an integral part of any web site.
Forms are used to get information from users and to keep up interaction with them.
So what should be checked on these forms?

•First check all the validations on each field.

•Check for the default values of fields.

•Wrong inputs to the fields in the forms.

•Options to create forms if any, form delete, view or modify the forms.
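A sketch of those form checks (field validations, default values, wrong inputs) for a hypothetical sign-up form; field names and rules are invented for illustration:

```python
# Form-testing sketch: validate each field, check default values,
# and exercise wrong inputs, for a hypothetical sign-up form.

DEFAULTS = {"country": "india", "newsletter": False}

def validate_form(fields):
    """Return a list of validation errors for the submitted form."""
    errors = []
    email = fields.get("email")
    if not email or "@" not in email:
        errors.append("email: invalid")
    age = fields.get("age")
    if not isinstance(age, int) or not 0 < age < 130:
        errors.append("age: wrong input")
    return errors

def with_defaults(fields):
    """Fill in default values for fields the user left blank."""
    return {**DEFAULTS, **fields}
```

Valid submissions should pass with no errors; deliberately wrong inputs should each be caught by a specific validation, and blank fields should pick up their defaults.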

Is This Answer Correct ?    2 Yes 0 No

Question { 6611 }

what is meant by positive testing ?


Answer

Positive testing is carried out with a positive perception: in other words, testing carried out on the module with input values that match the customer requirements.

Positive testing is carried out with the idea of checking whether the application works as per the requirements or not. In other words, making sure the system does what it is really intended to do (generally this is a typical developer's attitude).

Is This Answer Correct ?    2 Yes 0 No

Question { TCS, 24846 }

what is meant by negative testing..give me one example?


Answer

Testing the system with invalid data is called negative testing.

EXAMPLE:
Testing a password field where the requirement says a minimum of 8 characters:
testing it with 6 characters is negative testing.
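The same example as a small sketch; the function and constant names are hypothetical:

```python
# Negative-testing sketch: the requirement says a minimum of 8
# characters, so a 6-character input must NOT be accepted.

MIN_PASSWORD_LENGTH = 8

def accept_password(password):
    """System under test: enforce the minimum-length requirement."""
    return len(password) >= MIN_PASSWORD_LENGTH

def negative_test_password():
    """Negative test: a 6-character password must be rejected."""
    return accept_password("abc123") is False
```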

Is This Answer Correct ?    1 Yes 0 No

Question { Kanbay, 8177 }

what is compliance testing


Answer

compliance testing:

The Compliance Testing Program Policies & Procedures provides a well-documented process for testing compliance of implementations of OGC standards.

The purpose of the OGC Compliance & Interoperability Testing & Evaluation (CITE) program, also known as the OGC Compliance Testing Program, is to increase systems interoperability while reducing technology risks.

It accomplishes this by providing a process whereby compliance for OGC standards can be tested.

This program provides a mechanism by which users and buyers of software that implements OGC standards can be certain that the software follows the mandatory rules of implementation as specified in the standard.

Is This Answer Correct ?    1 Yes 0 No

Question { 4716 }

Formal Testing?


Answer

Formal testing is always given importance by a good tester.

Following are the merits of formal testing:

• Metrics help in easy tracking of the project status and also help in presenting statistics to senior management in an organized way.
• Back-tracking can be very easy if every action is tracked appropriately.
• Metrics and reports help in collecting historical data, using which further testing processes can be made more effective.
• Project control, change control and reporting can be easily accomplished without hassles and confusion.
• Software metrics and reports help in both project management and process management.
• Metrics can directly influence both the efficiency and effectiveness of the software.
• Helps in early defect detection and defect removal, thus reducing the cost of defects.
• Assists the managers in effective decision making.
• Metrics also act as a benchmark for estimations and help identify bottlenecks for the testers.
• Manages risk with ease.

Is This Answer Correct ?    1 Yes 0 No

Question { Symphony, 9993 }

What are the different types of testing u r doing in ur
project


Answer

IT IS BASED ON THE PROJECT..

Some newly coined test terms are given below,

Summary of new test terms and box-categories, accumulated here thus far:

Baked Beans (Brown box - see Seat-o'-the-pants)
Blizzard (White box)
Blow-away (Fast box)
Bon-bon (Dark wrapper)
Brown See baked beans
Brush Testing
Concrete (Gray box)
Darwin Candidacy Testing
Discovery (Pallet of boxes)
Duck (Defensive box)
Flagpole (Blue sky box)
Float (Buoyant box)
Fly (Blue sky box)
Galleon (Buoyant box)
Goo (Brown box - see Bon-bon)
Gorilla (Brute box)
Guerilla (Brute box)
Hurricane (Gray box)
Monkey (Swing box)
Paddle (Buoyant box)
Purple Polka Dot Testing (haze box testing)
Seat-o'-the-pants (Brown box - see Baked Beans)
Soot (Black box)
Spin (Defensive box)
Surgical Strike (Sanitary box)
Tornado (Gray box)
Typhoon (Gray box)
Wind tunnel (Sheer box)
Witch (Black box)
X-Testing (Black box)
Yellow Aroma indicates the need for further testing.

Is This Answer Correct ?    1 Yes 1 No

Question { CTS, 17109 }

difference between regression testing and re-testing?


Answer

Regression Test:

Regression testing attempts to verify that modifications have not caused unintended adverse side effects in the unchanged software (regression faults) and that the modified system still meets its requirements.

Re-test:

Whenever a fault is detected and fixed then the software should be re-tested to ensure that the original fault has been successfully removed.

Re-testing and Regression Testing:

It is imperative that when a fault is fixed it is re-tested to ensure the fault has indeed been correctly fixed.

There are many tools used in a test environment today that allow a priority to be assigned to a fault when it is initially logged.

We can use this priority again when it comes to verifying a fix for a fault, particularly when deciding how much time to spend verifying the fix.

For example, if you are verifying that a fault has been fixed in a help file, it would probably have been raised as a low priority fault.

So you can quickly come to the conclusion that it would probably only take a few minutes to actually verify the fault has been fixed.

If, however, a high priority fault was initially raised that wiped all of the customer's stored data, then you would want to make sure that sufficient time was allocated to make absolutely sure that the fault was fixed.

It is important that the possible consequences of the fault not being fixed properly are considered during verification.

Another important factor when it comes to testing is when there is suspicion that the modified software could affect other areas of software functionality.

For example, if there was an original fault of a field on a user input form not accepting data.

Then not only should you focus on re-testing that field, you should also consider checking that other functionality on the form has not been adversely affected.

This is called Regression Testing.

For example; there may be a sub-total box that may use the data in the field in question for its calculation.

That is just one example; the main point is not to focus specifically on the fixed item, but to also consider the effects on related areas.

If you had a complete Test Specification for a software
product, you may decide to completely re-run all of the test cases, but often sufficient time is not available to do this.

So what you can do is cherry-pick relevant test cases that cover all of the main features of the software with a view to prove existing functionality has not been adversely affected.

This would effectively form a Regression Test.

Regression test cases are often combined to form a regression test suite.

This can then be run against any software that has undergone modification, with the aim of providing confidence in the overall state of the software.

Common practice is to automate Regression Tests.
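A cherry-picked, automated regression suite along those lines might be sketched as follows; all functions under test are hypothetical stand-ins (including the sub-total example mentioned earlier):

```python
# Regression-suite sketch: cherry-picked test cases covering the
# main features, re-run after every fix to catch knock-on effects.

def subtotal(quantity, unit_price):
    """Hypothetical feature that consumes the fixed field's data."""
    return quantity * unit_price

def accepts_field_input(text):
    """The field whose fault was fixed: it must now accept data."""
    return bool(text.strip())

REGRESSION_SUITE = [
    ("field accepts data", lambda: accepts_field_input("42") is True),
    ("subtotal still correct", lambda: subtotal(3, 2.5) == 7.5),
    ("empty input still rejected", lambda: accepts_field_input("") is False),
]

def run_regression_suite(suite):
    """Run every case; return the names of any that fail."""
    return [name for name, case in suite if not case()]
```

An empty failure list gives the confidence in the overall state of the software that the suite exists to provide; any name in the list flags a regression.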

To assist you on what to additionally look for when re-testing, it is always a good idea to communicate
with the Developer who created the fix.

They are in a good position to tell you how the fix has been
implemented, and it is much easier to test something when you have an understanding of what changes have been made.

Is This Answer Correct ?    0 Yes 0 No

Question { Belly, 26814 }

What is the first test in software testing process
a)Monkey testing
b)Unit Testing
c)Static analysis
d)None of the above


Answer

Unit testing is the first test in the software testing process.

In unit testing, different modules are tested against the specifications produced during design for the modules.

Unit testing is the first level of dynamic testing and is first the responsibility of the developers and then of the testers.

Unit testing is complete when the expected test results are met or any differences are explainable/acceptable.

In UNIT TESTING, we have to do the following checks:
1. Field level checks.
2. Field level validation
3. User Interface check
4. Functionality check.

1. Field Level Checks:

In Field Level checks, we have to do 7 types of checks.

1. Null characters.
2. Unique characters.
3. Length
4. Number
5. Date
6. Negative values
7. Default values.

2. Field Level Validation:

Here we have to check
1. Date range check.
2. Boundary value check.

In the date range check, we check whether the application accepts dates greater than the system date or not.

In the boundary value check, we check whether a particular field works within its boundaries.
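A boundary value check for a hypothetical field accepting 1-100 might be sketched as follows: test at, just inside, and just outside each boundary:

```python
# Boundary-value check sketch for a field whose valid range is
# 1-100 inclusive (a hypothetical example).

LOW, HIGH = 1, 100

def field_accepts(value):
    """Field under test: accepts values within its boundaries."""
    return LOW <= value <= HIGH

def boundary_values(low, high):
    """Classic boundary values: low-1, low, low+1, high-1, high, high+1."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

# Map each boundary value to whether the field accepts it.
results = {v: field_accepts(v) for v in boundary_values(LOW, HIGH)}
```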

3. User interface Check:

In User Interface check, we have to check

1. Short cut keys
2. Help check
3. Tab movement check
4. Arrow key check
5. Message box check.
6. Readability of controls
7. Tool tip validations
8. Consistency with the user interface across the product.

4. Functionality checks:

Here we have to check

1. Screen functionality.
2. Functionality of buttons, computation, automatic generated results.
3. Field dependencies.
4. Functionality of buttons.

In the functionality check, we check whether we are able to ADD, MODIFY, DELETE, VIEW, SAVE, EXIT and perform the other main functions on a screen.

Here we check whether a combo box's drop-down menu appears or not.

While clicking the 'Save' button after entering details, we check whether it saves or not.

Clicking the 'Exit' button should close the current window.

Automatic result generation: for example, when entering a date of birth, the system should generate the derived result (such as the age) automatically.

Is This Answer Correct ?    1 Yes 1 No
