Here are some suggestions:
1. Engineer Web experience quality into your product.
Don't just make changes on the fly like we all did in the
past. (If you've ever built a Web application in Perl, you
know what I'm talking about.) Take the entire experience
into account up front, from the moment you conceive the
application. Ensure that your release criteria include
specific performance and reliability metrics that you can
measure often during and after development.
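For example, a release-criteria check might pin page response times to explicit budgets. Here's a minimal sketch in Python using the third-party requests library; the URLs and thresholds are placeholder values, not recommendations:

```python
# Minimal sketch of a release-criteria check: key pages are fetched
# and their response times compared against explicit budgets.
import requests

PERFORMANCE_BUDGETS = {                     # seconds; illustrative only
    "https://www.example.com/": 1.0,        # placeholder URL
    "https://www.example.com/login": 1.5,   # placeholder URL
}

def check_budgets():
    failures = []
    for url, budget in PERFORMANCE_BUDGETS.items():
        response = requests.get(url, timeout=10)
        elapsed = response.elapsed.total_seconds()
        if response.status_code != 200 or elapsed > budget:
            failures.append((url, response.status_code, elapsed))
    return failures

if __name__ == "__main__":
    for url, status, elapsed in check_budgets():
        print(f"FAIL {url}: status={status}, {elapsed:.2f}s")
```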
2. Know (and manage) what feeds into your customer's
experience. Many organizations understand the concept of
third-party Web services but still test only the content
they themselves serve up. Keep tabs on every factor that
affects your customers' experience, including third-party
data and services. And remember: just because a third-party
Web service works well for some of your customers doesn't
mean it will for all of them.
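As an illustration, a recurring health check over your third-party dependencies could look like the following sketch (the endpoint URLs are hypothetical examples):

```python
# Sketch of a health check for third-party services your pages depend
# on, recording status and latency, or the error if the call fails.
import requests

THIRD_PARTY_ENDPOINTS = [
    "https://maps.partner.example/api/status",   # hypothetical endpoint
    "https://ads.partner.example/ping",          # hypothetical endpoint
]

def check_third_parties(timeout=5.0):
    results = {}
    for url in THIRD_PARTY_ENDPOINTS:
        try:
            r = requests.get(url, timeout=timeout)
            results[url] = (r.status_code, r.elapsed.total_seconds())
        except requests.RequestException as exc:
            results[url] = ("error", str(exc))
    return results
```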
3. Know your customers, their profiles and their usage
patterns. What kind of browsers do they use? What kind of
machines? How do they connect to the Internet? Where in the
world are they located? What are their usage patterns
(e.g., days, nights, weekends or certain paths through the
application)? All of those factors affect the customer
experience, so make sure your application will work well for
your customers. And just because your third parties deliver
well in one geographical area doesn't mean they will in all
of them. You can't assume your third parties are as
consistent as you are.
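One rough way to learn these profiles is to mine your own access logs. The sketch below assumes a server log in the common "combined" format; the log path is a hypothetical example:

```python
# Rough sketch: summarize peak hours and browser user agents from an
# access log in combined format, to ground test profiles in real usage.
import re
from collections import Counter

LOG_PATH = "/var/log/apache2/access.log"        # hypothetical path
TIME_RE = re.compile(r'\[(\d{2})/\w{3}/\d{4}:(\d{2})')  # day, hour

def summarize(path=LOG_PATH):
    hours, agents = Counter(), Counter()
    with open(path) as log:
        for line in log:
            m = TIME_RE.search(line)
            if m:
                hours[m.group(2)] += 1
            # The user agent is the last quoted field in combined format.
            parts = line.rsplit('"', 2)
            if len(parts) == 3:
                agents[parts[1]] += 1
    return hours.most_common(5), agents.most_common(5)
```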
4. Create a browser compatibility lab consisting of all
the possible browser/operating-system combinations users
could have, including cell phones (I plan on being one of
the first in line for an iPhone) and the BlackBerry. The
open source Selenium testing tool is a good way to automate
tests on any browser/OS combination. Selenium Remote Control
lets developers code tests in their favorite language and
operate these browser/OS combinations remotely. An
alternative tool to Selenium is Watir. Both are hosted at
OpenQA.org. And Firebug, an open source Firefox extension,
is the Swiss Army Knife for Web 2.0 developers and QA
engineers alike.
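As a rough illustration of automating such a lab, here is a minimal cross-browser smoke test written against Selenium's Python WebDriver bindings (a modern successor to the Selenium Remote Control API mentioned above); the URL and checks are placeholders:

```python
# Minimal smoke test run against each locally installed browser.
from selenium import webdriver
from selenium.webdriver.common.by import By

def smoke_test(driver_factory):
    driver = driver_factory()
    try:
        driver.get("https://www.example.com/")   # placeholder URL
        assert "Example" in driver.title
        driver.find_element(By.TAG_NAME, "h1")   # page rendered a heading
    finally:
        driver.quit()

# The same test exercises every browser in the lab.
for factory in (webdriver.Firefox, webdriver.Chrome):
    smoke_test(factory)
```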
5. Capture screenshots and movies of actual tests on
those platforms so you can gain real insight into any
problems, their impact and how to fix them. This is an
emerging functionality available in very few commercial
testing tools, but it can really help when trying to
determine why an automated test failed.
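If your tooling doesn't do this for you, you can approximate it yourself. The sketch below wraps a test step and saves a screenshot on failure, assuming a Selenium driver session like the one above; the naming scheme is illustrative:

```python
# Sketch: capture a screenshot whenever a test step raises, so the
# failure can be diagnosed after the run.
import datetime

def run_with_screenshot(driver, action, name):
    try:
        action()
    except Exception:
        stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
        driver.save_screenshot(f"failure-{name}-{stamp}.png")
        raise
```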
6. Capture logged activity in the browser during
automated tests and during production. There's too much
application logic in the UI to ignore. Firebug provides this
support for Firefox, and Safari has built-in support.
Consider Firebug Lite for IE and other browsers. The hardest
trick is transferring logs from the browser to a persistent
storage, but the payoff is well worth it.
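One sketch of that transfer, using Chrome's logging capability in the Selenium Python bindings (support varies by browser and driver version):

```python
# Sketch: pull the browser's console log after a test and persist it
# to disk alongside the other test artifacts.
from selenium import webdriver

options = webdriver.ChromeOptions()
options.set_capability("goog:loggingPrefs", {"browser": "ALL"})
driver = webdriver.Chrome(options=options)
try:
    driver.get("https://www.example.com/")   # placeholder URL
    with open("browser-console.log", "w") as out:
        for entry in driver.get_log("browser"):
            out.write(f"{entry['timestamp']} {entry['level']} "
                      f"{entry['message']}\n")
finally:
    driver.quit()
```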
7. Understand the connection between performance and
perception. If a user's browser window is full but parts of
the page that sit off-screen haven't loaded yet, then as far
as the user can perceive, the page is complete. Consequently, testers
need to go beyond HTTP response time data, which can be
simplistic by itself, and capture information about
individual JavaScript functions, Ajax calls, objects (e.g.,
images and cascading style sheets) and HTML-specific events
to accurately measure perceived performance. This is what
really matters to users.
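One way to get closer to perceived performance is the browser's Navigation Timing API, read through Selenium. A minimal sketch, assuming an existing driver session from the examples above:

```python
# Sketch: read Navigation Timing data from the browser to measure
# milestones that track perception better than raw HTTP response time.
def timing_metrics(driver):
    timing = driver.execute_script(
        "return window.performance.timing.toJSON()")
    start = timing["navigationStart"]
    return {
        "first_byte_ms": timing["responseStart"] - start,
        "dom_ready_ms": timing["domContentLoadedEventEnd"] - start,
        "full_load_ms": timing["loadEventEnd"] - start,
    }
```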
8. Incorporate the browser into your continuous
integration (CI) processes. Most CI implementations test
server code but don't account for the
increasing amount of activity occurring in the browser. Even
though incorporating the browser takes a bit more time and
resources, it ensures you test the real end-user
experience, which is critical these days.
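A sketch of a browser-level check that can run inside a CI job using headless Chrome, assuming Chrome is installed on the build machine (flags and URL are illustrative):

```python
# Sketch: a browser test suitable for a CI box with no display.
from selenium import webdriver
from selenium.webdriver.common.by import By

def ci_browser_test():
    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")   # no display needed on CI
    options.add_argument("--no-sandbox")     # common in CI containers
    driver = webdriver.Chrome(options=options)
    try:
        driver.get("https://www.example.com/")   # placeholder URL
        assert driver.find_element(By.TAG_NAME, "body").text
    finally:
        driver.quit()
```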
9. Consider "on-demand" testing. Testing on multiple real
browser/OS combinations and capturing gigabytes of
performance data require much more testing infrastructure
than most organizations want to invest in. Using on-demand
testing (Software as a Service) lets you leverage someone
else's testing horsepower, architecture and setup
investment. Then you can just rent a browser lab as needed.
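With Selenium, switching to a rented lab can be as small as pointing the same tests at a remote grid. A sketch, with a placeholder grid URL standing in for whatever your provider gives you:

```python
# Sketch: run the same tests against a remote, on-demand browser grid
# instead of locally installed browsers.
from selenium import webdriver

options = webdriver.FirefoxOptions()
driver = webdriver.Remote(
    command_executor="http://grid.example.com:4444/wd/hub",  # placeholder
    options=options,
)
try:
    driver.get("https://www.example.com/")   # placeholder URL
    print(driver.title)
finally:
    driver.quit()
```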
10. Refactor tests as Web applications evolve. Ajax has
changed how Web applications are built, and in turn it has
made their automated tests more tightly coupled to the code.
The tighter the coupling, the more attention data
consistency and test fixtures require. In the old
days, defining an automated test on a Web application was
easy: every step in a use case (Joe Surfer visits
www.xyz.com and clicks on the log-in link, etc.)
corresponded to a new page view. With Ajax, things are more
complicated, so refactoring is critical.
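Concretely, an Ajax-aware test waits for the specific DOM change it expects instead of a page reload. A sketch using Selenium's explicit waits; all locators are hypothetical:

```python
# Sketch: with Ajax, a click no longer implies a new page view, so the
# test waits for the element the asynchronous update should produce.
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def login_via_ajax(driver, user, password):
    driver.find_element(By.ID, "username").send_keys(user)
    driver.find_element(By.ID, "password").send_keys(password)
    driver.find_element(By.ID, "login-button").click()
    # Wait for the Ajax-updated element rather than a page reload.
    WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.ID, "account-summary"))
    )
```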