Wednesday, March 31, 2010

The 'estimates meeting' and early communication between Dev and QA

Whether doing Release planning or Iteration planning, you are entirely reliant on the estimates provided by Dev and QA. (Release planning refers to which Features will go into a given release, while Iteration planning refers to which User Stories will be coded/tested in each iteration.)

Your development team may be good at designing, coding, and testing software, but this doesn't necessarily mean they will be good at providing accurate estimates for these tasks. This is a hard thing to train for-- estimating is a soft skill that typically comes with experience in the field.

Like many other challenges in agile development, the art of estimation all comes down to COMMUNICATION.

I've been on projects where estimates were done separately, with little or no communication between teams. QA had no insight into how Dev arrived at their figures, and Dev had no insight into how QA arrived at theirs. The lack of communication showed in the discrepancies between our numbers.

When Dev and QA don't work together on estimates, unstated assumptions become the norm and things get missed.

We realized this and started to do estimates together. The 'estimates meeting' turned into something much larger than estimates alone-- the conversation forced us to dissect the design and discuss the possible ways of implementing the feature. And, of course, our estimates became more accurate.

The estimates meeting became our first in-depth discussion of each new Feature. It flushed out issues that would otherwise have gone unnoticed until much further down the development process.

Tuesday, March 16, 2010

The challenge of automating dynamic web applications

Consider automating something like Search Results on a web page. As data changes, the order of items in the result list will likely change with it. This can pose a great challenge for your automation engineer in terms of creating robust and maintainable tests: it can cause your test scripts to "think" they've found the right object, when in fact they are working with an impostor.
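To make this concrete, here is a minimal sketch of the pitfall-- using Selenium WebDriver as an example tool, with an invented URL and markup-- where a position-based lookup silently grabs the wrong result once the ordering shifts:

```python
# Sketch: a brittle, position-based locator. Selenium WebDriver and the
# markup are assumptions for illustration; the pitfall applies to most tools.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get("http://example.com/search?q=widgets")

# Brittle: this "works" only as long as the result we care about happens
# to be third in the list. When the data changes and the ordering shifts,
# the lookup still succeeds -- it just returns an impostor.
third_result = driver.find_elements(By.CSS_SELECTOR, "div.result")[2]
third_result.click()
```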

Pages like Search Results are becoming more and more common in modern web applications. Elements on the page are dynamic and move around in unpredictable ways-- especially in applications that employ social networking or Web 2.0 types of functionality.

The area of test automation where you identify elements on the page is called OBJECT MAPPING.

One of the biggest differences between automation tools is how they handle object mapping. Some are "smarter" than others and can use multiple properties to zero in on the correct object. But no out-of-the-box solution is a magic bullet: the automation engineer will need to configure (or develop) his own solutions in order to successfully automate these types of pages.
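For example-- continuing the hypothetical Selenium session above, with invented markup-- a locator that combines several properties is much harder to fool than a class name or an index alone:

```python
# Sketch: narrowing in on the right result by matching several properties
# at once. The markup and link text are invented for illustration.
from selenium.webdriver.common.by import By

# An impostor would have to share the tag, the class, AND the link text
# to fool this locator.
target = driver.find_element(
    By.XPATH,
    "//div[@class='result'][.//a[text()='Acme Widget Co.']]",
)
target.click()
```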

If you have a cooperative development team, see if they are willing to assign static HTML IDs to your moving targets-- it can greatly simplify this task. Make your case to development as to how automation will benefit them.
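If they agree, the payoff looks something like this (the id value here is invented):

```python
# Sketch: with a static HTML ID in place (say, id="result-acme-widget"),
# the locator no longer cares where the item sits in the result list.
from selenium.webdriver.common.by import By

target = driver.find_element(By.ID, "result-acme-widget")
target.click()
```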

Inevitably, there will be some pages where it falls on the automation engineer to come up with his own solution. The fail-safe here is to make sure your automation tool can access the page source (the HTML), so that you can parse it and locate your object. As long as you can access the HTML, the control you are looking for is not out of reach. Know your regular expressions.
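As a rough sketch-- again assuming the Selenium session from the earlier examples, with an invented pattern and markup-- that fallback might look like this:

```python
# Sketch: falling back to the raw page source when object mapping fails.
# A regex is a pragmatic fallback here, not a general-purpose HTML parser.
import re

html = driver.page_source  # most tools expose the raw HTML in some form

# Pull the href out of the link whose text identifies our target.
match = re.search(
    r'<a[^>]*href="([^"]+)"[^>]*>Acme Widget Co\.</a>',
    html,
)
if match:
    driver.get(match.group(1))  # navigate straight to the located link
else:
    raise AssertionError("target result not found in page source")
```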

Wednesday, March 10, 2010

PRINCIPLE OF QA #99: Separate your stylization tests from your functional tests

There are many principles in software development that can (and should) be applied to software testing as well. One of those principles is called the Separation of Concerns.

The basic premise of SoC is to divide parts of an application out into their own objects (or layers, if you are thinking at the architectural level), and to keep conceptually similar parts together. A simple example of this is CSS. When you use CSS, you have separated the style and formatting from the document content-- the HTML. In this case, the stylization is considered one concern, while the document content is another.

So how does this apply to QA? When decomposing requirements into test cases, you often enter a gray area where you need to define where one test case ends and the next begins. Sometimes this dividing line is clear cut and obvious, but other times it is far less so.

A guiding rule that can help you draw this line is to separate STYLIZATION from FUNCTIONALITY.

Create a set of test cases that are concerned only with the style and formatting of the application. Think of these as your CSS test cases. Keep these tests separate from those that cover the actual behavior, or functionality, of your application.
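As a concrete sketch-- assuming Python, pytest, and Selenium, none of which this principle actually depends on-- markers are one simple way to enforce the split; the page, element IDs, and URL below are invented:

```python
# Sketch: separating stylization tests from functional tests with pytest
# markers. Everything about the application under test is hypothetical.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

@pytest.fixture
def driver():
    d = webdriver.Firefox()
    d.get("http://example.com/login")  # hypothetical app under test
    yield d
    d.quit()

@pytest.mark.style
def test_heading_formatting(driver):
    # Style concern only: how the page looks, not what it does.
    heading = driver.find_element(By.TAG_NAME, "h1")
    assert heading.value_of_css_property("font-weight") in ("700", "bold")

@pytest.mark.functional
def test_bad_password_is_rejected(driver):
    # Functional concern only: the behavior, regardless of styling.
    driver.find_element(By.ID, "email").send_keys("user@example.com")
    driver.find_element(By.ID, "password").send_keys("wrong-password")
    driver.find_element(By.ID, "submit").click()
    error = driver.find_element(By.ID, "error-banner")
    assert "Invalid credentials" in error.text
```

With that split in place, `pytest -m functional` runs the automatable behavior checks, while `pytest -m style` collects the ones better paired with a human eye.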

Doing this has several advantages:

1. Mindset
It allows the test case designer and test executor to concentrate on one thing at a time.

2. Maintenance
Stylization and functionality tend to change at different times, so separating them makes test case maintenance easier.

3. Automation
If you are automating your tests, you are mainly automating your functional tests. While some aspects of stylization can be automated, it is always better to execute these with a human eye.

4. Agility
Stylization is often not ready to be tested at the same time as the functionality, so keeping the tests separate makes it easier to allocate them for execution (this is especially relevant for Agile testers).

Monday, February 8, 2010

Arriving at User Stories in a roundabout way

I was once a QA Lead on a project that was transitioning from Waterfall to Agile. Instead of a big handoff between QA and Dev every 2 weeks, we were trying to get to the point where QA was testing right alongside Dev (testing things as soon as they were ready on a daily basis).

The biggest challenge with moving to Agile was not a technical one. There were really no technical differences between the two (except maybe a bigger emphasis on Unit and Automated tests). No, the biggest challenge with moving to Agile was COMMUNICATION.

The problem was that the handoffs had become so quick, and the cycles so short, that it was hard to accurately describe what was done (ready to be tested) and what was not done.

Dev would tell QA, “This part is done and ready, except for this part over here and that part over there.” And QA would find bugs, but we’d also find ourselves questioning whether the code we just tested was actually ready to be tested.

We realized this and tried to remedy it. QA and Dev began to meet every morning. We moved our seats around so that everyone was sitting in the same area. But still, something was missing.

It all came down to the GRANULARITY OF REQUIREMENTS.

With Waterfall, our Requirements Documents served their purpose well. Dev and QA could practically speak in terms of which pages of the Requirements Doc would be completed in a given iteration.

But with Agile, the Requirements Document was not granular enough. The identifiable chunks of functionality were too large to be able to use as conversational pieces. We needed something that would separate out a smaller unit of functionality, something that would be fast to write up and easy to pass back and forth between teams.

This is how we arrived at User Stories. User Stories turned out to be the perfect utility for the task at hand. They were just the right size to encapsulate a small unit of functionality, and they allowed us to accurately describe what phase of the process that unit of functionality was in.

We found User Stories in a roundabout way. Typically a development team chooses to go with User Stories because they find it is a better way to record requirements. We landed on them out of a need for a communication device.

More on User Stories to come.