The minimum set of tools you'll need to do effective automated testing consists of Rational TestManager and Rational Robot. If this is your first time, start with these and become thoroughly familiar with them before you think about learning to use other tools. Some people advocate learning all the automated testing tools at once. That's fine if you have weeks to spend training before you need to start testing, but I've found that more often than not, testers are expected to jump in and are given deadlines as soon as they're given the software.
Go through the tutorials that Rational Software provides. While the tutorials aren't the definitive guides as far as training is concerned, they do get you familiar with the software as well as the Rational terminology. Both of these are important. As you'll soon find out, TestManager and Robot are very large and complex tools with feature upon feature. The tutorials will familiarize you with those features you'll be using most.
If you feel you still need more familiarity with the tools after you've completed the tutorials, attend a Rational University class or hire a training consultant to come in and spend some time with you. Having a basic understanding of the tools is essential. Make sure your whole team has had some form of training or some reasonable amount of time playing with the tools before you try to do any real work. (I find that programmers pick up the tools very quickly, while nonprogrammers struggle with some of the programming concepts and need more time.)
Have at Least One Programmer on Your Team
Rule number one for efficient automated testing is to have at least one real programmer in your testing-automation group. You'll soon find out that automated testing is code development. While it's not Java or C++, you're still building a system of scripts, data files, and libraries. Robot's record-and-playback feature offers quick solutions for the most common tasks and controls, but for an advanced level of testing or for any custom controls, you'll need to be able to write your own code in SQA Basic. That means employing programmers, not manual testers who learn to code as they go.
You'll also find it useful to develop standards for your automated-testing team. This is just as important in testing as it is in conventional software development. Your test system will develop more rapidly and will be easier to maintain if you establish and enforce naming standards, coding standards, environment standards, and procedures for error and defect tracking. Having these standards documented will also allow people new to the project team to come up to speed faster.
Naming standards for scripts, test logs, directory structures, datapools, and verification points help to keep everyone on the same page. On two of my last three projects, we maintained more than 1,000 scripts and 5,000 datapools for each project. A good naming standard was the only thing that made that possible.
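As an illustration only (the project code, prefixes, and names here are invented, not a Rational convention), a naming standard might encode project, functional area, and purpose into every artifact name:

```
Scripts:              ACC_Login_Smoke_01, ACC_Orders_Regress_03
Datapools:            dp_ACC_Login_ValidUsers, dp_ACC_Orders_Products
Verification points:  vp_ACC_Login_OKExists
Test logs:            log_<build>_<script>, e.g. log_2_1_ACC_Login_Smoke_01
```

Whatever scheme you choose matters less than choosing one: with a consistent prefix and area code, anyone can find every artifact for a given part of the application with a single search.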
Coding standards should also be developed and enforced. For most companies this is easy -- you can just steal the standards used by the development staff. If you don't have this advantage, go online and find some. They're out there, and you can find a reasonable set of standards in about 15 minutes. Once you've used them for a while, you can customize them to fit your needs and the needs of your company.
Environment standards should ensure that the computers you use all have the same operating system, RAM, hard drive space, and installed software configurations. The only difference should be which Rational Suite you have installed on them. I've found that many of my hard-to-find and expensive-to-fix bugs have been due to the fact that a script was developed on a computer that had more resources than the one on which it was executed.
Procedures for error and defect tracking should describe how to log errors in scripts, submit defects via ClearQuest, code workarounds into scripts, and remove those workarounds once a bug is resolved. I didn't figure this out until a few projects ago. My team had a lot of problems communicating when it came to finding and reporting bugs. One of us would log a bug in ClearQuest and develop a workaround in the script without communicating to the rest of the team what we'd done. Inevitably, someone else would test the bug when it was fixed, mark it resolved, and never remove the workaround in the code. Sometimes things would work themselves out, and sometimes they wouldn't. Almost always this lack of communication caused confusion and rework, and cost the team time. After an audit, we found that 10% of our scripts tested absolutely nothing because they were littered with workarounds that were never removed.
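One lightweight convention that addresses this (a sketch; the defect number, window caption, and keystrokes are made up for illustration) is to bracket every workaround in the script with comments naming the ClearQuest record, so an audit or a plain text search can find leftovers after the bug is resolved:

```basic
' --- WORKAROUND: ClearQuest defect 00123 ---
' The Save button is disabled when the form first loads.
' Remove this entire block when 00123 is verified as Resolved.
Window SetContext, "Caption=Account Entry", ""
InputKeys "{TAB}"    ' tabbing off the first field enables the button
' --- END WORKAROUND 00123 ---
```

Searching all scripts for "WORKAROUND:" then gives you a complete list of live workarounds to check against ClearQuest.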
Document your team's standards, and be sure your team knows the standards and follows them.
Figure Out What You're Testing and Keep It Simple
You know what the application-under-test does, how it looks, and how to use it, but do you know what you want your automated tests to test for? The next step is to figure out and document exactly what you're testing. (Hint: With TestManager and Robot, you'll be testing either simple functionality or performance. You can test other things with other tools, but we're starting with the basics here.)
Figuring out what you're testing and keeping it simple is the most important step as far as political success or failure goes. A common mistake when automating for the first time is biting off more than you can chew and consequently missing deadlines or having to work unrealistic hours in order to meet them. Either of these situations will demotivate your testing team and make them look bad in the eyes of the rest of the development team.
Just like in conventional software development, success in testing depends on developing good requirements -- that is, arriving at reasonable goals for what you plan to test. Start small and keep things simple for your first-time automating. In future implementations you may want to go crazy and automate everything, but by starting small now you'll minimize possible rework later when your technical corridor widens as you add more Rational tools to your arsenal. Also, prepare the team and management for the fact that the team may not meet its deadline, and communicate that the budget should still include manual testers to cover whatever you're automating. After your team has had some successes, this can be relaxed.
When deciding what to automate for your first time, start with small milestones.
- If you're testing a GUI or Web application, start with testing simple functionality. This could include verifying that all the correct controls exist on the screen, the proper fields enable/disable when actions are taken, and such.
- If you're automating performance testing, start with just one virtual user and set your goal at a low number (no more than twenty). When you get one virtual user to work, double it and get two to work. Keep doubling until you reach your goal, because each increment can present a new set of challenges.
Whatever you choose to test, make sure that it doesn't span more than one part of the application-under-test or more than one or two Web pages. Ideally, you should be able to use Robot's record-and-playback feature to perform this basic testing. The record-and-playback script will then become the baseline moving forward.
Robot and TestManager also have a lot of more advanced features such as the ability to add delays and timers, the ability to distribute testing on different machines, and the ability to create graphs for just about everything. Stay away from these as much as possible at first, because they'll just confuse you. Only after your team has had some successes should you explore these useful and often necessary features.
Once you've decided what you're testing, you should baseline a script as mentioned above, using Robot's record-and-playback feature to do as much of it as you can. This is the feature I use the most, at the start of nearly every new project. It's the fastest way to baseline what I'm doing, and it gives me most of the information I'll need about the application I'm testing.
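For illustration, a recorded baseline script typically has a shape like the following. The window captions, recognition strings, and verification point name here are hypothetical, and the exact commands Robot emits depend on your application's controls and your Robot version:

```basic
Sub Main
    Dim Result As Integer
    ' Shape of a script Robot might record against a login dialog
    Window SetContext, "Caption=Login", ""
    InputKeys "testuser{TAB}secret"
    PushButton Click, "Text=OK"
    ' Recorded verification point: did the main window appear?
    Result = WindowVP (Exists, "Caption=Main Menu", "VP=MainWindowExists")
End Sub
```

Reading the recorded script this way, before you change anything, also teaches you how Robot identifies each control in your application, which you'll need later when you write code by hand.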
Using the record-and-playback feature may be tricky, though, depending upon the language in which your application-under-test was written. A full list of the programming languages it supports can be found in the user documentation for the test suite you're using. I've worked with the record-and-playback feature in Java and C++ and find that it works well for most standard objects, and it does a good job on Web pages, but it may have problems when processing custom controls. I don't know of any software groups that use only standard controls, so you may run into this kind of problem, too.
Remember that developer you were sure to include in your group? Now is when she or he can be most useful. The developer needs to interface with your development team and, if necessary, Rational Technical Support to work out a solution to scripting your custom controls. More often than not, it will simply mean adding a property along with custom SQA Basic code to one of the libraries in your Rational project.
Look for Ways to Modularize Your Script
Now that you have your baseline script, grab a good software architect (or your team of testers and a large whiteboard). Start looking through the code in the script for repetitive SQA function calls, sets of function calls, or other common actions. What you're doing is looking for ways you can modularize your script.
Ideally, you want to optimize your script so that maintenance is as easy as possible. I've found that I've never regretted spending too much time developing a powerful and robust script, and I've often kicked myself for taking shortcuts in development. A script will cost you more to maintain than it will to create, unless you develop the script just as you would the software it's testing. Do it right the first time and reap the rewards in all of the following iterations of the project.
After you've planned out what modularization you can do, implement it using the SQA Basic libraries. Create as many different libraries as necessary. More than likely, you'll carry these over to following projects, and they'll evolve and change as you do this. I always find it helpful to wrap as many SQA Basic functions as I reasonably can. (To wrap an SQA Basic function is to create a library function that calls the SQA Basic function.) This comes in handy when you need to work around a bug later on down the road.
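A minimal sketch of a wrapped call might look like this. The library name, sub name, and logging are my own invention for illustration; check the SQA Basic syntax in your Robot version's reference:

```basic
' In a shared library (.sbl) -- a hypothetical wrapper around
' Robot's PushButton command. If a defect later forces a delay or
' a workaround before every click, you change this one sub instead
' of editing every script that clicks a button.
Sub LibClickButton (ButtonRec As String)
    SQAConsoleWrite "Clicking: " & ButtonRec   ' simple trace logging
    PushButton Click, ButtonRec
End Sub
```

Scripts then call `LibClickButton "Text=OK"` rather than calling `PushButton` directly, so every click in the test system goes through one maintainable point.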
Document why you designed things the way you did. Document what each library does, and what each function in it does. All of this documentation is useful as training material or for future reference, and it helps you keep track of lessons learned. I document as much as time allows. Sometimes I get it all, and sometimes I can't document anything. I've never regretted documenting any information, but on more than a few occasions I have regretted not having any clue about how something worked or why I made a certain decision on a project I'd been away from for a couple of months. Sometimes documentation is the only thing that can save project scripts that no one has worked on in a while.
Last but not least, use datapools in your testing. Effective and cost-efficient automated testing is data-driven. Keith Zambelich's whitepaper "Totally Data-Driven Automated Testing" is a must-read for anyone doing automated testing. Data-driven testing simply means that your test cases and test scripts are built around the data that will be entered into the application-under-test at runtime. That data is stored by some method and can be accessed by some key used in your scripts.
What this means in terms of the Rational tools is that you'll create datapools using TestManager (be sure you learn how to do this when you go through the tutorial or when you attend a Rational seminar) and your Robot scripts will then contain links to these datapools. At runtime your scripts, using keys designated when you created the datapools, will access the datapools and populate the application-under-test using the data found there.
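In an SQA Basic script, the runtime access looks roughly like this. This is a sketch only: the datapool name and column are invented, and you should confirm the exact SQADatapool* signatures in the SQA Basic language reference for your Robot version:

```basic
Dim dp As Long
Dim custName As String
' Open the datapool created in TestManager, fetch the next row,
' and read a column value into a variable for data entry.
dp = SQADatapoolOpen ("dp_ACC_Customers")
Call SQADatapoolFetch (dp)
Call SQADatapoolValue (dp, 1, custName)   ' column 1: customer name
InputKeys custName                        ' type it into the application
Call SQADatapoolClose (dp)
```

The script itself never changes when you add test cases; you just add rows to the datapool in TestManager.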
This method offers the greatest flexibility when it comes to developing workarounds for bugs and performing maintenance, and it allows for the fastest development of large sets of test cases. After your first few projects, you may choose to use some method other than datapools, such as Excel or a database. When you're starting out, though, TestManager's datapools are the easiest and fastest tool you'll have available. They're simple to use and understand, and they give you the power to create data-driven tests right out of the box.
Even if you follow all the steps above, you'll still struggle the first time you attempt automated testing. Just remember to follow this road map and you should survive:
- Only use TestManager and Robot.
- Have at least one real programmer in your testing-automation group.
- Develop standards for your team.
- Figure out and document what you're testing, and keep it simple.
- Use Robot's record-and-playback feature to baseline your scripts.
- Modularize and build reusability into your scripts. Write wrappers around most functions and put them in libraries. Call wrapped functions whenever possible.
- Document everything you're doing to the greatest detail as time allows.
- Use a data-driven testing technique (a.k.a. datapools).
Remember, the way you automate your testing will change as you include more of the Rational tools, get more experience with them, and read about more complex and innovative ways of using them. TestManager and Robot should be your biggest investment if you develop traditional desktop software. If you develop real-time or embedded applications, you'll quickly move away from these tools. If you do more performance testing or Web testing, your focus will shift as you include the other tools in your Rational Suite. But regardless of what kind of automated testing you do, TestManager will likely be included, and Robot is useful more often than not.
- "Totally Data-Driven Automated Testing" by Keith Zambelich (Automated Testing Specialists Web site)