If you read part 1, you may be wondering how my automated check performed…
The programmer deployed the seeded bug and I’m happy to report that my automated check found it in 28 seconds!
Afterwards, he seeded two additional bugs, and the automated check found those as well. I had to temporarily modify the check to ignore the first bug in order to find the second, because the check stops as soon as it finds one problem. I could tweak the code to collect problems and keep checking, but I prefer the current design.
Here is the high-level, generic design of the check:
Build the golden masters:
- Make scalable checks - Before test execution, build as many golden masters as your coverage goals require. This is a one-time task (until the golden masters need to be updated to reflect expected changes).
- Bypass the GUI when possible - Each of my golden masters consists of the response XML from a web service call, saved to a file. Each XML response has over half a million nodes, which are mapped to a complex GUI. My automated check bypasses the GUI entirely. GUI automation could never have found the seeded bug above in 28 seconds; my product-under-test takes about 1.5 minutes just to log in and navigate to the module being tested, and waiting for the GUI to refresh after the countless service calls made by the automated check would have taken hours.
- Golden masters must be golden! Use a known-good source for the service call. I used Production because my downstream environments are populated with data restored from Production. You could use a test environment, as long as it is in a known good state.
- Use static data - Build the golden masters using service request parameters that return a static response. In other words, when I call the service in the future, I want the same data returned. I used service request parameters that pull historical data, because I expect it to be the same next week, next month, and next year.
- Automate golden master building - I wrote a utility method to build my golden masters. It largely reuses code from the test method, which builds the new objects to compare against the golden masters (see the sketch after this list).
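To make this concrete, here is a minimal sketch of a golden master builder. It is in Python purely for illustration (the original check was not necessarily written in Python), and the service URL, request parameters, scenario names, and file layout are all hypothetical stand-ins for the real ones.

```python
import pathlib

import requests  # assumed HTTP client; the real check may call the service another way

# Hypothetical known-good source (production) and hypothetical request parameters,
# chosen to return static, historical data so the same call returns the same XML
# next week, next month, and next year.
SERVICE_URL = "https://prod.example.com/account-service"
SCENARIOS = {
    "account_history_2013": {"accountId": "12345", "year": "2013"},
    "orders_closed_2012": {"accountId": "12345", "year": "2012"},
    # ...one entry per golden master / coverage goal
}

GOLDEN_MASTER_DIR = pathlib.Path("golden_masters")


def fetch_response_xml(params: dict) -> str:
    """Call the web service directly (bypassing the GUI) and return the raw XML."""
    response = requests.get(SERVICE_URL, params=params, timeout=60)
    response.raise_for_status()
    return response.text


def build_golden_masters() -> None:
    """One-time utility: save each service response XML as a golden master file."""
    GOLDEN_MASTER_DIR.mkdir(exist_ok=True)
    for name, params in SCENARIOS.items():
        xml = fetch_response_xml(params)
        (GOLDEN_MASTER_DIR / f"{name}.xml").write_text(xml, encoding="utf-8")


if __name__ == "__main__":
    build_golden_masters()
```

Rebuilding the golden masters is then a single, deliberate action, run only when expected changes make the archived files obsolete.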
Do some testing:
- Compare - This is the test method. It calls the code-under-test using the same service request parameters used to build the golden masters. The XML service response from the code-under-test is then compared, line by line, to the archived golden master (a rough sketch follows this list).
- Ignore expected changes - In my case, there are some XML nodes the check ignores. These are nodes with values I expect to differ; for example, the CreatedDate node of the service response object will always differ from that of the golden master.
- Report - If any non-ignored XML line differs, it’s probably a bug: fail the automated check, report the differences with line number and file references (see below), and investigate.
- Write files - For my purposes, I have 11 different golden masters (to compare with 11 distinct service response objects). The automated check loops through all 11 golden master scenarios, writing each service response XML to a file. The automated check doesn’t use the files; they are there for me. This gives me the option to manually compare suspect new files to golden masters with a diff tool, an effective way of investigating bugs and spotting patterns.
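Here is a correspondingly rough sketch of the compare/report step, assuming the builder sketch above is saved as build_golden_masters.py. The ignored node names, directories, and failure messages are illustrative. The sketch fails on the first non-ignored difference, matching the stop-on-first-problem design described earlier, and writes each new response to a file purely for manual diff-tool investigation.

```python
import pathlib

from build_golden_masters import (  # the hypothetical builder sketch above
    GOLDEN_MASTER_DIR,
    SCENARIOS,
    fetch_response_xml,
)

IGNORED_NODES = ("CreatedDate",)  # nodes whose values are expected to differ
NEW_RESPONSE_DIR = pathlib.Path("new_responses")  # written only for manual diffing


def is_ignored(line: str) -> bool:
    """True if the XML line belongs to a node we expect to differ."""
    return any(node in line for node in IGNORED_NODES)


def test_responses_match_golden_masters() -> None:
    NEW_RESPONSE_DIR.mkdir(exist_ok=True)
    for name, params in SCENARIOS.items():
        # Call the code-under-test with the same parameters used to build the golden master.
        new_xml = fetch_response_xml(params)
        new_file = NEW_RESPONSE_DIR / f"{name}.xml"
        new_file.write_text(new_xml, encoding="utf-8")

        golden_file = GOLDEN_MASTER_DIR / f"{name}.xml"
        golden_lines = golden_file.read_text(encoding="utf-8").splitlines()
        new_lines = new_xml.splitlines()

        if len(golden_lines) != len(new_lines):
            raise AssertionError(
                f"{name}: line counts differ "
                f"({len(golden_lines)} golden vs {len(new_lines)} new)"
            )

        for line_number, (old, new) in enumerate(zip(golden_lines, new_lines), start=1):
            if old != new and not (is_ignored(old) and is_ignored(new)):
                # Stop at the first non-ignored difference and report it.
                raise AssertionError(
                    f"Difference at line {line_number}:\n"
                    f"  golden ({golden_file}): {old}\n"
                    f"  new    ({new_file}):    {new}"
                )
```

When the check fails, the files left in new_responses can be compared against golden_masters with any diff tool to investigate the difference and look for patterns.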
1 comment:
Hi Eric! I'm the Test Manager at Zoosk. I've had some of the same ideas about using bug seeding to test our automation runs. After seeing a presentation at StarWest on Chaos testing by Gareth Bowles, we started working on an internal tool called ChaosLemur. This is a tool designed to break our test environments, specifically to see if our automation catches the breakages. Our logic follows the basic philosophy of Chaos Testing - why wait for something to break to see if your systems catch the error? Break it now and test your ability to respond. ChaosLemur is designed to show us where the weaknesses in our automation are, so we can build better testing and have a more reliable end product. It also helps to build confidence in our automation solution.
Gareth has encouraged me to open source this tool. Hopefully it will be publicly available later this year.