It’s true what they say: writing automated tests is waaaay more fun than manual testing. Unfortunately, fun does not always translate into value for your team.
After attempting to automate an AUT (application under test) for several years, I eventually concluded it was not the best use of my time. My test team’s resources and skills, the AUT’s design and complexity, the available tools, and the UI-heavy WinForms interface were a poor mix for automated testing. In the end, I had developed a decent framework, but it consisted of only 28 tests that never found bugs and broke every other week.
Recent problems with one of my new AUTs have motivated me to write a custom automated test framework and give the whole automated test thing another whirl.
This new AUT has about 50 reports, each with various filters. I’m seeing a trend where the devs break various reports with every release. Regression testing is as tedious as it gets (completely brainless; perfect to automate), and the devs are gearing up to release another 70 reports! …Gulp.
In this case, several factors point toward automation potential:
- The UI is web-based (easier to hook into)
- The basic test is ripe for a data-driven automation framework: crawl through 120 reports and perform nearly the same actions and verifications on each (a rough sketch follows this list).
- Most of the broken-report errors I’m targeting are objectively easy to identify: a big old nasty error displays.
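To make that data-driven idea concrete, here’s a minimal sketch of the loop I have in mind. It’s only an illustration under assumptions: the URL pattern, report IDs, error text, and the choice of Selenium are placeholders, not JART’s actual design.

```python
# Sketch only: assumes each report lives at a predictable URL and a broken
# report renders an obvious error message. All names below are hypothetical.
from selenium import webdriver

BASE_URL = "http://example.com/reports/{report_id}"  # assumed URL pattern
REPORT_IDS = range(1, 121)                           # the ~120 reports
ERROR_MARKER = "An error has occurred"               # assumed error text

driver = webdriver.Firefox()
failures = []
try:
    for report_id in REPORT_IDS:
        driver.get(BASE_URL.format(report_id=report_id))
        # Same verification on every report: did the big nasty error display?
        if ERROR_MARKER in driver.page_source:
            failures.append(report_id)
finally:
    driver.quit()

print("Broken reports:", failures or "none")
```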
I wrote the proof-of-concept framework last week and am trying to nail down some key decisions (e.g., passing in report parameters vs. programmatically determining them). My team needs me to keep testing, so I can only work on automation during my own time…which makes it slow going.
This is my kick-off post. I’ll explain more details in future posts. More importantly, I’ll tell you whether it actually adds enough value to justify the time and maintenance it will take. And I promise not to sugarcoat my answer, unlike some test automation folks, IMO.
Oh, I’m calling it JART (Jacobson’s Automated Report Tester). Apparently JART is also an acronym for "Just a Real Tragedy." We’ll see.
"More importantly, I’ll tell you if it actually adds enough value to justify the time and maintenance it will take."Just curious if you already have your 'value' defined? Are you defining it from a purely economic perspective(ROI, time saved, repeatability, etc)? Or do you have some other definition of value?
If you don't have a definition of value, what will you use to determine if the automation "adds enough value to justify the time and maintenance..."?
~k
I'm a little distressed about the "on my own time" comment. You seem to have done a good job convincing your team that automation isn't very useful!
Can you not make the case that freeing yourself from the tedious regression will result in much more test coverage?
K,
Ah yes, that is a good question. I guess I can't just go with my gut, huh? Darn.
How about this: if (hours spent writing it + hours spent maintaining it + hours spent interpreting the results) < (hours I would have spent performing brainless manual regression testing throughout 2009), then JART is justifiably valuable.
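In concrete terms, the check is trivial; the hour figures below are made-up placeholders, since I have no real estimates yet.

```python
# Break-even check for JART. All numbers are placeholder assumptions.
hours_writing = 40
hours_maintaining = 20
hours_interpreting = 10
hours_manual_regression_2009 = 120  # brainless manual passes JART would replace

jart_is_justified = (hours_writing + hours_maintaining + hours_interpreting
                     < hours_manual_regression_2009)
print("JART justifiably valuable?", jart_is_justified)
```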
Maybe you can improve upon that.
Alex,
Before I try to make that case to my team, I have to believe it myself! It looks good on paper, but contrary to popular belief, successful test automation is not a simple thing to achieve.
Fortunately, JART is kind of fun at this stage. Not as fun as caving or woodworking, but it's up there. Scary.