It's agile with a great twist. 6.5 Developers and 1.2 QA*. That's a 5.417:1 ratio of Dev to QA.
*hey, at least I can create a project plan for 20% of your time each week, right?
We follow a BDD implementation, so the BA (singular) creates a considerable portion of the Acceptance Tests, then the QAs (plural, just) spruce these up into Given|When|Then Acceptance Tests, attempting to adhere to a sensible syntax. The developers then implement that syntax, and voila, testing is 90% complete for the iteration. All before any code gets written, too. What could be better?
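For anyone who hasn't seen one, here's a minimal sketch of the shape these things take: a Given|When|Then scenario and the step functions a developer might write for it. Everything here (the Cart class, the step names, the toy rule) is invented for illustration, not lifted from our actual suite.

```python
# Hypothetical Given|When|Then scenario, implemented as plain Python steps.
# Scenario: adding an item updates the cart total.

class Cart:
    """Toy shopping-cart model used by the steps below."""
    def __init__(self):
        self.items = []

    def add_item(self, name, price):
        self.items.append((name, price))

    @property
    def total(self):
        return sum(price for _, price in self.items)

# One function per step; the scenario reads almost like the Gherkin it came from.
def given_an_empty_cart():
    return Cart()

def when_the_user_adds(cart, name, price):
    cart.add_item(name, price)

def then_the_total_should_be(cart, expected):
    assert cart.total == expected

cart = given_an_empty_cart()
when_the_user_adds(cart, "book", 12.50)
then_the_total_should_be(cart, 12.50)
```

In a real BDD toolchain the scenario text lives in a feature file and a framework binds each line to a step function, but the division of labour is the same: BA/QA write the sentences, developers write the functions underneath.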
It's already sounding like a picturesque, nirvana-like fairy-land, so I feel obliged to mention that, like every project and every implementation, this hasn't come without blood, sweat and tears in equal measure. In fact, at present, it's not coming along at all :(
So, immediate shortfalls.
We didn't have a single person responsible for the clarity and quality of the implementation. Every test was created to work for a single story, and from what I've seen, things have become over-complex. So terribly, fiendishly over-complex.
This brings me to Lesson #1 - keep it simple: an Automated Test's implementation should have a single function for every single action, no code duplication, no (or minimal) assumptions, no obscure dependencies, and, above all, it should be simple.
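To make Lesson #1 concrete, here's a hedged sketch of the "single function per action" idea: each user action gets exactly one helper, every scenario reuses that helper, and when the action changes there's exactly one place to fix. The LoginPage class and its behaviour are illustrative stand-ins, not our real implementation.

```python
# Sketch: one function per action. Every test that needs a login calls
# log_in(); no test re-implements the login steps itself.

class LoginPage:
    """Hypothetical page helper owning the 'log in' action."""
    def __init__(self):
        self.logged_in_user = None

    # The single place that knows how to log in. If the login flow
    # changes, this function changes, and every scenario is fixed at once.
    def log_in(self, username, password):
        if password:  # stand-in for real authentication
            self.logged_in_user = username
        return self.logged_in_user is not None

def test_login_succeeds():
    page = LoginPage()
    assert page.log_in("alice", "s3cret")

def test_login_records_user():
    page = LoginPage()
    page.log_in("alice", "s3cret")
    assert page.logged_in_user == "alice"

test_login_succeeds()
test_login_records_user()
```

The duplicated version of this, where every scenario pastes in its own three login steps, is exactly how our implementation got fiendishly over-complex.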
To achieve this requires a bit of rework for each new Story automated, which leads to Lesson #2 - have a Test-Implementation Champ. Someone who will be whipped should things get too complex, and who can then re-whip across the chain to make sure everyone's implementing with this in mind.
Next problem. We automated just about everything. We have automated and supported those little features which, if they were found via manual testing, would be raised and flagged as a 'won't fix'. Lesson #3 - prioritize what should be automated.
Our biggest problem, however, has been the reliance on UI tests (in conjunction with the above two problems). We have 700 System tests which run at a rate of over 200 per minute; these test just the model, and they are surprisingly robust. We also have 1300 UI Tests which test a mocked-up view, which in turn lets us exercise the model underneath. These take 6 hours to run. These break. Often. 'Are the tests failing again?' 'Yep.' 'Who broke the tests?' 'That looks hard to discern.' 'Who will fix them?' 'Someone else.' 'Who?' 'I dunno, but that pile of broken stuff definitely wasn't due to my check-in.' And so on. Lesson #4 - choose where the tests get implemented wisely. Achieve your goal at the smallest possible cost.
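As a rough illustration of Lesson #4, here's what pushing a test down to the cheapest layer can look like: when the business rule lives in the model, the test can talk to the model directly, run in microseconds, and have no UI to break. The DiscountModel class and its 10%-over-100 rule are invented for this example, not taken from our codebase.

```python
# Sketch: test the rule where it lives. A model-level test like this
# needs no browser, no mocked-up view, and no 6-hour run.

class DiscountModel:
    """Toy model: orders over 100 get a 10% discount."""
    def price_after_discount(self, subtotal):
        return subtotal * 0.9 if subtotal > 100 else subtotal

model = DiscountModel()
assert model.price_after_discount(50) == 50      # under threshold: no discount
assert model.price_after_discount(200) == 180.0  # over threshold: 10% off
```

A handful of UI tests can still confirm the view is wired to the model; the hundreds of rule variations belong down here.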
Problems such as these tend to snowball: tests no longer provide value to the developers, bugs slip past because no one's watching the build screen any more, failures hold up releases, and so on. Really, I hear everyone's complaints: 'the tests are useless'. But all of these issues can be rectified, and the project has just entered a 'fix it' phase. I'll aim to keep semi-regular posts going to share how we're overcoming these issues.
Fix it we shall.