I inherited a process that followed a very traditional model, with the main forms of testing being unit tests and scripted, manual, user acceptance testing performed towards the end of a lengthy release cycle. The burden of bug finding fell late in the development cycle. The UAT was performed by a mix of unpaid volunteers from our community and the stable team developers at our HQ. It is a useful way of getting community members involved and of communicating the quality of the software to our users, but it discovers a lot of bugs that I believe could be eliminated earlier in the development life-cycle.
As you may have guessed, the lines between roles were already somewhat blurred, with developers running UAT tests.
Initially, in-house testing resources do not take part in testing, to give the community a chance to pick the tests they want to execute. Remember, these testers are unpaid volunteers and our role is to facilitate their involvement; dictating what they should do may push them away. The HQ developers come in fairly late, mopping up the tests that are harder to execute or of a more technical nature.
During the recent implementation of a major release, I was asked to run a second iteration of the UAT. We had already delayed the release because some pretty big and very awesome features were being implemented. I tried to narrow the scripted tests down to a small subset based upon risk, which, predictably, started to take me ages and only reduced the number of tests by about a quarter. I had been keen to implement exploratory testing, but I couldn't get away from two factors:
- The documentation requirement discussed above.
- I was working with specialist developers, not experienced testers, and they had little or no experience of exploratory testing.
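The risk-based narrowing of scripted tests mentioned above can be sketched as a simple scoring exercise. This is a hypothetical illustration, not the process I actually followed; the test names, the 1–5 scales, and the `risk_score` helper are all invented for the example.

```python
# Hypothetical sketch: score each scripted test by likelihood and
# impact of failure, rank by risk, and keep only the riskiest subset.

def risk_score(likelihood, impact):
    """Simple risk score: likelihood of failure times impact of failure."""
    return likelihood * impact

# (test name, likelihood 1-5, impact 1-5) -- invented examples
scripted_tests = [
    ("login flow", 2, 5),
    ("report export", 4, 3),
    ("profile edit", 1, 2),
    ("payment retry", 5, 5),
]

# Rank tests by descending risk and keep roughly the top half.
ranked = sorted(scripted_tests, key=lambda t: risk_score(t[1], t[2]), reverse=True)
subset = ranked[: max(1, len(ranked) // 2)]
for name, likelihood, impact in subset:
    print(name, risk_score(likelihood, impact))
```

Even this toy version hints at the problem I hit: assigning defensible likelihood and impact numbers to every scripted test is itself slow, manual work.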
I decided to give session-based testing a go, as this would allow me to restrict the scope of testing based upon risk (mainly areas of the system affected by recent bug fixes) and to produce the metrics and documented tests required by the organization I work for. After stating my case, I was given the green light to go ahead. We used the Bonfire plugin for Jira to record our test logs and link issues, and I decided to limit the sessions to one hour.
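To make the bookkeeping concrete, a session record can be modelled as a charter plus a time box plus the bugs it raised. This is a minimal sketch under my own assumptions; the field names and example data are illustrative and are not Bonfire's actual data model.

```python
# Minimal model of a time-boxed test session and the metrics it yields.
from dataclasses import dataclass, field

@dataclass
class Session:
    charter: str               # what this session set out to explore
    tester: str
    minutes: int               # actual duration; the target box was 60
    bugs: list = field(default_factory=list)  # linked issue keys

# Invented example sessions
sessions = [
    Session("areas touched by recent bug fixes", "dev-a", 60, ["BUG-101", "BUG-102"]),
    Session("import/export edge cases", "dev-b", 75, ["BUG-103"]),
]

# The kind of metrics the organization asked for: session count,
# total bugs found, and how well we kept to the one-hour time box.
total_bugs = sum(len(s.bugs) for s in sessions)
over_box = [s.charter for s in sessions if s.minutes > 60]
print(len(sessions), total_bugs, over_box)
```

Keeping the record this small is the point: the charter scopes the session to a risk area, and the log still gives the organization countable, documented output.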
We ran four sessions in one week; our developers spent the rest of the time, time they would otherwise have spent executing scripted tests, fixing and retesting bugs. The results were very, very good. In those four sessions we identified roughly as many bugs as we had in the entire previous iteration of testing, while freeing up developers to fix bugs instead of working through scripted tests.
The biggest problem I faced was that the sessions were hard to time-box. Two factors contributed to this:
- Due to their other work commitments, it was difficult to get everyone running their sessions concurrently. I didn't put a stop to this, because the extra bug fixing was invaluable in minimizing the delay to the release.
- Once they had started testing, they found it hard to stop. They obviously enjoyed doing it, and I wasn't about to discourage that either.
Today I documented a lightweight test plan for the next release in our wiki. On the back of a successful, opportunistic implementation during the previous release, session-based testing features heavily in that plan, albeit much earlier in the release cycle.