In the planning phase I considered the areas of test coverage, with each tester tackling a core component of our system. Our core components are already defined in Jira, which gives us rapid, lightweight, low-overhead traceability.
Things that I considered for coverage in the next session were:
- Testers' feedback given during the previous retrospective.
- Areas of the system under test that had been fixed since last time and would therefore require retesting.
- Areas of known risk based on tacit knowledge - areas of the system where we knew there had been significant changes.
- Areas of the system under test that we knew had some gaps in the test coverage.
- Areas to leave alone ("hot spots") - areas with known bugs that were yet to be fixed.
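The coverage inputs above can be sketched as a simple prioritisation step. This is a hypothetical illustration only: the component names, flag names, and weights are invented for the example and are not taken from the actual Jira project.

```python
# Illustrative weights for the coverage inputs listed above;
# the values are assumptions, tune them to your own context.
WEIGHTS = {
    "retro_feedback": 3,  # testers flagged it in the last retrospective
    "recent_fix": 2,      # fixed since last session, needs retesting
    "big_change": 3,      # tacit knowledge of large changes
    "coverage_gap": 2,    # known gaps in test coverage
}

def plan_session(components):
    """Rank components for the next session, skipping known hot spots."""
    candidates = [c for c in components if not c.get("hot_spot")]
    return sorted(
        candidates,
        key=lambda c: sum(WEIGHTS[f] for f in c.get("flags", [])),
        reverse=True,
    )

# Hypothetical component list for illustration.
components = [
    {"name": "Checkout", "flags": ["recent_fix", "big_change"]},
    {"name": "Search", "flags": ["coverage_gap"]},
    {"name": "Billing", "flags": [], "hot_spot": True},  # leave alone
]

ranked = plan_session(components)
# Checkout outscores Search; Billing is excluded as a hot spot.
```

In practice we did this weighing informally in conversation rather than in code, but making the criteria explicit like this is a useful check that nothing on the list is being ignored.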
In the planning phase I also gave an informal brief to the testers and sought feedback on the coverage plan.
The execution phase was time-boxed to an hour. We used Jira as our tracker and the Bonfire plugin to record session logs.
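A minimal sketch of what a time-boxed session record might look like, assuming a one-hour box as described above. The class and field names are illustrative, not the Bonfire plugin's actual data model.

```python
from datetime import datetime, timedelta

SESSION_LENGTH = timedelta(hours=1)  # the one-hour time-box

class Session:
    """Hypothetical session log: a charter, a tester, timestamped notes."""

    def __init__(self, charter, tester):
        self.charter = charter
        self.tester = tester
        self.start = datetime.now()
        self.notes = []

    def log(self, note):
        # Record an observation against the session clock.
        self.notes.append((datetime.now(), note))

    def time_left(self):
        # How much of the time-box remains.
        return SESSION_LENGTH - (datetime.now() - self.start)

session = Session(charter="Explore Checkout after recent fixes",
                  tester="Alice")
session.log("Discount code field accepts 500+ characters")
```

The point of the time-box is that `time_left()` reaching zero ends the session regardless of progress; anything unexplored becomes input to the next planning phase.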
At the end of each session, in our retrospective, we discussed:
- Areas of the system under test that the testers felt needed more testing - this information fed into the next session plan.
- Areas that were OK - this also fed into the next session plan.
- Feedback on how we would do things better next time.
- Feedback on how to better use our tools, including new ideas and tips and tricks for session-based testing.
- Feedback on how session based testing was going, mainly to ascertain buy-in from the testers.