Thursday, July 12, 2012

The Method Used for Our Session-Based Testing

I didn't really go into much detail about the methods and techniques we used for session-based testing in my last post, so this is a follow-up post with the details of how I actually ran the test sessions. I used a very simple process with three phases per iteration:

  1. Planning
  2. Execution
  3. Retrospective
In the planning phase I considered the areas of test coverage, with each tester tackling a core component of our system. Our core components are already defined in Jira, which gives us rapid, lightweight, low-overhead traceability.

Things that I considered for coverage in the next session were:
  • Testers' feedback given during the previous retrospective.
  • Areas of the system under test that had been fixed since last time and would therefore require some retesting.
  • Areas of known risk based on tacit knowledge - parts of the system where we knew there had been big changes.
  • Areas of the system under test that we knew had gaps in test coverage.
  • Areas to leave alone - hot spots with known bugs that were yet to be fixed.
In the planning phase I also gave an informal brief to the testers and sought feedback on the coverage plan.
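For illustration, here is a minimal sketch of how one of these per-tester charters could be captured as data. The field names and values are hypothetical, not the exact format we record in Jira:

    # A minimal, illustrative session charter; fields and values are hypothetical,
    # not the exact format we keep in Jira.
    charter = {
        "tester": "Tester A",
        "component": "Component X",   # a core component as defined in Jira
        "focus": "Areas fixed since the last session; known gaps in coverage",
        "avoid": ["Hot spot Y - known bugs awaiting a fix"],
        "timebox_minutes": 60,
    }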

The execution phase was time-boxed to an hour. We used Jira as our tracker and the Bonfire plugin to record session logs.

At the end of each session, in our retrospective, we discussed:
  • Areas of the system under test that the testers felt needed more testing - this information was fed into the next session plan (see the sketch after this list).
  • Areas that were OK - also fed into the next session plan.
  • Feedback on how we could do things better next time.
  • Feedback on how to make better use of our tools, including new ideas, tips and tricks for session-based testing.
  • Feedback on how session-based testing was going, mainly to gauge buy-in from the testers.
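As the first two points suggest, the retrospective output is essentially the input to the next planning phase. A rough sketch of that loop in Python; the function and argument names are my own illustration, since in practice this was done informally rather than in code:

    def plan_next_coverage(needs_more_testing, ok_for_now, recently_fixed, hot_spots):
        """Combine retrospective feedback with the planning considerations above.

        Each argument is a list of component/area names. Hypothetical helper -
        we track this informally in Jira rather than in code.
        """
        candidates = list(needs_more_testing) + list(recently_fixed)
        skip = set(ok_for_now) | set(hot_spots)   # areas to leave alone for now
        coverage, seen = [], set()
        for area in candidates:
            if area not in skip and area not in seen:
                coverage.append(area)
                seen.add(area)
        return coverage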

Monday, July 9, 2012

Implementation of Session-Based Testing

I've already mentioned communication. We can't use verbal communication for everything, as there are people working on our software in many places throughout the world, so we have a big documentation requirement. Although open source development starts out as being incredibly agile, a larger open source project, where many more people are involved in more locales, becomes inherently less agile than an in-house project where everyone you need to communicate with is at the other end of the office.

I inherited a process that followed a very traditional model, with the main forms of testing being unit tests and scripted, manual, user acceptance testing performed towards the end of a lengthy release cycle. The burden of bug finding therefore falls late in the development cycle. The UAT was performed by a mix of unpaid volunteers from our community and the stable team developers at our HQ. It is a useful way of getting community members involved and of communicating the quality of the software to our users, but it discovers a lot of bugs that I believe could be eliminated earlier in the development life-cycle.

As you may have guessed, the lines between roles are already somewhat blurred, with developers running UAT tests.

Initially, in-house testing resources do not take part in testing, to give the community a chance to pick the tests they want to execute. Remember, these testers are unpaid volunteers and our role is to facilitate their involvement; dictating what they should do may push them away. The HQ developers come in pretty late, mopping up the tests that are more difficult to execute or of a more technical nature.

During the recent implementation of a major release, I was asked to run a second iteration of the UAT. We had already delayed the release as there were some pretty big and very awesome features being implemented. I tried to narrow the scripted tests down to a small subset based upon risk, which, predictably, started to take me ages and only reduced the number of tests by about a quarter. I had been keen to introduce exploratory testing, but I couldn't get away from two factors:

  1. The documentation requirement discussed above.
  2. I was working with specialist developers, not experienced testers, who had little or no experience of exploratory testing.
I decided to give session-based testing a go, as this would allow me to restrict the scope of testing based upon risk - mainly areas of the system affected by recent bug fixes - and produce some metrics and documented tests, as required by the organization that I work for. After stating my case I was given the green light to go ahead. We used the Bonfire plugin for Jira to record our test logs and link issues, and I decided to limit the sessions to one hour.
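For the metrics side, here is a sketch of the sort of query that could be run against Jira's standard REST search API to count the bugs raised during a session week. The server URL, project key, label convention and credentials are placeholders for illustration only; in practice Bonfire recorded the session logs for us.

    import requests

    JIRA = "https://jira.example.org"   # hypothetical server and project below
    JQL = ('project = PROJ AND issuetype = Bug AND labels = "sbt-session" '
           'AND created >= startOfWeek() AND created <= endOfWeek()')

    resp = requests.get(
        JIRA + "/rest/api/2/search",
        params={"jql": JQL, "fields": "key,summary", "maxResults": 100},
        auth=("user", "password"),      # placeholder credentials
    )
    resp.raise_for_status()
    issues = resp.json()["issues"]
    print("Bugs raised during the session week: %d" % len(issues))
    for issue in issues:
        print(issue["key"], "-", issue["fields"]["summary"])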

We ran four sessions in one week; our developers spent the rest of the time, time they would otherwise have spent executing scripted tests, fixing and retesting bugs. The results were very good: in those four sessions we identified roughly as many bugs as we had in the previous iteration of testing, and, as I've just mentioned, we freed up developers to help fix bugs instead of working through scripted tests.

The biggest problem I faced was that it was hard to time-box the sessions. Two factors contributed to this:
  1. Due to their other work commitments, it was difficult to get everyone running their sessions concurrently. I didn't put a stop to this because the extra bug fixing did a great deal to minimize the delay to the release.
  2. Once they had started testing, the testers found it hard to stop. They obviously enjoyed doing it, and I wasn't about to discourage that either.
Today I documented a lightweight test plan for the next release in our wiki. On the back of a successful, opportunistic implementation during the previous release, session-based testing features heavily in that test plan, albeit much earlier in the release cycle.

Tuesday, July 3, 2012

Detective work

Sniffing out testing leads is something that good testers become very good at. Communicating with developers and users, or liaising between the two, is straightforward in a standard office environment. Put thousands of kilometres and several time zones between them and it becomes a lot more difficult to keep the lines of communication open. Being proactive is a must.

There are three areas in particular that I like to keep an eye on to spot potential bugs that may require some detective work - although I wouldn't ever limit myself to just these. We have an issue tracker that is open for the world to use, chat rooms, and user forums. People with different skills and language abilities will be using the tracker, so you may have to investigate what they have raised further, champion bugs that you feel aren't being investigated thoroughly, and add more detail if required. Assist and facilitate as much as possible.

Chat clients are also useful for keeping an eye on what the - global - development team is discussing. I find this most useful for spotting areas that require more testing than others, i.e. areas posing a potential risk. If people are chatting about a particular area of the system in a negative way, it may be a part of the system that I should be having a nosey at and targeting with an exploratory test session.

If your open source organization hosts user forums, subscribe to them and play an active role. I'm normally pretty choosy about what I get involved in: we have a community manager to deal with general enquiries, and the developers help out with technical issues from other developers. Sometimes, though, the problems that users report are bugs, and I use my experience and bug-sniffing skills to locate such posts.


During a recent major release of our software, a community member posted about a problem that they had noticed with our QA and Demo sites, without raising a bug. I was looking out for lines of enquiry that might yield bugs, and this particular post caught my attention. It was dismissed by several team and community members as just being bad data from an upgrade, something that had apparently happened in the past. Of course, straight away, that got me thinking about whether there was a problem with the data integrity of our web application during upgrades. Sure enough, after an epic testing session involving me and one of our senior developers, and some further interrogation of the original reporter, we discovered that under certain circumstances the upgrade process was duplicating the unique IDs of some records. This in turn was causing an unhandled exception in the web application when accessing some, but not all, of the records. The bug was duly fixed and the community member was thanked for the spot.


When faced with strained lines of communication: help out where required, read between the lines, ask the sources of your leads - your snitches - lots of questions, be on the constant lookout for new lines of enquiry, and give credit to your snitches when their information points you in the direction of a win. They may be more likely to keep feeding you information in the future.

Friday, June 29, 2012

Challenges of testing an open source application

How do you test something when most of your human resources are unpaid volunteers - volunteer developers and testers? What do you do if you want to use agile development practices but your team works in different time zones all over the world? We have to make it easy for anyone contributing their time for free to do so. So how do we do agile development and testing with external volunteers spread across the globe?

These are the sort of questions that you could face if you work as a tester for the HQ of a successful open source software organization.

We can't tell a volunteer open source developer how to write their software; if we are too strict with someone, we might discourage them from contributing. They may have developed software for us within another organization - a partner or user organization - that has its own development processes. We can't insist that they are agile if they work for an organization that doesn't use an agile development process. In such cases we can't expect test- or behaviour-driven development, and our "early" testing often can't be early enough.


My aim is to share with the world the experiences and techniques I have used, along with any other thoughts and software testing resources that I find useful (or maybe even not useful). I hope that anyone working in testing for an open source organization, or anywhere else for that matter, will find the information in this blog useful.