Thursday, 5 August 2010

Exploratory vs Scripted

About six months ago, the company where I consult decided to make the switch to exploratory testing. It has been an exciting journey, and I feel very fortunate to have been there along the way - learning plenty, and hopefully contributing equally.

Recently, the discussions have circled around whether the new way of working is better than the previous one. A natural reaction. Management, for instance, as well as others who happen to read our test reports, have started to wonder about the change in the information we provide about our test results.

This review process has spawned a few highlights that I figured I'd share with you.

Streamlining the Daily Work
Is streamlining still a buzzword? Perhaps I should just call it "cutting the crap". Anyhow, we seem to agree that the actual testing activities haven't changed all that much - at least if we compare with the best and brightest parts of the scripted methodology. Allow me to explain.

With our earlier way of working - I refer to it as "scripted testing" just to give you a feel for it - the work during a sprint followed this rough chronology:
  1. discuss new feature or component to be developed with project manager, tech lead and developers
  2. ponder possible test cases and risk areas on a fairly high level
  3. receive a non-final version of the software from the developers and
    3.1. install it in a test environment while looking for flaws in the installation procedure, associated database scripts, etc
    3.2. start the software, make sure it can communicate with other parts of the system
    3.3. use the software, see how it works in practice, take notes of possible inputs and related outputs
    3.4. distill the knowledge acquired in 3.3 into scripted test cases with clear action-result steps (a sketch of such a test case follows after this list)
  4. iterate all of 3 until we have reached a version that is "ready for test" (often around when the sprint is about to end)
  5. compile a test suite using the newly created test cases from 3.4 together with an assorted selection of older test cases that cover other, possibly affected, areas for regression testing purposes
  6. mark test cases as passed or failed and put the results into the test report
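
To give a feel for what step 3.4 produces, here is a rough sketch of what one such action-result test case might look like if written down as data. The case, its steps and the TC-042 identifier are all invented for illustration, not taken from any real test suite.

  # A sketch of a scripted test case with explicit action-result steps.
  # Everything below is made up purely to illustrate the format.
  test_case = {
      "id": "TC-042",
      "title": "Export a report as CSV",
      "steps": [
          ("Open an existing report", "The report is displayed"),
          ("Choose Export and pick CSV", "A file dialog appears"),
          ("Save the file and open it", "All visible columns are present"),
      ],
  }

  for number, (action, expected) in enumerate(test_case["steps"], start=1):
      print(f"{number}. ACTION:   {action}")
      print(f"   EXPECTED: {expected}")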

I guess this seems somewhat familiar to most, with a few modifications here and there. Where we are now is something more along these lines:
  1. discuss new feature or component to be developed with project manager, tech lead and developers
  2. draw an overview of the feature or component with all paths to other parts of the system and all connections to any actors, producers, consumers, etc that have a part in the relevant use-case(s)
  3. use the overview to identify risk areas, oracles, testability deficits, dependencies, etc together with developers and architects
  4. compile all new knowledge into a playbook for the feature or component, formulate charters to focus the test effort
  5. receive a non-final version of the software from the developers and
    5.1. during one or more recon sessions, explore the installability and operability of the software, find out how it works in practice
    5.2. during one or more analysis sessions, following the charters defined in 4, further explore the software, learning as much as possible about it - paying extra close attention to shaky/complex/unstable/risky areas that will need to be tested more carefully; also, look for possibilities to automate parts of the testing, e.g. to provide test data or parse log output (see the sketch after this list)
    5.3. during one or more coverage sessions, following the charters defined in 4, use all of our knowledge and skill to cover as many of the software's possible uses as possible and find as many bugs as we can
  6. iterate all of 5 until we have reached a version that we (testers, project managers, other stakeholders) are satisfied with
  7. compile the session reports into a complete test report for the work done during the sprint
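
As an example of the kind of lightweight automation hinted at in step 5.2, here is a rough sketch of a log-scanning helper. The severity keywords and the file name "application.log" are assumptions made for the sake of the example, not taken from our actual system.

  # A sketch of a log-scanning aid for an analysis session. Assumes a
  # plain-text log where each line carries a severity keyword such as
  # ERROR or WARN - adjust the pattern to whatever your software writes.
  import re
  from collections import Counter

  SEVERITY = re.compile(r"\b(FATAL|ERROR|WARN)\b")

  def summarize_log(path):
      """Count FATAL/ERROR/WARN lines and keep the first example of each."""
      counts = Counter()
      first_seen = {}
      with open(path, encoding="utf-8") as log:
          for line in log:
              match = SEVERITY.search(line)
              if match:
                  severity = match.group(1)
                  counts[severity] += 1
                  first_seen.setdefault(severity, line.strip())
      return counts, first_seen

  if __name__ == "__main__":
      counts, first_seen = summarize_log("application.log")  # made-up file name
      for severity, count in counts.most_common():
          print(f"{severity}: {count} lines, e.g. {first_seen[severity]}")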

Let's compare point 3 of our scripted methodology with point 5 of our current, exploratory, approach. These are the steps where we really put our little brain cells to use and channel all of our test expertise into finding bugs and ironing out the kinks in the software. And it is this part of our work that I claim is not all that different now. It has changed, however, and in a most crucial way. This is how:

With a scripted approach, the testing we do is tainted by the fact that we ultimately need to compose scripted test cases with easily re-testable action-result instructions. Naturally inquisitive as we may be, eager to explore and track down elusive bugs, we run the risk of getting trapped in that mindset and restricting our testing too much.

If we enter the testing with an exploratory approach, the work is directed towards finding bugs rather than towards producing test cases. We then adapt the reporting to what we have done, instead of changing what we do to fit the reporting.

No Nonsense Reporting
We have struggled a bit with trying to get our test reports to reflect our actual work, as I have written about in the past. The old test report format, which was based on our scripted labor, had an obvious appeal: it was easy to understand. We claimed to have executed 114 test cases, out of which 4 had failed. A good percentage, one might argue. I want to point out that, from a personal perspective, I find such measurements tremendously useless. Not only is there no record of what the test cases cover, there is also no indication of how they were executed. Test case instructions could have been misunderstood by the tester, or have been incomplete to begin with. Still, the reports could be taken in at a glance, and that is one of the more important aspects for our readers, the stakeholders.

What we want to keep is the simplicity of the test report. We want the reader to be able to understand, within seconds, what the results of the tests are. At the end of a sprint, we testers usually have the best understanding of the state or quality of the software. We can tell you how complex the changes have been, how many bugs we have found, where the risks lie ... and it is that understanding that we need to distribute through the test report. Personally, I feel better doing so in other terms than in a nonsensical number of test cases.
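
To make that concrete, here is a rough sketch of how a handful of session notes could be rolled up into a summary a stakeholder can skim in seconds. The fields used below (area, charter, bugs, risk note) are an assumed structure, not our actual session report template, and the example sessions are invented.

  # A sketch of rolling session notes up into a short, readable summary.
  # The Session fields are an assumed structure, not a real template.
  from dataclasses import dataclass, field

  @dataclass
  class Session:
      area: str                                  # feature or component covered
      charter: str                               # what the session set out to explore
      bugs: list = field(default_factory=list)   # one-line bug descriptions
      risk_note: str = ""                        # tester's impression of remaining risk

  def summarize(sessions):
      lines = [f"Sessions run: {len(sessions)}",
               f"Bugs found:   {sum(len(s.bugs) for s in sessions)}"]
      for s in sessions:
          lines.append(f"- {s.area}: {s.charter}")
          lines.extend(f"    bug:  {bug}" for bug in s.bugs)
          if s.risk_note:
              lines.append(f"    risk: {s.risk_note}")
      return "\n".join(lines)

  if __name__ == "__main__":
      print(summarize([
          Session("Installer", "Explore install and upgrade paths",
                  ["upgrade fails when the database is read-only"],
                  "silent failure modes still unexplored"),
          Session("Export", "Cover the supported export formats",
                  [], "low - behaviour matched the specification"),
      ]))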

The Loose End
Making a change, as we have, brings up a lot of questions - particularly from those not directly involved, but who might still be paying for it in the end. Changes cost, but we make them because we hope to gain something more in return. We have been asked things like "How is your session-based testing better than what you did before?". That is hard to measure. Do we compare the number of bugs found? The number of incidents? The perceived well-being of the testers? The amount of time spent testing instead of managing test case instructions?

Pending a more thorough investigation by KPI gurus, I'm inclined to say that the last couple of things listed above are the more important ones. A happy tester who can spend the better part of his or her time testing will be more familiar with the software, have a better understanding of how the software can be - and is - used, and will find more bugs.

The Upside
There have been a few other positive side effects of this transition. For instance, we have started tracking our time in more detail. It now takes us seconds to figure out how much of our time during the sprint has been spent testing, setting up environments, or reporting bugs. We could have done that without adopting session-based testing, but it would have taken a much greater effort.
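
As a rough sketch of the kind of tally this enables - assuming each session sheet records minutes per activity - something like the following is enough. The activity names and numbers are made up for illustration.

  # Sum minutes per activity across all session sheets in a sprint.
  from collections import Counter

  def tally(session_sheets):
      totals = Counter()
      for sheet in session_sheets:
          totals.update(sheet)   # adds the minutes recorded per activity
      return totals

  if __name__ == "__main__":
      sheets = [
          {"testing": 90, "environment setup": 20, "bug reporting": 10},
          {"testing": 75, "environment setup": 35, "bug reporting": 15},
      ]
      totals = tally(sheets)
      sprint_total = sum(totals.values())
      for activity, minutes in totals.most_common():
          print(f"{activity}: {minutes} min ({100 * minutes / sprint_total:.0f}%)")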

The "playboards", our whiteboard-based playbook embryo, allows us to communicate with developers, architects and testers with greater ease because we have something to talk about. We can physically stand around a common visualization and point, talk, draw and erase.

Also, we spend more time doing what we know and love - testing.
