Friday, 23 April 2010

Visual Test Planning

I have a habit of drawing crude sketches of any system, feature, function, component or use case that falls to me to test. These sketches help visualize the flow of information in the system, and may reveal components that affect, or are affected by, the actors in the use case. Primarily, I use the sketch as a "map" of the system/object/area under test, after having shown it to the responsible developer and had all the question marks on it explained to me.

Now, we try to get a head start on testing by involving ourselves in the development process as early as possible: we invite ourselves to meetings and eavesdrop on developers and architects when they discuss requirements with each other or with the project manager. During an early presentation/discussion with an architect and the responsible developers, complete with clarifications on a whiteboard, it quickly became apparent that their mental image of the new feature came pretty close to what my crude sketch would have looked like right before I dove into testing. It seemed natural, then, to combine the two.

This map - as a concept, not in its original incarnation - has evolved in our team over a couple of sprints, and is now a recurring presence on the team whiteboard. For lack of a better term, I'll call this map the "system overview", where "system" means the software we are interested in testing right now, be it an entire application, a new feature or something else.

The system overview then becomes the basis of what James Bach would call a "coverage heuristic", and of the activity where we - the testers - start jotting down test targets. We identify all points in the system where information is entered, stored or manipulated. We track down all oracles - interfaces where we can access said information - in logs, in the database, through a web GUI, and so on. We flag all communication channels and ponder how the system will, and should, behave if we "cut the cord" (robustness testing).
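To make this concrete, here is a minimal sketch of such an overview captured as data - my own illustration, not the team's actual tooling, and all the names are hypothetical. Each target records a point where information is entered, stored or manipulated, the oracles that expose it, and the channels we could cut:

```python
from dataclasses import dataclass, field

@dataclass
class TestTarget:
    """One point on the system overview worth testing."""
    name: str                                           # where information is entered/stored/manipulated
    oracles: list[str] = field(default_factory=list)    # where we can observe the information: logs, DB, GUI, ...
    channels: list[str] = field(default_factory=list)   # communication links to sever in robustness tests

overview = [
    TestTarget("order entry form",
               oracles=["application log", "orders table", "web GUI"],
               channels=["browser -> app server"]),
    TestTarget("payment gateway call",
               oracles=["gateway log", "transactions table"],
               channels=["app server -> gateway"]),
]
```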

The next step is to produce charters that cover the points described above. Depending on the complexity and importance of a particular point, it may require several charters before we consider it sufficiently covered.


Every pink note above represents a charter. The orange notes are bugs that need to be investigated (black dot) or fixed (blue dot).
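Continuing the sketch above - again with invented fields, not a prescribed format - each pink note could be captured as a charter tied to one point on the map, with complex or important points simply getting more charters:

```python
from dataclasses import dataclass

@dataclass
class Charter:
    """One pink note: a mission for an exploratory test session."""
    mission: str       # what to explore, looking for what kind of problems
    target: str        # the point on the system overview it covers
    priority: int = 1  # importance/complexity drives how many charters a point gets

charters = [
    Charter("Explore order entry with malformed input, looking for validation gaps",
            target="order entry form", priority=2),
    Charter("Explore gateway timeouts with the connection cut, looking for lost orders",
            target="payment gateway call", priority=3),
]
```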

We then consider the number, and types, of test sessions needed to explore the charters in a satisfying way. If the charter is new and unknown to us, it will probably require at least one recon session just to familiarize ourselves with the code. If the charter is complex enough, it will probably require several analysis sessions to cover it. We also consider the likelihood of finding bugs, and account for the time needed to revisit the charter.
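As a back-of-the-envelope illustration of that reasoning - the weights here are invented, not a formula we actually use - the estimate might look something like this:

```python
def estimate_sessions(is_new: bool, complexity: int, bug_likelihood: float) -> int:
    """Rough session count for one charter (invented weights)."""
    sessions = 1                        # every charter gets at least one session
    if is_new:
        sessions += 1                   # recon session to get familiar with the area
    sessions += max(0, complexity - 1)  # extra analysis sessions for complex charters
    if bug_likelihood > 0.5:
        sessions += 1                   # allow time to revisit after bug fixes
    return sessions

print(estimate_sessions(is_new=True, complexity=3, bug_likelihood=0.7))  # -> 5
```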

As with all things, practice makes perfect - after a few sprints, the ability to reliably predict the number of sessions needed will grow. It is of course important to follow up on the estimates made during sprint planning, both during the sprint - to be able to warn or re-plan if the estimates don't hold - and after it - to improve the estimating skills for the next sprint.
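That follow-up can be as simple as logging estimated versus actual session counts per charter - again just a sketch, with hypothetical data:

```python
def estimate_drift(log: dict[str, tuple[int, int]]) -> None:
    """Flag charters whose actual session count exceeded the plan."""
    for charter, (estimated, actual) in log.items():
        if actual > estimated:
            print(f"WARN {charter}: used {actual} sessions, planned {estimated}")

estimate_drift({
    "order entry form": (3, 5),      # under-estimated: warn and re-plan
    "payment gateway call": (4, 3),  # within plan
})
```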
