Tuesday, 21 December 2010

New Assignment - A Financial Adventure

Sometimes, some things do fall into place. I'm a week into my new assignment, trying to learn as much as I can about stock exchanges, financial instruments and trading strategies - most of which is very new to me. More importantly, however, I have been hired to implement session-based test management (SBTM) at the system test level and train the service organisation in exploratory testing (ET). Fascinating stuff, and I'm thrilled to be here.

The customer develops and sells a stock market trading client, packed with plugins, features and market connections. They have come a long way with automated regression tests, and are beginning to realize the benefits of complementing that with manual, exploratory testing. Initially, I will be doing just that in a system test sprint early next year, with the help of a team from the support department.

The support staff has been involved in testing in such a manner before, but under much looser guidance. With my help, the goal is to make the test period more productive by setting some structure, giving some coaching and establishing some form of informative reporting.

Right now, I'm in the midst of
  • organizing tool support
  • setting up a test lab (configuration of a number of workstations, places to sit, etc)
  • putting together a test plan, writing a set of charters
  • gathering input from the dev teams, assessing risks and need for regression testing
  • preparing users and test environments
  • establishing a process for testing, debriefing and reporting
  • working on a curriculum for testing day 1, where Michael Albrecht and I will be instructing and coaching the team in SBTM
  • worrying about test environments, users, external test systems, test data, ...
Fun? Hell yes.

Thursday, 14 October 2010

Metrics and Session Based Test Management

First off, props to tester superstar and my colleague Ann Flismark, who just gave a talk at SAST on the implementation of session-based test management together with James Bach and Michael Albrecht of AddQ. If you happened to listen to them and found it interesting, you can read more about the everyday practicalities below (the tool we use, for example).


Current focus: metrics. There's a lot of talk in the test organization about KPIs and comparisons. Having recently moved from script-based to exploratory testing, we face the problem of carrying our old metrics over to our current way of working.

By "old metrics" I mean things like
  • number of testcases per requirement (planned/passed/failed)
  • number of automated test cases vs number of manual test cases
and so on.

Are these metrics interesting? There is no easy answer. On one hand, counting the number of executed test cases is meaningless since a test case is not really a measure of anything. On the other hand, these numbers are deeply rooted in all levels of the organization and should be treated with a certain amount of respect.

Also, it is not uncommon for my client to be involved in projects with other companies and external stakeholders. They often approach the test organization with questions like "how many test cases have you planned for this feature?" or "what is the pass rate of tests for feature so-and-so?". One way would be to slap everyone around with the hard truth that we no longer count test cases, since we have none. That, however, also means that we need to educate everyone in our way of working (which, of course, is the long-term solution). Another, more immediate, approach would be to provide other, comparable, metrics. Is this something we can do? Yes and no.

The fundamental requirement on us testers has been worded as "you need to provide metrics". Having pondered this request long and hard, as well as googling for "metrics in exploratory testing", "session based test management metrics", etc (and not finding much, I might add), we have arrived at a couple of conclusions.

Primo, we collect metrics that are valuable to us and that we can
1. use to better ourselves and increase our efficiency
2. show our stakeholders once they are up-to-speed on what session-based test management is all about
These include things such as time spent in session, time spent setting up test environments (and other tasks), and how these figures evolve over time.

Secondo, we collect and compile metrics that can be translated to and compared with traditional figures. These include, for example, requirement coverage, test coverage and test session complexity.

The thought process behind producing and displaying metrics has led to a somewhat more rigid and refined test management process - something we constantly strive for.

The Process - Improved
Each sprint starts with a day of planning where we decide to gnaw our way through a number of stories. Each story exists as a high-level requirement written by a project manager and is labeled with a number.

During the test planning, I, as a tester, go through the stories and for each
1. draw a sketch of the use cases, components involved, how it ties to the rest of the system, etc, on a whiteboard
2. get the developer (and, possibly, project manager) to detail the sketch with me, point out things I may have missed, and give input on risk areas and what to focus on while testing
3. cover the sketch with post-its, each holding a charter of 1-2 sentences, e.g. "test the write-to-file throttling functionality" or "regression: test the auditing of this-and-this data to backend"
4. estimate the number of sessions I will need to cover each of the charters

I then update the relevant playbook accordingly, and bring the charters into my nifty tool - a simple text file will hold my test plan:
story;charter;no of planned sessions
9;test the write-to-file throttling functionality;1
and so forth - one line per planned charter.

When I'm ready to do a session, I visit the tool and am instantly presented with the current test coverage. I pick a charter that is planned but untested (or not tested enough), and get to work (simply clicking it will give me a session report template with the basics already filled out). As I check my reports into our version control system, the test status is updated automatically.
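
For the curious, here is a minimal sketch of how such a plan file could be matched against checked-in session reports to produce that status. The file locations and the convention that each report starts with a "CHARTER:" line are assumptions made for the example; the real tool is home-grown and not shown here.

# plan_status.py - hedged sketch: derive coverage from a semicolon-separated
# test plan ("story;charter;no of planned sessions") and a directory of
# session reports. Paths and the CHARTER: header are assumptions.
import csv
from collections import Counter
from pathlib import Path

PLAN_FILE = Path("testplan.txt")       # hypothetical location
REPORT_DIR = Path("session_reports")   # hypothetical location

def load_plan(path):
    """Return (story, charter, planned sessions) tuples, skipping the header line."""
    with path.open(newline="", encoding="utf-8") as f:
        rows = list(csv.reader(f, delimiter=";"))
    return [(r[0], r[1], int(r[2])) for r in rows[1:] if len(r) >= 3]

def count_sessions(report_dir):
    """Count session reports per charter, assuming each report begins with 'CHARTER: <text>'."""
    done = Counter()
    for report in report_dir.glob("*.txt"):
        with report.open(encoding="utf-8") as f:
            first = f.readline().strip()
        if first.upper().startswith("CHARTER:"):
            done[first.split(":", 1)[1].strip()] += 1
    return done

if __name__ == "__main__":
    done = count_sessions(REPORT_DIR)
    for story, charter, planned in load_plan(PLAN_FILE):
        print(f"story {story}: {done[charter]}/{planned} session(s) - {charter}")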


The tool then shows me
  • what stories we cover in the sprint
  • what stories I have addressed by planning tests (writing charters) for them
  • what charters I have planned for each story
  • how many sessions I have planned for each charter
  • how many sessions I have planned for each story
  • what charters I have covered with sessions
  • test work left for this sprint
And, of course, the good ol' stuff like
  • how much time I have spent in session (and on various other tasks of my choice)
  • how that time has evolved over time
  • how many sessions I have spent testing a certain component or functional area
And another thing, which is something of an experiment at this stage ... session complexity. The thinking behind it went something like "the session report contains the steps I took during the session, such as
  • went through use-case UC12 with user account F18, verified audited data in table T_11K
... couldn't each such step be translated to a scripted test case? At least the headline of a scripted test case, so we can count them ...".

So now we count them. Is that good? I'm not sure. But it's comparable. If someone asks me "how many test cases have you run for feature so-and-so?" I could say "17" if I don't feel like giving the whole here's-how-exploratory-testing-works-lecture. If that number is meaningful to them, why not? I believe in taking small steps, and making sure everyone understands why something is the way it is.
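
If you want to play with the counting, here is a rough sketch. It assumes session reports are plain text files where every executed step is a bullet line starting with "*" and the covered feature is named on a "FEATURE:" line - both conventions are invented for the example, not how our reports actually look.

# step_count.py - hedged sketch: count report "steps" per feature, so that
# "how many test cases have you run for feature X?" gets a comparable number.
# The bullet marker and the FEATURE: header are assumed conventions.
from collections import Counter
from pathlib import Path

REPORT_DIR = Path("session_reports")   # hypothetical location

def steps_per_feature(report_dir):
    counts = Counter()
    for report in report_dir.glob("*.txt"):
        feature = "unknown"
        for line in report.read_text(encoding="utf-8").splitlines():
            line = line.strip()
            if line.upper().startswith("FEATURE:"):
                feature = line.split(":", 1)[1].strip()
            elif line.startswith("*"):     # one executed step
                counts[feature] += 1
    return counts

if __name__ == "__main__":
    for feature, n in sorted(steps_per_feature(REPORT_DIR).items()):
        print(f"{feature}: {n} step(s), i.e. roughly {n} 'test cases'")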

So what about failed test cases? Well, we traditionally counted passed/failed test cases based on the test run at the end of our sprint. Every failed test case would result in a bug report. If that bug was serious enough, it would get fixed, we would run the test case again and set it to passed. If it wasn't a showstopper, the bug report would stay open after the end of the sprint. Short-cutting that whole process, we could translate "failed test cases" into "open bug reports after the end of the sprint".

As always, this is in an experimental stage. We hope to be on the right path. Do you spot anything missing? Are we ignoring important metrics or measuring wrong? How do you do it?

 Just because I like visualizing, here's a bird's eye view of my SBTM tool in its entirety:


... with links to previous sprints on top, followed by a summary of all the time entries I have written (setup time, test time, etc). We talked about the big red-and-green graph earlier, and the little one just below it shows requirement coverage (what stories we have planned for, and whether they are addressed by planned test charters). The blue graphs show trends for certain time entries (setup time and test time being the most interesting), how the time spent is distributed among tasks and by day over the sprint. At the bottom are links to all session reports, sortable by date, covered component or area.

Wednesday, 8 September 2010

On Improbability

This summer I have read a magnificent piece of literature entitled The Black Swan by empirical skepticist (or was it skeptical empiricist?) Nassim Nicholas Taleb. Its subtitle is "The Impact of the Highly Improbable", and this is exactly what it deals with.

I won't post an exhaustive summary or review of the book, as it has already been done quite well by others, but perhaps a quick introduction is in order for you to follow the rest of this post.

A Black Swan, in this context, is a highly improbable event. Not impossible per se, but an event for which we are completely unprepared because it lies beyond the border of our imagination. 9/11 is a recurring example, or World War 1 - or the discovery of black swans back when man knew only of white ones, for that matter.

The author is a former trader, and so a lot of the reasoning and examples come from the world of finance. The theories are however applicable to most situations - testing, for instance.

Taleb's ideas boil down to a list of concrete advice for minimizing the impact of negative black swan events; making the swans grayer. The gist of it is to make yourself more aware of - or at least less intimidated by - the "unknown unknowns", the things that you don't know that you don't know. Yet. And we have all been there, yes? A sprint that doesn't quite go according to plan because we didn't consider every last dependency within the system, we found a showstopper bug just a little too late, two developers were home sick for three days, or priorities were changed mid-sprint for this or that reason. But we continue to plan, and we continue to fail.

In my office, I've seen a trend in moving away from detailed time estimates. We used to sit down in the beginning of every sprint, voting for hours on each task and trying to match the available hours. Now, we have an hour-long meeting every week where we go through the backlog and vote for story-points on every new story (and revise our guesses from last week). After a few sprints, we are starting to get an idea of how many points we can churn through in three weeks.
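
To make the "how many points in three weeks" part concrete, a rolling average of the kind we lean on could be computed like this - the numbers are invented for illustration.

# velocity.py - toy sketch: average the story points completed in recent
# sprints to get a rough idea of capacity. All numbers are made up.
completed_points = [21, 18, 25, 23]   # points finished in the last four sprints

def rolling_velocity(history, window=3):
    recent = history[-window:]
    return sum(recent) / len(recent)

if __name__ == "__main__":
    print(f"Expected capacity next sprint: ~{rolling_velocity(completed_points):.0f} points")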

Our morning meetings have moved from crossing off numbers on post-it notes to sharing our "gut feelings" about the tasks at hand. Neighboring teams have implemented "fist of five" or thumbs up/down voting for their sprint tasks to give the scrum master an idea of how the work is progressing.

Considering the inaccuracies of time estimates, I strongly advocate an approach where less time is spent on guesswork. We are never going to be 100% correct in our time estimates, so let's not put too much effort into them - that way, it hurts less when we're wrong.

How do you do plans and time estimates, and what happens when the unexpected occurs?

Thursday, 5 August 2010

Exploratory vs Scripted

About six months ago, the company where I consult decided to make the switch to exploratory testing. It has been an exciting journey, and I feel very fortunate to have been there along the way - learning plenty, and hopefully contributing equally.

Recently, the discussions have circled around whether the new way of working is better than the old one. A natural reaction. Management, for instance, as well as others who happen to read our test reports, have started to wonder about the change in the information we provide about our test results.

This review process has spawned a few highlights that I figured I'd share with you.

Streamlining the Daily Work
Is streamlining still a buzzword? Perhaps I should just call it "cutting the crap". Anyhow, we seem to agree that the actual testing activities haven't changed all that much - at least if we compare with the best and brightest parts of the scripted methodology. Allow me to explain.

With our earlier way of working - I refer to it as "scripted testing" just to give you a feel for it - the work during a sprint followed this rough chronology:
  1. discuss new feature or component to be developed with project manager, tech lead and developers
  2. ponder possible test cases and risk areas on a fairly high level
  3. receive a non-final version of the software from the developers and
    3.1 install it in a test environment while looking for flaws in the installation procedure, associated database scripts, etc
    3.2 start the software, make sure it can communicate with other parts of the system
    3.3 use the software, see how it works in practice, take notes of possible inputs and related outputs
    3.4 distill the knowledge acquired in 3.3 into scripted test cases with clear action-result steps
  4. iterate all of 3 until we have reached a version that is "ready for test" (often around when the sprint is about to end)
  5. compile a test suite using the newly created test cases from 3.4 together with an assorted selection of older test cases that cover other, possibly affected, areas for regression testing purposes
  6. mark test cases as passed or failed and put the results into the test report

I guess this seems somewhat familiar to most, with a few modifications here and there. Where we are now is something more along these lines:
  1. discuss new feature or component to be developed with project manager, tech lead and developers
  2. draw an overview of the feature or component with all paths to other parts of the system and all connections to any actors, producers, consumers, etc that have a part in the relevant use-case(s)
  3. use the overview to identify risk areas, oracles, testability deficits, dependencies, etc together with developers and architects
  4. compile all new knowledge into a playbook for the feature or component, formulate charters to focus the test effort
  5. receive a non-final version of the software from the developers and
    5.1 during one or more recon sessions, explore the installability and operability of the software, find out how it works in practice
    5.2 during one or more analysis sessions, following the charters defined in 4, further explore the software, learning as much as possible about it - paying extra close attention to shaky/complex/unstable/risky areas that will need to be tested more carefully; also, look for possibilities to automate parts of the testing, e.g. to provide test data or parse log output
    5.3 during one or more coverage sessions, following the charters defined in 4, use all of our knowledge and skill to cover as many of the software's possible uses as possible to find as many bugs as we can
  6. iterate all of 5 until we have reached a version that we (testers, project managers, other stakeholders) are satisfied with
  7. compile the session reports into a complete test report for the work done during the sprint

Let's compare point 3 of our scripted methodology with point 5 of our current, exploratory, approach. These are the steps where we really put our little brain cells to use and channel all of our test expertise into finding bugs and ironing out the kinks in the software. And it is this part of our work that I claim is not all that different now. It has changed, however, and in a most crucial way. This is how:

With a scripted approach, the testing that we do is tainted by the fact that we ultimately need to compose scripted test cases, with easily re-testable action-result instructions. Naturally inquisitive as we may be, eager to explore and track down elusive bugs, we run the risk of being trapped in this mindset and restrict our testing too much.

If we enter the testing with an exploratory approach, the work will be more directed towards finding bugs rather than producing test cases. We then adapt the reporting to what we have done, rather than changing what we do to fit the reporting.

No Nonsense Reporting
We have struggled a bit with trying to get our test reports to reflect our actual work, as I have written about in the past. The old test report format, which was based on our scripted labor, had an understandable appeal in that it was easy to understand. We claimed to have executed 114 test cases, out of which 4 had failed. A good percentage, one might argue. I want to point out that, from a personal perspective, I find such measurements tremendously useless. Not only is there no record of what the test cases cover, there is also no indication as to how they have been executed. Test case instructions could have been misunderstood by the tester, or even incomplete to begin with. However, the reports were easy to understand at a glance, and that is one of the more important aspects for our readers, the stakeholders.

What we want to keep is the simplicity of the test report. We want the reader to be able to understand, within seconds, what the results of the tests are. At the end of a sprint, we testers usually have the best understanding of the state or quality of the software. We can tell you how complex the changes have been, how many bugs we have found, where the risks lie ... and it is that understanding that we need to distribute through the test report. Personally, I feel better doing so in other terms than in a nonsensical number of test cases.

The Loose End
Making a change, as we have, brings up a lot of questions. Particularly from those not directly involved, but who might still be paying for it in the end. Changes cost, but we make them because we hope to gain something more in the end. We have been asked things like "How is your session-based testing better than what you did before?". That is hard to measure. Do we compare the number of found bugs? The number of incidents? The perceived well-being of the testers? The amount of time spent testing instead of managing test case instructions?

Pending a more thorough investigation by KPI gurus, I'm inclined to say that the last couple of things listed above are the more important. A happy tester who can spend the better part of his or her time testing will be more familiar with the software, have a better understanding of how the software can be - and is - used, and will find more bugs.

The Upside
There have been a few other positive side effects of this transition. For instance, we have started tracking our time in a more detailed way. It now takes us seconds to figure out how much of our time during the sprint has been used for testing, or setting up environments, or reporting bugs. We could have done that without the adoption of session-based testing, but it would have been a much greater effort.
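
As an illustration of how little machinery that kind of bookkeeping needs, here is a sketch that sums time entries of the form "date;task;minutes". The file name and format are assumptions for the example, not our actual log.

# time_summary.py - hedged sketch: summarize time entries per task category.
# Assumes one semicolon-separated entry per line, e.g. "2010-08-03;testing;90".
from collections import defaultdict
from pathlib import Path

TIME_LOG = Path("time_entries.txt")   # hypothetical file

def minutes_per_task(path):
    totals = defaultdict(int)
    for line in path.read_text(encoding="utf-8").splitlines():
        if not line.strip():
            continue
        _date, task, minutes = line.split(";")
        totals[task.strip()] += int(minutes)
    return totals

if __name__ == "__main__":
    totals = minutes_per_task(TIME_LOG)
    grand_total = sum(totals.values()) or 1
    for task, minutes in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"{task:<25} {minutes:>5} min ({100 * minutes / grand_total:.0f}%)")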

The "playboards", our whiteboard-based playbook embryo, allows us to communicate with developers, architects and testers with greater ease because we have something to talk about. We can physically stand around a common visualization and point, talk, draw and erase.

Also, we spend more time doing what we know and love - testing.

Friday, 9 July 2010

Big Screen Testing

Vacation is approaching, and most of Sweden has entered the traditional July coma. This could mean a slower pace and longer lunches for those of us still at the office, or it could mean using the extra time to try out some exciting new things, or to finally get around to all the things we normally can't find the time for.


Today, Friday, I grabbed two testers and occupied the newly installed video conferencing room down the hall. It is equipped with two 60-inch LCD screens and not much more. Well, a web cam and a sound system, but that's not relevant. We brought two laptops and hooked them up to the screens and the network, and I drew a crude sketch of the system under test (an application that has received a few fixes for handling communication errors) and described the relevant scenarios to my colleagues.

We tossed up a handful of console windows for log monitoring on one screen, some tools for traffic generation on the other, and away we went!


We spent about 90 minutes in the session (and about the same amount of time setting everything up ...). I noticed roughly the same advantages as with the pair-wise testing that I talked about in an earlier post. It was pretty neat when we encountered a problem/oddity that we wanted to question a developer about: it awoke the interest of no fewer than three developers (did I mention the summery slow pace at work?), and they could all gather around our 2x60" screens with ease and discuss the issues. Other than that it was a rather inefficient experiment, taking into account the time it took to get everything set up ... and considering that we will probably not be able to make it a permanent installation, we will leave it at that - an experiment. Fun, though, and educational - I'm still very pro large screens, immersion and collaboration. A whole-hearted team effort is hard to beat.

Tuesday, 29 June 2010

Cinematography

Video - an excellent way of
  1. recording instructions
  2. explaining the outcome of a test case
  3. clarifying odd behaviour in the application while testing
  4. ... and more, I'm sure.
I've grown fond of CamStudio, mostly because it was among the first results when I googled for "screen recording software". Again, it's not about using the best tool, but about using the one that suits you best.

Every now and then I get questions along the lines of "we saw this weird thing live where this and this happened ... can we reproduce it in a test environment?". Instead of having the inquisitor perched upon my shoulder while I try to recreate the symptoms, I will say "give me a minute or two and I'll see what I can do". Then I record the window where the weird behaviour is supposed to occur, and try to reproduce it with the bits of information I have. When (sometimes if) I succeed, I save the clip, snip out the relevant bits and send it to the person asking "is this similar to what you were expecting?". It beats a screenshot and two paragraphs of text any day.

    Friday, 7 May 2010

    Visualizing and Reporting Sessions

    Visualization is key.

    I talked about visualizing the test planning recently, with the aid of charts and graphs etcetera. That is all well and good, but not worth a lot unless we, after testing has ended, can present the results in an easily understandable way.

    Something that is traditionally expected from a test report is the number of test cases: planned, executed and passed. This information is not horribly valuable, largely because a "test case" as such is not a globally recognized unit of measurement. A "test case" has no size, no value.

    To say that "I have executed one test case, and it passed" says nothing of the quality of the tested product. Nothing indicates that the test case even remotely covers any of the changed code.

    Additionally, an inherent problem with scripted test cases is that a single test seldom finds any new bugs. It may find a bug the first, and maybe even the second, time it's run, but by test run 98 it's a weak regression confidence-booster at best. If you want to find bugs in the software, scripted test cases will help you very little.

    Our approach to visualizing test results was to illustrate the changes and risks in the software divided into logical areas and sub-components. This is then mapped to test coverage or test effort during the sprint. Allow me to exemplify.

    Our team develops components C1, C2 and C3 as part of a larger project. New features are planned, implemented and tested during the course of a three-week sprint.

    The three components interact - with each other, and with other components in the system - and we have divided their functionality into eight logical areas; A1..A8.

    Of course, the words we use are more intuitive. We call the components by name, and the areas are of the type "login", "auditing", "robustness" or "registering a player". To keep it simple, I'll use the A/C abbreviations for now.

    At the beginning of the sprint, we bring out the "risk" matrix:


    For each change to a component, we'll make a mark in the row corresponding to the affected area. Some areas do not exist in some components.

    The matrix is kept up-to-date during the sprint, to account for bug fixes or planned changes that grow unexpectedly, and gives us a light-weight "heat map" which guides us in focusing our test sessions.

    In the matrix above, we have worked with areas 2, 3 and 5. We planned, for instance, two bug fixes in component 1 related to area 2.

    Just like in the "visual test plan", earlier, we try to cover development activities with test activities. This heat matrix is complemented by an identical matrix where we make a mark for each test session covering a certain area in a certain component.

    The result is a graph where the sum of the changes to each area is shown, and compared to the amount of testing made in the same area.



    This helps us focus the testing where it matters, and we have found it to reflect our "gut feeling" of the state of testing after a finished sprint pretty well.
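
    To make the bookkeeping behind that comparison concrete, here is a small sketch that tallies change marks and session marks per area and flags the gaps. The dictionaries are invented stand-ins for the whiteboard matrices described above.

# heatmap.py - hedged sketch: compare the number of changes per area with
# the number of test sessions spent there. The data is made up.
changes = {"A2": 3, "A3": 1, "A5": 2}    # area -> change marks across C1..C3
sessions = {"A2": 2, "A3": 1, "A5": 0}   # area -> sessions covering that area

for area in sorted(set(changes) | set(sessions)):
    c, s = changes.get(area, 0), sessions.get(area, 0)
    flag = "  <- needs more testing" if s < c else ""
    print(f"{area}: {c} change(s), {s} session(s){flag}")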

    In addition to the test coverage information, we also display the amount of time we have spent in test sessions, reporting bugs and setting up test environments, compared to the time spent out of session. The ambition is to identify the time thieves, and to maximize the time spent in test sessions. More on this in a later post!

    Friday, 23 April 2010

    Visual Test Planning

    I have a habit of drawing crude sketches of any system, feature, function, component or use case that falls to me to test. These sketches help visualize the flow of information in the system, and may reveal components that affect or are affected by the actors in the use case. Primarily, I use it for myself as a "map" of the system/object/area under test, after having shown it to the responsible developer and having all the question marks on it explained to me.

    Now, we try to get a head start on testing by involving ourselves in the development process as early as possible, by inviting ourselves to meetings and eavesdropping on developers and architects when they discuss requirements with each other or with the project manager. During an early presentation and discussion with an architect and the responsible developers, clarified on a whiteboard, it quickly became apparent that their mental image of the (in this case, new) feature came pretty close to what my crude sketch would have looked like right before I dove into testing. It seemed natural, then, to combine the two.

    This map - as a concept, not in its original incarnation - has evolved in our team over a couple of sprints, and is now a recurring presence on the team whiteboard. For lack of a better word, I'll call this map the "system overview", where the word "system" means the software we are interested in testing right now, be it an entire application, a new feature or something else.

    The system overview will then be the basis of what James Bach would call the "coverage heuristic", and the activity where we - the testers - start jotting down test targets. We identify all points in the system where information is entered, stored or manipulated. We track down all oracles - interfaces where we can access said information - via logs, in the database, through a web GUI, and so on. We flag all communication channels and ponder on how the system will and should behave if we "cut the cord" (robustness testing).

    The next step will be to produce charters that cover the points described above. Depending on the complexity and importance of any particular point, it may require more than one charter before we consider it sufficiently covered.


    Every pink note, above, represents a charter. The orange notes are bugs that need to be investigated (black dot) or fixed (blue dot).

    We then consider the number, and types, of test sessions needed to explore the charters in a satisfying way. If the charter is new and unknown to us, it will probably require at least one recon session just to familiarize ourselves with the code. If the charter seems to be complex enough, it will probably require several analysis sessions to cover it. We also consider the likelihood of finding bugs, and account for the time needed to revisit the charter.
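
    As a concrete (and entirely made-up) illustration, each charter could be noted down with the mix of sessions we expect it to need, something like this:

# charter_estimates.py - hedged sketch: note each charter with its expected
# session mix. The charters and numbers are invented examples.
charters = [
    # (charter, recon, analysis, coverage)
    ("explore the new login flow end to end",        1, 2, 1),
    ("regression: auditing of player registrations", 0, 1, 1),
]

for text, recon, analysis, coverage in charters:
    total = recon + analysis + coverage
    print(f"{total} session(s) planned - {text} "
          f"(recon {recon}, analysis {analysis}, coverage {coverage})")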

    As with all things, practice makes perfect - after a few sprints, the ability to reliably predict the number of sessions needed will grow. It is of course important to follow up on the estimates done during sprint planning both during the sprint - to be able to warn or re-plan if the estimates don't hold - and after the sprint - to improve the estimating skills for the next sprint.

    Friday, 9 April 2010

    Taste Testing

    The other day, I found this little tidbit which I thought fit nicely with the theme of this blog.


    The note says "I have taste tested and serviced your coffee machine" and is dated and signed by the technician.

    Testing coffee machines or testing software, it's all the same basic concept. In this case, a regression test of coffee machine functionality and a test of the end-user experience after maintenance work has been performed on the system. Using taste buds instead of log files is just a matter of selecting the tool best suited for the job.

    Wednesday, 31 March 2010

    Guided Recon Session

    We - the team - are currently working on a new application which is an extracted subset of an already existing system component. In other words, we - the testers - are already familiar with the domain, the use cases and the functionality on a conceptual level. It still needs to be tested, though.

    In its earliest incarnation, I deemed the application too unstable to be the target of any real test sessions. Thus, before unleashing the band of bug-thirsty testers at my team's disposal, I was fortunate enough to be able to sit with the developer for a couple of days: deploying new versions to the test environment, getting an introduction to an administrative interface, being able to ask questions and getting immediate feedback - a priceless method, if you can find the time and resources.

    After these most educational sittings, and after getting the application to install, start and perform according to specifications in most cases, I decided to spread my recently acquired knowledge to the other testers. Also, I feared that my close-knit relationship with the developer might have caused some of his love for his own software to rub off on me, making me more forgiving, or even oblivious, to some of the application's quirks and weirdnesses. Another two pairs of eyes would be worth their weight in gold, and more.

    This led to a guided recon session with two testers performing the actual work with me in the background guiding them on a very loose leash, taking notes and answering questions.


    Although not an approach I would recommend for everyday purposes - three testers on the same case is not always cost-efficient - the benefits of doing this during a one-hour recon session were obvious:
    •  we had six eyes on the same problem area and thus less probability of any anomaly slipping through unseen
    •  it quickly became apparent that our three different views on reality and what is considered "normal" (in the domain under test, specifically) enabled one of us to pick up on bugs and other strange behavior that the others did not notice
    •  the de-briefing became instant - if a tester came across something that might be worth investigating, it was immediately brought up and could be discussed (for, say, a minute), and with the added experience of the sherpa and the other tester we could quickly determine whether it was worth exploring right away, or should be recorded for further investigation during another session

    This approach was also very well suited for a recon session, where problems with setup, new behavior, etc could be solved immediately by the sherpa, allowing the tester to advance more easily.

    Tuesday, 30 March 2010

    A Taste for Test

    I am a tester.

    Employed by a medium-sized (150-or-so employees) consulting company in Stockholm, Sweden, I currently work with software testing at a large online gaming company. The product is transaction-intensive, geographically widespread, and used by players on one end and administrators on the other, both with high demands on availability and response times.

    The testing methodology we apply, and a personal favourite of mine, is largely exploratory and charter-based, drawing a lot of inspiration from The Church of Bach. At the time of writing, we have recently had a personal visit from James Bach who spent a day with us, discussing our implementation of exploratory testing, our use of charters, and delivering many helpful tips and examples. I am fortunate enough to spend my days at a very liberal workplace, where new thoughts and ideas are always welcome and the staff is well motivated and unafraid of change - if, of course, for the better.

    This culture gives us testers lots of room to explore not only software, but our own processes and strategies as well. As mentioned, we are currently in the midst of chiseling forth an exploratory way of working which means running into a lot of obstacles, but also overcoming said obstacles and spawning brilliant solutions, convenient shortcuts and new ways of collaborating. In the words of James Bach, "Are you writing about this?". Well, I am now.

    This blog is a window into my workday. If any of the above tickles your interest, or if you are curious about different approaches to exploratory testing or are looking for input on different kinds of test management, you will hopefully find some use for the things I write.

    My intention is not to preach anything as an "absolute truth", and I don't think anyone should, but to add a trickle of thoughts to the ocean that is software testing and to, hopefully, tickle your imagination and curiosity in new ways and, primarily for those less experienced in the field, stimulate your Taste for Test.