A Taste for Test
An agile, technical, context-driven tester trying to tickle your testing taste buds.
Wednesday, 12 March 2014
We've started a new initiative here in Stockholm - a discussion group for us "Exploratory Testers", providing a forum for the exchange of ideas and war stories.
The last edition featured a dear colleague of mine who told the story of Test Fests past. Two tales, in fact: the first test fest was held over a weekend and brought programmers and testers together around a stock exchange system; the second invited end users at the Swedish parliament.
An intriguing topic, it seemed, and the small-ish but brave congregation bombarded the raconteur with questions for well over an hour.
Most appeared to leave the forum invigorated and energized, with fresh inspiration for hosting test fests of their own.
Get in touch with us through our LinkedIn group if you would like to participate in the future, and visit the same group for discussions after the actual events. We expect the next SET forum to take place sometime this May.
Tuesday, 13 March 2012
No Computer?
After attending one of Gojko Adzic's inspirational seminars, The Chief sent an e-mail to all testers in the office. A key point at the seminar was that "testers shouldn't be testing - they should teach testing and help identify risks with the developers while designing, etc". The e-mail contained a single, thought-provoking question: "Could you do an entire sprint in your teams and maintain quality – without computers?"
I wrote the following reply, which relates to the role I'm currently in, but I'm curious about how this works in other places:
"First off, what do we do during a sprint?
· Release testing – checking core functionality while keeping our eyes open for regressions in a soon-to-be-released version
· Improving the automated test harness – JBehave tests, etc., to minimize the amount of future manual regression testing
· Testing new features – poking and prodding, making sure the developers have covered all angles
… I think that roughly covers it, though each bullet is of course more fine-grained.
I would claim that the release testing would be most difficult without a computer – at least right now. Depends on how fast we make progress on the second bullet.
I would also claim that improving the automated tests would be difficult without a computer. Yes, we could compose test descriptions on a piece of paper and have someone else implement them. Semantics, perhaps, but sure – the tester part of that task is doable without a computer (the rest, then, is pure coding).
More interesting is the last bullet, the cognitive action of "testing". The common approach, I suppose, is reactive – we read a requirement, wait for it to be implemented, try it out in a test environment and look for bugs. The alternative, then, would be to switch to a more proactive approach and catch the bugs at the design stage: analyzing requirements together with the developers, agreeing on an implementation and "pair programming", if you will, to be able to assist with unit and integration tests that ensure the code is well-written the first time 'round.
Consider the traditional test levels:
· Component tests,
· Component integration tests,
· System tests,
· System integration tests,
· Acceptance tests
They exist because we expect, traditionally, to find different kinds of bugs at different levels. If we switch our focus from system tests & system integration tests, where we currently work, to the design phase & preventative work … well, I'm afraid we are going to miss out on the bugs that appear when putting the system in a realistic user scenario, together with all the other bits, pieces and test data. This is even more true, I believe, for the parts that have an interface to an actual human user. I wouldn't feel comfortable, at least, letting them go without having seen (and tested) them with the eyes of a user.
This is the reason, I assume, that we have employed a session-based way of testing – to be able to analyze and critique the finished product and find the irregularities that show up when everything is connected.
I'm not sure how the "teach testing" part weighs in … yeah, we could design tests from our models and requirements and have someone else operate the test environment and execute the actual tests while we monitor and mentor (without, literally, using a computer), but I guess that's not really the point.
So … could I do an entire sprint in my team and maintain quality without a computer? Sure, but someone else would have to do parts of my job on a computer. So, in reality, no - not really."
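For the curious, the "automated test harness" bullet above refers to checks roughly along these lines. The sketch below is made up for this post – the story text, the OrderLimitSteps class and its trivial stand-in logic are not from any real harness, and the JBehave Embedder/JUnit wiring that actually runs the story is left out – but it shows the shape of the thing: a plain-text scenario backed by a small annotated steps class.

```java
// Hypothetical example only: story text, class and step names, and the
// stand-in logic are invented for illustration; the runner wiring is omitted.
//
// order_limits.story
//   Scenario: an order above the daily limit is rejected
//   Given a trader with a daily limit of 1000
//   When the trader places an order of 1500
//   Then the order is rejected

import org.jbehave.core.annotations.Given;
import org.jbehave.core.annotations.Then;
import org.jbehave.core.annotations.When;

import static org.junit.Assert.assertFalse;

public class OrderLimitSteps {

    private int dailyLimit;
    private boolean accepted;

    @Given("a trader with a daily limit of $limit")
    public void aTraderWithADailyLimitOf(int limit) {
        dailyLimit = limit;
    }

    @When("the trader places an order of $amount")
    public void theTraderPlacesAnOrderOf(int amount) {
        // A real step would drive the system under test; this stand-in
        // keeps the sketch self-contained.
        accepted = amount <= dailyLimit;
    }

    @Then("the order is rejected")
    public void theOrderIsRejected() {
        assertFalse("expected the order to be rejected", accepted);
    }
}
```

The appeal, in the context of The Chief's question, is that the plain-text scenario can be drafted far away from a keyboard; only the steps class beneath it requires one.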
Could you do your testing without a computer?
Friday, 4 November 2011
SBTM Case Study Talk
I recently gave a talk at a conference organized by ITQ here in Stockholm, Sweden. The topic for the conference was "Just Enough Testing", and how to apply testing strategically.
I provided the agile, exploratory, context-driven flavor with my case study of how I introduced SBTM at the system test level for a feature release of a trading application.
Whether you attended or not, if you want to have a glance at the slides, here they are.
A fun experience, and I think it was appreciated. The topic of exploratory and session-based testing is still unfamiliar to a lot of people, and it was refreshing to socialize, during the break after my talk, with all the attendees who "had heard of exploratory testing" and were "curious to learn more about it".
Thursday, 29 September 2011
Out in the Wild
Every now and then, they let me out of the office.
Last week, I accompanied boss and colleague Michael Albrecht to a company in the outskirts of Stockholm where we conducted a course-workshop hybrid on exploratory testing in general and session-based test management in particular.
The course-part was a distilled version of AddQ's xBTM training and will be followed up by two audits during the fall.
Just the other day, I took part in the aforementioned xBTM course, as part of my training to teach it. I'll be behind the podium the next time we run it - November 30.
A month before that (October 27), I'll warm up my public speaking voice at ITQ's annual conference. This year, the topic is "Just enough testing" and I'll spend 40 minutes on a case study in session-based test management.