More questions, anyone?
There is a lot of talk about testing: what is it? How should it be done, by whom, and when?
But how do you put a testing process on the map? How do you think about ‘quality’? What is it? How do we define it?
How do we measure it (if we can measure it at all)? What’s already there? What is missing?
During my first days at King’s studio Phoenix, I watched the teams through the eyes of an agile tester. How could I, as a testing professional, help not only to make but also to maintain a kick-ass game?
And in order to set up a plan, I needed input … preferably a lot of it. There are two great ways to get it: talk to the team on the one hand, and dive right in and do some testing on the other. Start with a blank canvas, listen to people, note down the questions they have, and add your own questions, remarks, ideas, worries, problems, …
There are a lot of good ideas and healthy input swirling around, but there is also a distinct lack of clarity on how to turn those ideas into a mission and a strategy so we can actually start executing them. All of it is valuable information to fill my canvas on three major topics:
Mission, strategy and planning
People say you need a good map to reach a difficult location. But a good map is useless if you don’t have a location to go to. Both need to be in place. And there are a lot of things to consider:
- Do we have a workable strategy / mission / test plan?
- How good is it?
- What is the update ratio of your test plan?
- Who tests what, where, and when?
- What do we measure and why?
Of course, the more questions and things you note down, the more additional elements come up:
Testability
This is both a goal and a means to an end. You cannot follow your strategy if you don’t make sure you can actually test your games, but your strategy should also define what testability is.
It also includes what we want from a build, from a release, and from what we are sending to other teams and people.
- How do we make sure a build is testable within the team and within the sprint?
- What sign-off criteria should we use before sending a build out to other testing parties?
- Do we have the right environment set up?
- What is ‘perceived quality’?
- Do we have all the tools, and are we using them to the best of their and our abilities?
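To make the sign-off question a bit more concrete, here is a minimal sketch of what an explicit build gate could look like. The criteria names and the checklist itself are invented for illustration; the real criteria are whatever the team agrees on:

```python
# Hypothetical sign-off gate before a build goes out to other testing parties.
# The three criteria below are invented examples, not an actual checklist.

SIGN_OFF_CRITERIA = {
    "smoke_tests_passed": "All smoke tests green on the latest build",
    "no_blocker_bugs": "No open blocker or crash bugs",
    "build_versioned": "Build is tagged with a unique, traceable version",
}

def can_send_build(build_status: dict) -> tuple:
    """Return (ok, failed_criteria) for a build status report."""
    failed = [name for name in SIGN_OFF_CRITERIA
              if not build_status.get(name, False)]
    return (not failed, failed)

# Example: a build that still has an open blocker bug is held back
status = {"smoke_tests_passed": True,
          "no_blocker_bugs": False,
          "build_versioned": True}
ok, failed = can_send_build(status)
```

The point is not the code but the explicitness: once the criteria are written down, “can this build go out?” stops being a gut-feeling discussion.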
Measuring and reporting
And finally there is the big question: “When is it good enough?” When can you stop testing? Ask any tester and he or she will say ‘never’. We don’t run out of test cases or ideas, we don’t guard the gates to a release, … For us there is always more to do, but where do we draw the line between ‘more to do’ and ‘delivering optimal value’?
- How do you report these things?
- What do other team members and stakeholders want to see reported?
- What / when is “Done”?
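As one way of making “Done” reportable rather than a feeling, a report could boil raw results down to a few numbers checked against an agreed bar. This is a sketch with invented numbers; the 95% threshold is an example, not a real team agreement:

```python
# Sketch of a minimal test report: pass rate compared against an agreed bar.
# The 0.95 threshold is an invented example; the real bar is a team decision.

def summarize(results: list, threshold: float = 0.95) -> dict:
    """Summarize raw pass/fail results into the numbers stakeholders ask for."""
    total = len(results)
    passed = sum(results)
    pass_rate = passed / total if total else 0.0
    return {
        "total": total,
        "passed": passed,
        "pass_rate": round(pass_rate, 3),
        "meets_bar": pass_rate >= threshold,
    }

report = summarize([True] * 19 + [False])  # 19 of 20 checks passed
```

A single pass rate is of course not a full answer to “When is it good enough?”, but agreeing on what gets reported is a first step toward agreeing on what “Done” means.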
You have to start somewhere in this mountain of feedback, so the next step is to identify some first-order measurements: things we can do quickly and easily, at low cost and with high gain and visibility for the team and for test improvement. Up next: how we define these steps and turn them into proper tasks.