I had a couple of reasons for shying away from using "CustomerTests".
Well, because we have Programmers and Customers on an XP Team, I like the terms Programmer Tests and Customer Tests. I like the terms so much that I published them on my XP Playing Cards -- which cost many thousands of dollars to publish. I've also had to explain to many folks why I no longer use the terms Unit and Acceptance tests. So I'm really not in favor of yet another name change, but I'm willing to see how your experiment goes. --JoshuaKerievsky
AgileMethods prescribe customer satisfaction as a condition for concluding programming episodes. ExtremeProgramming asks that the conditions be made concrete by enumerating specific test cases. Further, these cases are to be available to developers so that they can be sure to work on what is required and, by implication, not work on what is not required, even if it might become required in the future. These tests have gone by a variety of names.
My main concern is that these kinds of tests get written in the first place. In practice, I've found that customers on XP projects often don't make time to write these tests. So I view the naming of these tests from an ownership perspective. I like the name Customer tests because it highlights that Customers own them. It does not go far enough to say what these tests do, but I can live with that. Does that make sense? --JoshuaKerievsky
The traditional XP equation has ProgrammerTests + CustomerTests = all the tests you need. I don't buy this. For example, where does performance testing fit into this? I can think of lots of useful tests that don't fit into either of these categories.
Interaction Tests. The customer has specified tests for two features. Does the customer also have to specify tests for how these features interact?
Negative Tests. How well does the product handle invalid inputs? Again another area that testers tend to focus on because customers don't.
Playing Around. After each iteration, teams typically "play around" with the software. I see this as a type of testing. It's a good way to find things that may have been overlooked.
Robustness Testing. How does a system handle environmental errors, such as an overload or disk full?
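The negative-testing idea above can be made concrete with a small sketch. This is a hypothetical Python example (the function parse_age and its contract are invented for illustration, not taken from any project discussed here); it shows the kind of invalid-input checks that testers tend to probe beyond the customer's specified happy-path cases.

```python
def parse_age(text):
    """Parse a user-entered age; reject invalid input explicitly."""
    age = int(text)  # raises ValueError for non-numeric input
    if not 0 <= age <= 150:
        raise ValueError("age out of range: %d" % age)
    return age

# Positive case -- the sort of thing a CustomerTest might cover.
assert parse_age("42") == 42

# Negative cases -- invalid inputs that customers rarely enumerate.
for bad in ("", "abc", "-1", "999"):
    try:
        parse_age(bad)
    except ValueError:
        pass  # rejected, as desired
    else:
        raise AssertionError("accepted invalid input: %r" % bad)
```

The point is not the particular function but who initiates the test: here the programmer or tester, not the customer, decides that empty strings and out-of-range values are worth checking.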
CoachingTests reflect the desired behavior of the system and therefore are properly considered the responsibility of the CustomerRole. But many kinds of tests are useful because of risks and limitations of the technology itself: for these it makes sense for the programming team to take the initiative. They are the ones who best understand why such tests would be valuable. So I see tests besides UnitTests that programmers may want to do, and tests besides CoachingTests that people in a customer role may want to do.
In LessonsLearned (Chapter 3), we described five ways of categorizing testing techniques: by the people doing the testing, by the coverage goals, by the risks you are addressing, by the type of testing activity, and by the evaluation techniques used to determine the verdict (whether the test passed or failed). I am concerned whenever I hear that someone is trying to define a term from one category in terms from another category. This happens if we say that all AcceptanceTests are CustomerTests. There is an implicit claim here (Customers only have to be concerned with Acceptance Testing). I'd like to see such claims made explicitly, rather than buried in the assumptions of our terminology.
In the end, your first test strategy is wrong (LessonsLearned 285). That's why you iterate. I'm nervous about any methodology that thinks it can lay out a complete testing strategy from the start.
-- Bret Pettichord
Last edited November 13, 2002