Coaching Tests


This is what BrianMarick is today calling acceptance tests written before the code that implements them is started. They were once "guiding tests", but I got the feeling that name had negative connotations: guides boss you around, guides tell you what to look at, guides are too constraining. "Coach" already has positive connotations in agility-land. I hope it emphasizes that the tests are there to improve upon what you'd otherwise be doing, not to replace it. In particular, early on I gave the impression that I expected tests to replace close programmer-customer contact. All I want them to do is augment it.

Hmmm, what happened to calling them CustomerTests? --JoshuaKerievsky

I had a couple of reasons for shying away from using "CustomerTests".

  • I don't presume to tell the XP community what they really should mean when they use those words.

  • To many, CustomerTests have connotations different from CoachingTests: they don't come before the code, they're written by the customers as a way of reassuring themselves that they got what they wanted (rather than, as with UnitTests, being largely a design tool / "thinkertoy" - something I very much want from CoachingTests), and they aren't expected to give rapid feedback to programmers.

By using a different term, I can avoid tripping over connotations another term already has while exploring what I might mean. If it turns out that the two terms are close enough, one will surely vanish. Mine, I expect.
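
To make the distinction concrete, here is a minimal sketch of what a CoachingTest might look like as a JUnit test written before the code it exercises. The OrderPricer class, its method, and the discount rule are all invented for illustration; nothing here comes from a real project.

  import junit.framework.TestCase;

  // A coaching-test sketch: the test states the customer's rule before
  // OrderPricer exists.  All names and the discount rule are hypothetical.
  public class BulkDiscountTest extends TestCase {
      public void testTenOrMoreItemsGetFivePercentOff() {
          OrderPricer pricer = new OrderPricer();
          // Customer rule: ordering 10 or more of one item earns a 5% discount.
          assertEquals(95.00, pricer.total("widget", 10, 10.00), 0.001);
      }
  }

  // The implementation comes afterwards, only to make the test above pass.
  class OrderPricer {
      double total(String item, int quantity, double unitPrice) {
          double gross = quantity * unitPrice;
          return quantity >= 10 ? gross * 0.95 : gross;
      }
  }

In genuine test-first work the OrderPricer stub would not exist yet; it is included only so the sketch compiles and runs.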


Well, because we have Programmers and Customers on an XP Team, I like the terms Programmer Tests and Customer Tests. I like the terms so much that I published them on my XP Playing Cards -- which cost many thousands of dollars to publish. I've also had to explain to many folks why I no longer use the terms Unit and Acceptance tests. So I'm really not in favor of yet another name change, but I'm willing to see how your experiment goes. --JoshuaKerievsky


AgileMethods prescribe customer satisfaction as a condition for concluding programming episodes. ExtremeProgramming asks that these conditions be made concrete by enumerating specific test cases. Further, these cases are to be available to developers so that they can be sure to work on what is required and, by implication, not work on what is not required, even if it might be required in the future. These tests have gone by a variety of names.

  • functional tests -- testing whole functions, not units in isolation
  • acceptance tests -- asserting conditions of acceptance
  • customer tests -- produced for (and partially by) the customer
  • guiding tests -- establishing a goal during development
  • coaching tests -- a gentler version of guiding

The mere fact that these tests are hard to name shows that we expect more from them than has been the norm in traditional software development. This is good. -- WardCunningham


My main concern is that these kinds of tests get written in the first place. In practice, I've found that customers on XP projects often don't make time to write these tests. So I view the naming of these tests from an ownership perspective. I like the name Customer tests because it highlights that Customers own them. It does not go far enough to say what these tests do, but I can live with that. Does that make sense? --JoshuaKerievsky

See CustomerTestDrivenDevelopment.


The traditional XP equation has ProgrammerTests + CustomerTests = all the tests you need. I don't buy this. For example, where does performance testing fit into this? I can think of lots of useful tests that don't fit into either of these categories.

Interaction Tests. The customer has specified tests for two features. Does the customer also have to specify tests for how these features interact?

Negative Tests. How well does the product handle invalid inputs? This is another area that testers tend to focus on because customers don't.

Playing Around. After each iteration, teams typically "play around" with the software. I see this as a type of testing. It's a good way to find things that may have been overlooked.

Robustness Testing. How does a system handle environmental errors, such as an overload or disk full?

CoachingTests reflect the desired behavior of the system and therefore are properly considered the responsibility of the CustomerRole. But many kinds of tests are useful because of risks and limitations of the technology itself: for these it makes sense for the programming team to take the initiative, since they are the ones who best understand why such tests would be valuable. So I see tests besides UnitTests that programmers may want to do, and tests besides CoachingTests that people in a customer role may want to do.
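
For instance, here is a minimal sketch, in JUnit, of the kind of negative test a programming team might add on its own initiative, beyond anything a customer asked for. The PriceParser class and its contract are hypothetical.

  import junit.framework.TestCase;

  // A programmer-initiated negative test: nothing here comes from a customer
  // story.  The PriceParser class and its behavior are invented for illustration.
  public class PriceParserNegativeTest extends TestCase {
      public void testMalformedPriceIsRejectedRatherThanGuessedAt() {
          try {
              new PriceParser().parse("12,34.5x");
              fail("expected a malformed price to be rejected");
          } catch (NumberFormatException expected) {
              // the parser refuses bad input instead of silently misreading it
          }
      }
  }

  class PriceParser {
      double parse(String text) {
          return Double.parseDouble(text);  // throws NumberFormatException on bad input
      }
  }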

In LessonsLearned (Chapter 3), we described five ways of categorizing test techniques: by the people doing the testing, by the coverage goals, by the risks being addressed, by the type of testing activity, and by the evaluation techniques used to determine the verdict (whether the test passed or failed). I am concerned whenever I hear that someone is trying to define a term from one category in terms from another category. This happens if we say that all AcceptanceTests are CustomerTests. There is an implicit claim here (customers only have to be concerned with acceptance testing). I'd like to see such claims made explicitly, rather than buried in the assumptions of our terminology.

In the end, your first test strategy is wrong (LessonsLearned 285). That's why you iterate. I'm nervous about any methodology that thinks it can lay out a complete testing strategy from the start.

-- Bret Pettichord

 
