Thursday, May 4, 2017

A role model: The (x)unit testing revolution

In order to overcome waterfall and moralistic approaches to software architecture documentation and process, one should take a look at other areas that succeeded in establishing new ways of doing things. One paradigmatic example is xunit testing.
Martin Fowler argues that what I describe here should not be called "unit testing", but "xunit testing". I follow his advice.
Two decades ago, testing had some problems that were, at a high level, similar to those of software architecture processes and documentation:
  1. Scope—what do you test, where do you stop?
  2. Documentation—how to make test cases permanently available, e.g. for regression testing?
  3. Planning—when and how long do you test?
  4. Responsibility—who does the testing?
  5. Maintenance—how do you upgrade existing artefacts, i.e. test cases with their input and output?
  6. Automation of testing—how can the repetitive parts of testing be done by software?
  7. And finally, the conceptual question—how much of quality assurance should a testing philosophy encompass?
Before the "xunit testing watershed", the answers were:
  1. The boundaries of what is tested, and where you stop, are ill-defined and arbitrary.
  2. Documentation is done in natural language or slightly formalized natural language. Formal approaches, with their specific "testing languages", are research topics and far too heavyweight for almost all projects.
  3. Planning is mostly done "at the end", i.e. based on a waterfall view of software development (which is wrong), or rather "between development and delivery". This carries an extreme risk that testing is squashed between development overruns and promised delivery dates.
  4. The responsibility for testing lies with "others"—not developers, but "testers". Therefore, there is no continuous, fine-grained process connecting development and testing, but a "break" with corresponding hurdles in communication and planning.
  5. Maintenance is a heroic or bureaucratic effort that either fails or is expensive to keep up.
  6. Automation is done by external "test harnesses", for example "automated GUI testing tools". This is quite expensive and produces brittle test code.
  7. The testing philosophy—which has to be implemented by a testing process—should encompass as many quality assurance aspects as possible (consider the alternative:
    Test lead: "Our testing process only deals with functional expectations, but not non-functional ones."
    Manager: "Do you mean we need yet another organization to do the non-functional testing? Do you know how much your non-productive department already costs us?!?")
The net effect was that testing was essentially either a "bureaucratic" or a "moral" enterprise: either a separate organization was set up to do the testing, or it was left to the conscience of individuals who, depending on which book they had read or which catastrophe had occurred the day before at a customer site, would consider testing either the most important and underrated activity in the world or an obstacle to developing and shipping at the last possible minute.

The xunit testing movement (which was initiated by Kent Beck with his SUnit tool for Smalltalk) gives, fundamentally, answers that are exactly the opposite of the above:
  1. What you test are small units that are easy to handle.
  2. The test cases are code (see the sketch below).
  3. and 4. Tests are run during development by the developers, with arbitrarily small granularity.
  5. Maintenance is just like code maintenance.
  6. Automation is inherent.
  7. The goal is to cover only some software quality features, mostly correctness aspects. Other test topics like usability or acceptance tests are not targeted by the method.
Because of the last item, xunit testing cannot replace manual and other testing techniques—which was, of course, an argument leveled against it under the "old philosophy". But this argument eventually vanished.
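
To make these answers concrete, here is a minimal JUnit sketch. The Money class is invented purely for illustration; any xunit framework (SUnit, JUnit, NUnit, ...) supports the same pattern. The test case is ordinary code that lives next to the production code, is maintained like code, and is executed automatically by the test runner:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // A tiny value class, invented here only so that there is something to test.
    class Money {
        private final int amount;
        private final String currency;

        Money(int amount, String currency) {
            this.amount = amount;
            this.currency = currency;
        }

        Money add(Money other) {
            // Simplified: currency mismatches are ignored in this sketch.
            return new Money(amount + other.amount, currency);
        }

        @Override
        public boolean equals(Object o) {
            if (!(o instanceof Money)) return false;
            Money m = (Money) o;
            return amount == m.amount && currency.equals(m.currency);
        }

        @Override
        public int hashCode() {
            return 31 * amount + currency.hashCode();
        }
    }

    // The test case: a small unit, checked by ordinary code, run by the JUnit runner.
    public class MoneyTest {
        @Test
        public void addingTwoAmountsYieldsTheirSum() {
            assertEquals(new Money(42, "EUR"),
                         new Money(12, "EUR").add(new Money(30, "EUR")));
        }
    }

Running such tests is a single command or a mouse click in the IDE, which is what makes arbitrarily fine-grained testing during development cheap.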

In sum: Xunit testing introduced a striking new alternative for a subset of the testing problem.

Software architecture in real projects is, it seems to me, in the same situation as testing was twenty years ago:
  1. The boundaries of what is to be described by a software architecture are ill-defined and arbitrary.
  2. The description is, as a rule, done with informal text and "diagrams"—which are interpreted informally even if they use e.g. UML notation. There are formal approaches, but they are far too heavyweight for almost all projects.
  3. The documentation is done "at the beginning", "before design sets in", i.e. with the implicit assumption of a waterfall process.
  4. The responsibility lies with "others"—not developers, but "architects"; with a resulting break in the process.
  5. Maintenance of documentation and architectural rules is a heroic or bureaucratic effort that either fails or is expensive to keep up.
  6. Automation?—there is no automation for architecture.
  7. Architecture, by definition, covers everything—sometimes limited to "everything that is important", but that does not really exclude anything.
It is obvious, from that list and from the xunit testing experience, that something can be done.
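
To hint at what "automation for architecture" could look like, here is a deliberately naive sketch in the xunit spirit: an architectural rule ("the UI layer must not depend on the persistence layer") written as an ordinary JUnit test. The source-tree layout and the com.example.ui / com.example.persistence packages are assumptions for illustration only; a serious implementation would analyze compiled code or use a dedicated dependency-checking tool rather than scanning import lines:

    import static org.junit.Assert.assertEquals;

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.Collections;
    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;

    import org.junit.Test;

    public class ArchitectureRulesTest {

        // The architectural rule, expressed as a test: no .java file in the UI layer
        // may import anything from the persistence layer. The test fails with a
        // list of offending files, on every build, without a separate process.
        @Test
        public void uiMustNotImportPersistence() throws IOException {
            try (Stream<Path> sources = Files.walk(Paths.get("src/main/java/com/example/ui"))) {
                List<String> offenders = sources
                    .filter(p -> p.toString().endsWith(".java"))
                    .filter(this::importsPersistence)
                    .map(Path::toString)
                    .collect(Collectors.toList());
                assertEquals("UI files importing persistence", Collections.emptyList(), offenders);
            }
        }

        private boolean importsPersistence(Path file) {
            try {
                return Files.readAllLines(file).stream()
                            .anyMatch(line -> line.startsWith("import com.example.persistence"));
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }
    }

The point is not the crude import scan, but the shape of the solution: the rule is code, it lives with the code, it is maintained like code, and it runs automatically on every build. These are exactly the properties that made xunit testing succeed.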
