Thursday, June 1, 2017

Prescriptive, descriptive and experimental architectures - what you want, what you have, what you like

In the previous posting, I described a frugal model for architectural descriptions; and ended with the question of which purposes it can be used for. This should have been followed by examples—but I decided that, before giving them, I want to explain a very important high-level view of architectural descriptions.

The software we have at a given point in its evolution is often not the software we want. That is true for external attributes, i.e. features and non-functional qualities, but it is also true for the many architectural aspects of a software or system. From this alone it follows that we have to deal with two different sets of architectural descriptions:
  • The prescriptive architecture is the set of rules that we want some software to follow. Typical prescriptive constraints are "the GUI models must not access the database directly", "all event handlers must be asynchronous", "there must be no cycles in the dependencies of modules of type X" and the like.
  • The descriptive architecture is the set of constraints that is actually adhered to in the software. These constraints are typically much more muddy than the prescriptive ones—"the generated GUI models do not access that database, but in the startup module, a model directly reads configuration from the database, and for some plugin modules, we actually do not know whether they access the database or not" might be an honest description of some set of dependencies.
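
To make this a little more concrete: here is a toy Python sketch (my own ad-hoc notation, with invented module names, standing in for whatever a real extraction tool would deliver) of how such a rule and such a "muddy" snapshot could be written down as data and compared:

    # An ad-hoc sketch (all module names are invented) of how a prescriptive rule
    # and a descriptive snapshot could be written down side by side as data.

    # Which "kind" of module each concrete module belongs to.
    KIND = {
        "OrderGuiModel":   "gui_model",
        "StartupGuiModel": "gui_model",
        "OrderService":    "service",
        "OrderTable":      "database",
        "ConfigTable":     "database",
    }

    # Prescriptive: "the GUI models must not access the database directly".
    FORBIDDEN_KIND_PAIRS = {("gui_model", "database")}

    # Descriptive: dependencies as an extraction tool might report them; note the
    # "muddy" reality that the startup model reads its configuration directly.
    EXTRACTED_DEPENDENCIES = [
        ("OrderGuiModel", "OrderService"),
        ("OrderService", "OrderTable"),
        ("StartupGuiModel", "ConfigTable"),    # diverges from the rule above
    ]

    for source, target in EXTRACTED_DEPENDENCIES:
        if (KIND[source], KIND[target]) in FORBIDDEN_KIND_PAIRS:
            print(f"Divergence: {source} -> {target}"
                  f" ({KIND[source]} accessing {KIND[target]} directly)")

In reality, the snapshot would of course come from an extraction tool and would be vastly larger, but the shape of the data stays the same.
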
In a well-functioning architectural process,
  • the relevant aspects of the prescriptive architecture are known and documented unambiguously;
  • also the corresponding aspects of the descriptive architecture are routinely extracted from the software and documented;
  • and both are compared to detect when the latter diverges from the former at some critical point.
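
For the cycle rule from above, such a routine comparison might look like the following rough sketch (again with invented module names and types; a real tool would obviously work on the full extracted graph):

    # An ad-hoc sketch (invented module names and types) of the "compare" step:
    # check the routinely extracted dependencies against the prescriptive rule
    # "there must be no cycles in the dependencies of modules of type X".

    MODULE_TYPE = {
        "Billing": "X",
        "Ordering": "X",
        "Shipping": "X",
        "Logging": "infrastructure",
    }

    # Descriptive: dependencies as an extraction tool might report them.
    EXTRACTED = [
        ("Billing", "Ordering"),
        ("Ordering", "Shipping"),
        ("Shipping", "Billing"),     # closes a cycle among the type-X modules
        ("Ordering", "Logging"),
    ]

    def find_cycle(nodes, edges):
        """Return one cycle in the directed graph as a list of nodes, or None."""
        graph = {n: [] for n in nodes}
        for a, b in edges:
            if a in graph and b in graph:
                graph[a].append(b)
        WHITE, GRAY, BLACK = 0, 1, 2       # unvisited / on current path / done
        color = {n: WHITE for n in nodes}
        path = []

        def visit(n):
            color[n] = GRAY
            path.append(n)
            for m in graph[n]:
                if color[m] == GRAY:                   # back edge: cycle found
                    return path[path.index(m):] + [m]
                if color[m] == WHITE:
                    cycle = visit(m)
                    if cycle:
                        return cycle
            path.pop()
            color[n] = BLACK
            return None

        for n in nodes:
            if color[n] == WHITE:
                cycle = visit(n)
                if cycle:
                    return cycle
        return None

    # Restrict the dependency graph to modules of type X and compare.
    x_modules = {m for m, t in MODULE_TYPE.items() if t == "X"}
    cycle = find_cycle(x_modules, EXTRACTED)
    if cycle:
        print("Divergence from the prescriptive architecture:", " -> ".join(cycle))
    else:
        print("No cycles among type-X modules.")

The check itself is cheap; the hard part is what to do when it reports a divergence.
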
Maybe it should be noted that there are two categories of reasons why the prescriptive and the descriptive architecture might differ: On the one hand, the prescriptive architecture might be stable, but for whatever reason, actual development does not follow it. On the other hand, the prescriptive architecture might change, because architecture-driving requirements change. This is only to highlight that it is not necessarily someone's "fault" if the two do not match.

Are we done with the architectural process?

No, we are not: When a critical difference emerges, something has to happen to align them. This might be a change to the prescriptive architecture, or a change to the actual software (which changes the descriptive architecture), or changes to both. In many cases, this alignment will be painful. After all, both the reasons why the prescriptive architecture is as it is, and the reasons why the actual software is as it is, are profoundly embedded in the requirements and processes and people building the software. In almost all cases I have seen or taken part in, planning for the reconciliation of "what we want" and "what we have" was hard and frustrating. Typically, the consequences of such a "re-architecting effort" were, and are, not at all clear with respect to two very important factors:
  • How much would the modification cost? The potentially recursive ripple effects that one change creates could lead to a nightmare of subsequent changes, and that prospect alone often considerably reduces the chances of getting the "funding" for such a change.
  • How much benefit would the modification yield? Aligning the software with some prescriptive architecture may sound great, but there are typically good (but maybe not well-understood) reasons why the software is as it is; and so "following the rules" might actually make the software worse. The same is also true in the opposite direction: Just changing the prescriptive architecture to "what we have" may result in a set of "rules" that is so large and chaotic that following them is practically impossible.
One important reason for these uncertainties is that we mostly approach such alignment tasks with only two tools:
  • "Dive in": This works by direct modification of the software (where the goal is to keep the prescriptive architecture) or the prescriptive documents (when the software, or some aspects of it, should be kept, but the prescriptive architecture should change). For changes in the software, this is typically (and hopefully) done in a feature branch to shield the productive software from modifications whose adverse and potentially fatal consequences are seen only later in the modification enterprise.
    For changes to the prescriptive architecture, the same should be true—which requires versioned handling of architectural documents, including "branching" and "merging". As I understand it, current tools and notations are not well prepared for this—I would shudder to find out what an automatic merge of two UML diagrams might produce. But maybe I am too faint-hearted here.
    In sum, regardless of which side has to change, "dive in" is an expensive undertaking.
  • "Panorama": This approach works using informal knowledge and notations that try to capture only the essential aspects and consequences of modification variants. From these, often shaky grounds, decisions on how to proceed are derived. Typically, these are very conservative, and often limit themselves to "pilot projects" or some "drill-downs" which are supposed to be fed into another loop of the modification process.
    In sum, "panorama" often requires many iterations to get a useful result, and is therefore also arbitrarily expensive.
Both the "dive in" and the "panorama" approaches are valuable tools. However, they seem to work only with changes of a limited size. For larger systems and changes, their "sort-of-quadratic effort" (practically try out a subset of all interaction pairs between any two components) limits their usefulness and possibilities.

Thus, there should be a third possibility, namely to simulate changes on an abstract representation of the software. I think we would like to do something like the following:
If we move all the controllers into a new package, then we could separately unit-test them. Let's do it ...
... Ah, but now we see that some controllers have dependency loops with their models, and others don't. We do not want a rule 'allow loops between models and controllers'; but cleaning up all these loops right now is not an option.
... But wait, it seems that the loops are mainly on trivial controllers that do not have service dependencies; whereas the controllers on top of services are typically cycle-clean ... so let's make two groups, the 'simple controllers' and the 'service controllers'.
Ok, we do it ...
... and now there are only two service controllers with loops. But could we carve out their 'looping' code into a simple controller—let's try it ...
... Ok, so we can agree on new rules for controllers: 'Simple controllers must not access services, but can have looping dependencies with their models' and 'Service controllers may access services, but must not have loops with their models'.
etc. etc.
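
Such a dialogue could, I imagine, be backed by something as simple as the following toy sketch (all controller, model and service names are invented; the pairs stand for dependencies that some tool has extracted), where every "let's try it" becomes a cheap query on the abstracted snapshot instead of a change to the real code:

    # A toy Python sketch of such an "experiment" on an abstracted dependency
    # snapshot; all controller, model and service names are invented.

    EXTRACTED = [
        ("AddressController", "AddressModel"),
        ("AddressModel", "AddressController"),     # loop, but no service involved
        ("InvoiceController", "InvoiceModel"),
        ("InvoiceController", "TaxService"),       # a "service controller"
        ("OrderController", "OrderModel"),
        ("OrderModel", "OrderController"),         # loop ...
        ("OrderController", "PricingService"),     # ... on a service controller!
    ]

    def is_controller(name): return name.endswith("Controller")
    def is_service(name):    return name.endswith("Service")
    def is_model(name):      return name.endswith("Model")

    edge_set = set(EXTRACTED)
    controllers = {n for edge in EXTRACTED for n in edge if is_controller(n)}

    # Step 1 of the experiment: split controllers by whether they use services.
    service_controllers = {c for c in controllers
                           if any(a == c and is_service(b) for a, b in EXTRACTED)}
    simple_controllers = controllers - service_controllers

    # Step 2: which controllers sit in a dependency loop with one of their models?
    looping = {a for a, b in EXTRACTED
               if is_controller(a) and is_model(b) and (b, a) in edge_set}

    print("Simple controllers with model loops (tolerated by the new rule):",
          sorted(looping & simple_controllers))
    print("Service controllers with model loops (candidates for carving out):",
          sorted(looping & service_controllers))
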
Such an exploration of possible changes and their consequences obviously needs to be done on a model of the architecture—a model that can be trusted, i.e., mimics the actual software; is simple, i.e. can be intuitively understood by at least the architects; can be efficiently handled by tools; and more besides (I'll have to come back to these properties later, won't I?).
To distinguish this model from the two introduced at the very beginning, I call it the (or rather, an) experimental architecture.

So, in a nice(r?) world, we end up with three architectural models of a software:
  • The prescriptive architecture—what we want (or believe that we want).
  • The descriptive architecture—what we have (for some interesting abstractions).
  • Experimental architectures—where we try what we might want and have.
But don't we have all these right now, you might ask? After all, the first is in some documents and the minds of all architects and developers, the second is "in the software", the third is on flipcharts during discussions about architecture. Of course, that's true: But I would argue that we need to be able to move information—and this will be loads of information; remember the "telephone directory property"!—between these models, and move it reliably and quickly. That's why a common notation and tooling for all three would appear worthwhile, at least to me.

To coin another term, I will call this the "escalator property" of languages for architectural descriptions: Namely that they can be "escalated" from the descriptive (what we have) to the prescriptive (what we should have) to the experimental (what we might have). And just as an escalator can also go downwards, it should be possible to map the results of an architectural experiment easily to a prescriptive architecture, i.e., to an enhanced set of rules for the system under consideration.

Examples are now really what is needed!
