
Sunday, May 14, 2017

What made xunit testing successful?

The xunit revolution introduced
  • a very simple notation (actually, two notations);
  • a reasonable benefit for every developer;
  • and, later, a culture that extended "mere xunit testing" to various "development philosophies" like TDD, TDD with baby steps, or BDD.
The notations have a set of important properties:
  1. They define a small language of a few important concepts:
    • At the core, only test cases that run in a predefined test harness framework; and—almost unrelated to that framework—assertions;
    • for scalability, test fixtures and the setup and teardown of test cases and fixtures.
  2. The building blocks are very small: A single assertion is atomic; a single test case can also be made atomic (i.e. test just a very tiny segment of the intended behavior).
  3. There is a simple tool that efficiently does the mundane job of collecting and executing all notated items (test fixtures and test cases).
  4. The tool can be easily run by any developer at any time.
  5. The tool can also be easily integrated into existing automated build processes.
  6. And, finally, the automatic execution can feed back drastically into the process: Tests that do not pass halt the delivery process (by resulting in a "red" build).
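The whole small language fits on one screen. Here is a minimal sketch using Python's unittest, one member of the xunit family; the stack example and its test names are of course invented for illustration:

```python
import unittest

class StackFixture(unittest.TestCase):
    """A test fixture: shared setup/teardown for a group of test cases."""

    def setUp(self):
        # Runs before every test case; builds a fresh object under test.
        self.stack = []

    def tearDown(self):
        # Runs after every test case; included here only for illustration.
        self.stack = None

    def test_push_then_pop_returns_last_item(self):
        # An atomic test case: one tiny segment of the intended behavior.
        self.stack.append(42)
        self.assertEqual(self.stack.pop(), 42)  # a single, atomic assertion

    def test_new_stack_is_empty(self):
        self.assertEqual(len(self.stack), 0)

if __name__ == "__main__":
    # The "simple tool": collects and executes all notated test cases.
    unittest.main(exit=False, verbosity=2)
```

The runner (unittest.main here, or any IDE or build-server integration) does the mundane collection and execution, which is exactly what makes running the tests cheap enough to do at any time.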
The direct benefit for the developer is not that more quality assurance can be done during code development—even though later "xunit philosophies" are, one could argue, roughly founded on this belief (and delivered arguably better processes for direct support of development). On the contrary, more quality assurance (in the sense of "trying to find destructive input to check a program against the limits of a specification") during development would actually be an annoyance, because it disrupts the developer's constructive thought processes necessary for constructing code.

Rather, xunit testing helps to solve the problem of "later regression checks" occurring after code changes, when it is necessary to remember and run the simple as well as the tricky test cases that actually allow a developer (or a team) to hold the belief that the modified piece of code still behaves sanely.

The important experience is that this "later" is not only "much later", when a feature upgrade or bug fix requires changing the code; it can come right after the next (well or not so well thought out) modification during the initial development of a piece of code. That really helps developers.

Finally, xunit testing is open in multiple ways—how many tests one writes, how much behavior each one ascertains, when they are run in the development cycle and when in the build cycle, and, last but not least, how the writing and execution of xunit tests feed back into design and code development. Because none of this is enforced by the tooling in any way, a host of "philosophies" could emerge on top of xunit testing, leading to a lively and sometimes heated debate with a huge effect on the wide understanding and the "marketing" of xunit testing.

Great.

Could the same be accomplished for some parts of "architecting"?

We should try, at least, shouldn't we?

So, you and I and everyone should start to invent notations and tools for "architecting" along the lines of what made unit testing successful. I'll leave your ideas to you; in the next posting, I'll start to present mine.

Thursday, May 4, 2017

A role model: The (x)unit testing revolution

In order to overcome waterfall and moralistic approaches to software architecture documentation and process, one should take a look at other areas that succeeded in establishing new ways of doing things. One paradigmatic example is xunit testing.
Martin Fowler argues that what I describe here should not be called "unit testing", but "xunit testing". I follow his advice.
Two decades ago, testing had some problems that were, on a high level, similar to those of software architecture processes and documentation:
  1. Scope—what do you test, where do you stop?
  2. Documentation—how to make test cases permanently available, e.g. for regression testing?
  3. Planning—when and how long do you test?
  4. Responsibility—who does the testing?
  5. Maintenance—how to upgrade existing artefacts, i.e. test cases with their input and output?
  6. Automation of testing—how can the repetitive parts of testing be done by software?
  7. And finally, the conceptual question—how much of quality assurance should a testing philosophy encompass?
Before the "xunit testing watershed", the answers were:
  1. The boundaries of what is tested, and where you stop, are ill-defined and arbitrary.
  2. Documentation is done by natural language or slightly formalized natural language. Formal approaches, using their specific "testing languages", are research topics and typically way overboard for almost all projects.
  3. Planning is mostly done "at the end", i.e. based on a waterfall view of software development (which is wrong), or rather, "between development and delivery". This carries an extreme risk that testing is squashed between development overruns and promised delivery dates.
  4. The responsibility for testing lies with "others"—not developers, but "testers". Therefore, there is no continuous process, with consequent small granularity, between development and testing, but a "break" with corresponding hurdles in communication and planning.
  5. Maintenance is a heroic or bureaucratic effort that either fails or is expensive to keep up.
  6. Automation is done by external "test harnesses", for example "automated GUI testing tools". This is quite expensive and produces brittle test code.
  7. The testing philosophy—which has to be implemented by a testing process—should encompass as many quality assurance aspects as possible (consider the alternative:
    Test lead: "Our testing process only deals with functional expectations, but not non-functional ones."
    Manager: "Do you mean we need yet another organization to do the non-functional testing? Do you know how much your non-productive department already costs us?!?")
The net effect was that testing was essentially either a "bureaucratic" or a "moral" enterprise: Either one had set up a separate organization to do the testing; or it was left to the morals of individual people who—depending on which book they had read, or which catastrophe had occurred the day before at a customer—would find testing either the most important and underrated activity in the world, or an obstacle to development and shipping at the last possible minute.

The xunit testing movement (which was initiated by Kent Beck with his SUnit tool for Smalltalk) gives, fundamentally, answers that are exactly the opposite of the above:
  1. What you test are small units that are easy to handle.
  2. The test cases are code.
  3. and 4. Tests are run during development by the developers, with an arbitrarily small granularity.
  5. Maintenance is just like code maintenance.
  6. Automation is inherent.
  7. The goal is to cover only some software quality features, mostly correctness aspects. Other test topics like usability or acceptance tests are not targeted by the method.
Because of the last item, xunit testing cannot replace manual and other testing techniques—which was, of course, in the "old philosophy", an argument leveled against it. But over time, this argument eventually vanished.

In sum: Xunit testing introduced a striking new alternative on a subset of the testing problem.

Software architecture in real projects is, it seems to me, in the same situation as testing was twenty years ago:
  1. The boundaries of what is to be described by a software architecture are ill-defined and arbitrary.
  2. The description is, as a rule, done by informal text and "diagrams"—which are interpreted informally even if they use e.g. UML notation. There are formal approaches, but they are way overboard for almost all projects.
  3. The documentation is done "at the beginning", "before design sets in", i.e. with the implicit assumption of a waterfall process.
  4. The responsibility lies with "others"—not developers, but "architects"; with a resulting break in the process.
  5. Maintenance of documentation and architectural rules is a heroic or bureaucratic effort that either fails or is expensive to keep up.
  6. Automation?—there is no automation for architecture.
  7. Architecture, by definition, covers everything—sometimes limited to "everything that is important", but that does not really exclude anything.
It is obvious, from that list and from the xunit testing experience, that something can be done.

Two common, and defective, approaches in software architecture

Let me deviate—or actually, approach my target from a different angle—for two more postings before presenting one such "mundane notation" for software architecture documentation (which I have promised in my last posting).

What are the main problems with current (explicit) approaches to software architecture? Very briefly, they might be dubbed
  • the "waterfall approach"; and
  • the "moralistic approach".
The first one, "waterfall thinking", is the old idea that one "starts" with deciding on basic and important architectural aspects, and "then" goes on to design and write software accordingly. Some parts of software engineering might follow this pattern, but there are at least two major scenarios—or maybe forces—that lead to a different process:
  • One is the fact that in almost all cases, a huge software is already in place; and the architectural problem is to modify this software "from inside out". This can be, and often is, done by small exploratory "experiments" in the software that prove or disprove whether some concept might be worthwhile. And in many cases, this is done implicitly and "under the hood", when some developer starts, on his or her own initiative, to introduce the first RESTful service, a "small NoSQL database on the side", or reuses some executable for production purposes that originally started out as a tool for developers only.
  • The second scenario is brought upon us by the typically vast capabilities of commercial and open-source frameworks or tools. When you buy SQL Server instead of using Postgres (maybe for external reasons, like having a partner status with Microsoft); or when you take Angular instead of some lesser-known JS framework because some graphics library ties in better with it, you also "buy into" a huge feature set that comes with that tool. Your architectural possibilities are suddenly, and at the same time, extended by the tool's many for-free features, and also limited by its grand architectural and technological lines. And like your software, such a toolset is often "just there", without any possibility or even wish to ponder any underlying architectural requirements and decisions.
It is by no means clear that bottom-up approaches, as done by the hypothetical developer above, aren't on par or even better than processes that proceed from "grand architectural analyses" "down" to design and implementation. And, in real life, such bottom-up situations are unavoidable anyway. Thus, "waterfall thinking", while certainly an option, should not be the only and preconceived approach to architectural decisions.

The second problem is the "moralistic approach" to architecture (and design). Architecture and design decisions produce rules: "In our system, code on the GUI layer must access the database via an intermediate DAO layer"—or the other way round; "plugin registration happens explicitly by adding an entry to the configuration, and not implicitly by merely placing the plugin at some location"—or not. And somehow, such rules must be enforced. Most of the time, there are only two enforcement regimes in place:
  • One is the "build-and-install-regime": The build and, later, installation processes of a software require that certain rules are followed. These rules are often implicit, but at least it is hard to violate them. It is also often very hard to change them.
  • The other regime is the "moralistic one"—"you should", or "you must": Without support from tools, it is assumed that developers have the capabilities to follow the current rules. When, later, some disaster happens, one can more or less easily find a person who is the "culprit": "You shouldn't have added that trigger that implicitly calls itself and then fills up the audit table!", "You should not have hardcoded that connection string, but taken it from that (faraway) configuration file to keep database accesses consistent!" But of course, people will only ever follow the rules imperfectly—and this assumes that the rules are explicitly documented and consistent to begin with. And also of course, culprit-finding does not solve problems well (it might, in some cases, prevent others from violating the same rules in the near future). And finally, we are all versed in putting the fault on the shoulders of the ultimate culprit: "This has grown historically."
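There is, at least in principle, a third regime between build constraints and moral appeals: small, mundane tooling. As a hedged sketch only—the layer names, the directory layout, and the script itself are hypothetical, not an existing tool—a rule like "GUI code must not access the database layer directly" can be checked mechanically:

```python
import re
from pathlib import Path

# Hypothetical layering rule: files under gui/ must not import the db layer
# directly; they have to go through the dao/ layer instead.
FORBIDDEN = re.compile(r"^\s*(import|from)\s+db\b", re.MULTILINE)

def check_layering(root: Path) -> list[str]:
    """Return a violation message for every GUI file importing 'db' directly."""
    violations = []
    for source in root.glob("gui/**/*.py"):
        if FORBIDDEN.search(source.read_text()):
            violations.append(f"{source}: GUI code imports the db layer directly")
    return violations

if __name__ == "__main__":
    found = check_layering(Path("."))
    print("\n".join(found) or "layering rule holds")
    # In a build pipeline, one would exit nonzero on violations,
    # turning the build "red" just like a failing test.
```

Run in the automated build, such a check moves the rule out of the "you should" regime and into the same feedback loop that made xunit testing stick.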
Both the "waterfall approach" and the "moralistic approach" are wrong in their fundamentals. But just by saying so, there is no positive alternative in place that replaces them. And, to tread somewhat more carefully, one should certainly not throw out top-down approaches (of which "waterfall" is a special case) and rules-of-thumb (an essentially "human-compatible" method for solving problems, just like "moral") from the portfolio of process building blocks for "doing software architecture": These are worthwhile at the, well, right places.

But some alternative view on "doing it" should be possible.

Wednesday, May 3, 2017

Purposes of architectural documentation disentangled

I have been a little unfair in my last posting: The eight pages on UML 2.0 in Gorton's "Essential Software Architecture" are more than a mere advertisement for that (then) new UML version—they do actually contain some core advice about how to document architectural aspects of a program. I'll try to extract a compact view of what architecture documentation is, in Gorton's and, I think, the mainstream architecture textbooks' view, from these pages and the case study in chapter 7.

First of all, architecture documentation is a collection of artifacts for human beings only. This is in contrast to code, which is targeted both at the "machine" and at human readers. In the background, there looms the idea of model-driven architecture, where an architecture model is used to create code—essentially, a compiler for a new language on some "higher" level than standard programming languages. However, like the book, I will disregard this aspect right now and return to it somewhat later.

The clear target of providing information to humans has led most of us to the use of informal diagrams and standard prose to describe the architectural aspects of a software—"simple box-and-arrow diagrams", as Gorton calls them. He claims that there is "an appropriate diagram key to give a clear meaning to the notation used" in his examples, but most diagrams in his chapters 1 to 5 don't have such a key, and in any case, most people drawing such diagrams don't include one. The problem with this is that any plan to derive hard facts from such diagrams is then doomed.

Now, one purpose of architecture documentation is to give someone a "feeling of the interplay of things", and for this purpose, informal diagrams with textual or oral explanations are perfectly fine and, I am quite sure, even preferable: They appeal to our intuitive approach to most problems, which includes working with somewhat unclear terms and their relations in order to limit thinking about tricky consequences, so that our mind is free to "suck in the universe" of the problem area at hand.

Maybe it should be noted that formal clarity, precise meaning and even "simple" (mathematical) consistency entail, in almost all cases, "hard thought work", as the history of mathematics has shown:
  • Geometry in the plane seems like an easy subject, until you start trying to understand its foundations and algorithms from Euclid's axioms and definitions, well over 2300 years old: There is nothing easy about concepts like parallels or ratios of line segment lengths! And later formalizations, mainly from the 1800s onwards, are even more intricate.
  • The other, apparently so "simple" basis of mathematics, namely the natural numbers, also lost its simplicity in ancient times, with the Greeks' prime number theory. It was and is by no means obvious what can emerge from simple addition and multiplication, let alone from the algebraic structures and formalizations extracted in the 19th century, leading to Gödel's mind-bending encodings and Turing's work.
Let me state this in my "Axiom 1": Mathematics, by and large, is not what we want in software documentation (and that from me, who majored in theoretical computer science ...).

Still, it seems we all want something more than the informal box-and-arrow-diagrams.

Gorton, like many others, proposes the use of UML. I cannot help the feeling that he is not really happy about it. The summary of chapter 6 has the following two sentences:
    I’m a bit of a supporter of using UML-based notations and tools for producing architecture documentation. The UML, especially with version 2.0, makes it pretty straightforward to document various structural and behavioral views of a design.
"A bit of a supporter", "pretty straightforward": This does not really sound like wholehearted endorsement.

So, what is the problem?

The problem is, in my humble opinion, that there is no clear picture of what a notation for architectural documentation should do. The described use-cases typically oscillate between a "better notation" for those informal, easily comprehensible overviews over some aspects of a software system, and a more formal notation that can help derive hard knowledge about a system, with that implied goal of "generating code" in model-driven approaches.

I am, after many years in the field, now certain that we have to structure the use cases for architectural documentation in a threefold classification, with different notations for each area:
  1. Informal documentation, from which humans can learn easily and intuitively gather a common understanding and a useful overview about some aspects of the system. In the best case, such a documentation is part of a common culture about "how we name and see things." However, this documentation is not intended to derive any hard facts: Everything shown can be disputed and discussed and viewed differently, and the notation can be extended at will if it helps with that intuitive understanding. All must agree that formal arguments based on such documentation are futile and hence must be avoided.
  2. Formally sound and precise documentation that can be used to derive invariants and definitive properties of the documented system. If such documentation is used as the basis for a tool-supported model-driven approach, then there is no difference between a descriptive and a prescriptive architectural documentation for the aspects covered by the process. However, such an approach is very expensive in more than one respect:
    • First, especially without full tool support, keeping such documentation in line with the system is a lot of work, as even tiny changes on one or both sides require precise updates.
    • Second, as software can exhibit very complex behavior, the notation must be capable of describing many and, usually, deep concepts, which makes it hard and "mathematical" to understand and even harder to write. Such documentation therefore blatantly contradicts "Axiom 1".
    • Last, on a conceptual level, it is not really clear that such a documentation is actually "documentation" in the sense of "humanly accessible information relevant for many decisions in the software life-cycle". Rather, it might be more of a formal specification or even—when used in a model-driven process with code generation—part of the implementation, albeit (maybe) on some higher or "more compact" level than standard programming languages.
Thus, rich informal and deep formal notations are not sufficient for documenting and arguing about architectural aspects of a software.
  3. Therefore, we need notations that are somewhere in-between: Not informal, so that they can be used to derive and ensure hard facts; but equally, easy to use, so that they can be read and written by the average software engineer under average project circumstances. It should be obvious that this type of notation cannot be very rich and also not very abstract: Only then can it, on the one hand, avoid requiring an extensive semantics for formal derivations and, on the other hand, avoid being too esoteric to be used for understandable documents. In other words, it must be a quite mundane notation. I'll show my preferred notation for this, and its uses, in later postings—just in case you think that this looks a little like the search for the holy grail.
UML, incidentally and unfortunately, does not work really well for any of these purposes if its complex semantics is taken seriously:
  1. For an informal notation, it carries too heavy a backpack of formal semantics, which no one wants to remember when drawing informative diagrams in a running text (as, e.g., in the case study in Gorton's book).
  2. For a formal notation, it is too indirect: One needs to map UML propositions back to the underlying semantic model (like Petri nets or state machines), and only then can one formally draw conclusions; as far as I can tell, the number of publications that use UML as a formal base has declined quite a bit over recent years.
  3. Finally, as a simple but yet strict notation, UML is much too baroque, because it was lobbied to include every useful diagram and icon. This large notational size would recommend it for many different informal diagrams—if it weren't for that formal semantics ballast ...
But even if you think that UML does work well (or well enough) for one area, there is the danger of misinterpreting UML diagrams: Is a diagram which your team uses as a basis for a decision a "type 1." diagram? Then it conveys informal concepts, but does not limit the decision strictly or formally. A "type 2." or "type 3." diagram, on the other hand, would narrowly limit some choices you can make—and definitely require a formally (for "type 2.") or at least collectively (for "type 3.") approved update of the diagram for any change in the software or the architecture. But most diagrams do not spell out their "conformance level" explicitly.

Nonetheless, our analysts and some of our developers and architects (including me) are happy enough to use UML as a pool of symbols for sketching explanatory diagrams that help us to keep our complex machinery at least somewhat documented. So yes, I am, and we are also "a bit of a supporter of using UML-based notations and tools", as Ian Gorton puts it.

But now, I feel, I am starting to owe you an explanation of how to do architectural documentation better. The next posting ... well, after I wrote it, it turned out to still contain some general observations about software architecture and how we deal with it.

Tuesday, April 25, 2017

How I did not (yet) learn how to write architectural documentation


As I promised in the last posting, here are some thoughts on architectural documentation. They crept out of my mind while I read chapters 6 and 7 of Ian Gorton's "Essential Software Architecture". For such a central topic as the documentation of an architecture, the chapters are astonishingly short. To be fair, the book is not only about the documentation of software architecture, but about all of architecture. Yet, we would expect a solid foundation of how to arrive at a good documentation. Do we get this?

Essentially, chapter 6 of the book is structured like this:
  • One and a half pages of introduction
  • One page about what to document
  • Eight pages of introduction to UML 2.0
  • Half a page on having an "Architecture Documentation Template", and an extremely high-level example of one
  • Finally, a one-page summary
As I and all my architects and all our analysts know UML 2.0, what remains are four pages of information.

Let's try to dissect the information in them:
  1. The first two paragraphs tell the true story that both feeble and massive documentation can be, and "sometimes"(?) is, "out-of-date, inappropriate and not very useful."
  2. "But there are many good reasons why we want to document our architectures, for example" so that others, "most commonly members of the design and development team" "can understand and evaluate the design"; "we [whoever we are, in contrast to the development team] can understand the design" later; others can "learn from the architecture by digesting the thinking behind the design" [I do not understand this; isn't that "evaluating the design"?]; "we can do analysis on the design, perhaps to assess its likely performance, or to generate standard metrics like coupling and cohesion."
  3. But it's hard to document, and the predominant tools are Word, and Visio and PowerPoint ("along with their non-Microsoft equivalents"), and the notation is "informal 'block and arrow' diagrams". "We should be able to do better."
  4. The second section, "What to Document", starts with an example that shows, in my opinion, a huge misunderstanding of what software architecture is:
    A two-tier client server application with complex business logic may actually be quite simple architecturally. It might require no more than an overall “marketecture” diagram describing the main components, and perhaps a structural view of the major components (maybe it uses a model-view-controller architecture) and a description of the database schema, no doubt generated automatically by database tools. This level of documentation is quick to produce and routine to describe.
  5. I'll show, a few postings down my ramblings, that all the standard books and texts on software architecture notation were, up to now, not able to provide and explain a notation that can even capture this "simple architecture". At the moment, let's just take it that some high-level diagrams seem to be the goal of an architectural documentation. Why? Because we are told so?
  6. However, the section tries to sum up the reasons for more extensive documentation: Complexity ["in the eye of the beholder"? measured somehow? agreed by a team based on gut feeling?]; longevity [again: how evaluated?]; needs of stakeholders. Yet this still does not tell us "what" to document; only that it must be "more".
  7. Final sentence: "It’s therefore important to think carefully about what documentation is going to be most useful within the project context". I thought and think carefully, and three of us discuss—sometimes more heated than carefully—what documentation is going to be the most useful. We are not very successful at this, I have to admit.
  8. The short template after the UML 2.0 section gets its meat from the case study in chapter 7—an interesting approach dating back to the Babylonians (teach by example, not by abstraction) which, for me, does not really work, because the key point of learning, namely how to apply it in one's own environment, is lacking.
  9. The summary again tries to morally uplift us:
    Generating architecture documentation is nearly always a good idea. The trick is to spend just enough effort to produce only documentation that will be useful for the project’s various stakeholders. This takes some upfront planning and thinking. Once a documentation plan is established, team members should commit to keeping the documentation reasonably current, accurate and accessible.
    I remain sceptical and am unable to heed this advice in our team, I have to admit.
  10. Finally, there is a summary with two main ideas: First, and I agree wholeheartedly for—right now—lack of something better, it favors UML 2.0 as the notation to use, for sketches, for "closely" modelling components and objects, and for "exact" modelling for model-driven development.
  11. The other idea in the summary is to have a repository for documentation, with some sort of "automatic documentation production", for whatever purpose and with an unknown functionality.
Eleven items of information about how to document a software architecture. I do not want to dispute most of them. I have indicated above where I think the text is wrong (and I hope to explain this in more detail in later postings). But here, I want to find out how to proceed with our documentation: And for this, I have not gotten many clues. Why? See my next posting.

Sunday, April 23, 2017

Is there something wrong with software architecture - or with us?

I am a software architect (one of a few) for a 20-million-LOC business software with currently a few thousand installations, developed and delivered in a Scrum process with monthly iterations by a development team of more than 40 developers, analysts, and testers. The software is in a reasonable state—we have no problems delivering new features, we have no problems integrating new team members, and we envision major overhauls of the software (ranging from virtualization to running the hitherto GUI-driven application server as a SaaS system and integrating more DWH functionality into a database designed for a pure—and partly time-critical—OLTP load).

We do not have a valid software architecture documentation.

We have, I think, a solid software architecture: We are able to immerse new team members into our culture in a way that keeps our main architectural constraints in place—layers, tiers, synchronous and async calls, normalized and denormalized data, communication with central systems about licences and failures. The performance, the usability, the stability of the whole system has never been a fundamental problem: Standard design and implementation activities have kept everything in place more or less nicely: Adding covering indexes to critical database tables, rewriting small as well as large chunks of business objects—and their underlying database structures—or swapping out complete workflows with their GUIs, even when they were referenced from other parts of the system, was "work to be done", never a "crisis".

But we do not have a valid software architecture documentation.

And we are not happy about it, because we would like to base some upcoming decisions on both
  • what actually goes on in our software (might it be that the application server actually calls, at some obscure place, some client-side software directly? or can we rely on the fact that this does not happen?)—what is commonly called the descriptive architecture
  • and on our rules of our software architecture (did we actually agree, 10 years ago, that we will forever only access a single database behind the application server?)—what is sometimes called the prescriptive architecture.
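The first kind of question (the descriptive one) is, in principle, answerable by mundane tooling rather than by documentation alone. As a hedged sketch—the module names and the extraction of the "observed" set (say, from build output or reference analysis) are invented assumptions, not our real system—one could diff observed dependencies against the agreed ones:

```python
# Hypothetical data: dependencies observed in the code base (descriptive
# architecture) versus dependencies the team has agreed on (prescriptive
# architecture). All module names are made up for illustration.
OBSERVED = {
    ("app_server", "database"),
    ("app_server", "client_gui"),   # the "obscure place" one would like to find
    ("client_gui", "app_server"),
}

ALLOWED = {
    ("client_gui", "app_server"),
    ("app_server", "database"),
}

def undocumented_dependencies(observed, allowed):
    """Descriptive facts that violate the prescriptive architecture."""
    return sorted(observed - allowed)

for source, target in undocumented_dependencies(OBSERVED, ALLOWED):
    print(f"rule violation: {source} -> {target}")
```

The point of the sketch is the separation: the observed set answers "what actually goes on", the allowed set records "what we agreed on", and the difference is exactly the list of things to discuss.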
At one time, we had a project Wiki. It worked somewhat nicely for maybe 3 or 4 years of our 12-year product history; then one of the main contributors left the company, and the other one—me—got more productive, i.e., wrote more code. The Wiki, which was designed in a somewhat hierarchical fashion (it always had a large catalogue of "TBD" documents), was more and more referenced as "don't read it, because it's uncontrollably wrong as well as right". We tried to revive it half a year ago, with, let's say, small success, even though we still remain hopeful—but this is a separate story.

What should we do?

I started reading about software architecture documentation, and software architecture in general. And I found the problem. But the solution will be hard, and most probably require serious out-of-the-box thinking. I will tell you about the problem in the next postings.

Let me start with what I learnt from "the books"—two of them, actually, and a host of articles I found on the web.

The first book I started to read was "Essential Software Architecture" by Ian Gorton, published in 2006. It is a good book—read it (it is somewhere on the web; whether legally or not, I do not know), and take away everything that you didn't know or saw differently before.
  • I agree with everything it says in chapters 1 to 3.
  • Chapter 4, a guide to middleware technologies, is already outdated—not so much because the described technologies have evolved so much, but rather because they have been superseded, mostly by the cloud and the REST revolution.
  • With chapter 5, I disagree: This is a disguised waterfall process. Unfortunately, this is still the mainstream view in the software architecture literature, sometimes a little beautified by talk about agile cycles, but without real depth. Writers in the field will probably disagree. I will make my case in later postings.
  • Then comes chapter 6: "Documenting a Software Architecture". This would be it. But it wasn't. And then there is a "case study" in chapter 7: Interesting—we should expect that if we do as is done there, we would be perfect, or at least very good. I'll look at this shortly—but let me go quickly over the rest of the chapters, to find whether there is something interesting in there.
  • Chapter 8 introduces a second part, which collects contributions on various topics from different authors. From my (probably limited) perspective, they are not relevant: Product lines, in chapter 9, may be interesting, but are, in my humble opinion, just a special case of restructuring software in the long run. Aspect orientation, in chapter 10, has not fulfilled its promises—or it has been integrated into all sorts of frameworks so seamlessly that we didn't notice it has already arrived; anyway, as of today it is not relevant as a major architectural concept. Model-driven architecture, in chapter 11, is equally absent from industrial practice. SOA, in chapter 12, is in some sense here: With web services, in the cloud, and then with micro-services, we have services everywhere. The "web service standards" shown on p. 226, on the other hand, seem to be losing ground: XML (still strong) is declining against JSON; messaging, and especially reliable messaging, is nowhere to be seen, nor are (distributed) transactions; on metadata, I do not dare say anything; and WSDL support for typical B2B services is sketchy at best. The semantic web, from chapter 13, has not arrived. And chapter 14's software agents are a niche architectural style, at least in all IT systems I know.
  • The fascinating future of chapter 15, at least, has arrived. But, viewed from a decade after publication, it has arrived in a double-edged way: On the one hand, many concrete technologies, and the architectural patterns and styles they were based on or promoted, have been displaced by completely new ones, for which the old diagrams do not really work. On the other hand, as far as I can see, this fascinating future has not brought much progress in the widespread use of distinctly architectural techniques and methods in many, many software projects ...
... which brings me back to my original question: Is there something wrong with software architecture—or with us? ... and to chapters 6 and 7 of Gorton's book, about documentation of an architecture, and a case study.

Let me tackle these in my next posting.

Saturday, April 22, 2017

Structure? Really?

This blog will, if I find time and motivation, explain what I think is a pragmatic approach to documenting and maintaining software architecture for everyone—every developer, every software project (big and small), and of course every software architect (if you think that there are, or should be, such people as software architects).

The postings will be presented as a sequential, doubly-linked list that grows over time—most probably together with my understanding of what I really want to explain and say. But of course, it's a blog—so you can jump to whichever posting you like ...

It will take some time until I explain that simple and straightforward approach. In the meantime, I will hopefully find time to look at various misconceptions in and around software architecture—misconceptions, of course, in my view. Others may, and will, think differently.

For example, in What is a software architecture? (from 2006), Peter Eeles claims:
If you were to ask anyone to describe "architecture" to you, nine times out of ten, they'll make some reference to structure.
and, therefore
It should not surprise you then that if you ask someone to describe the architecture of a software system [...], you'll probably be shown a diagram that shows the structural aspects of the system.
Now, I hasten to say that I find Peter Eeles's article very good as a very short, but still comprehensive introduction to the standard view on software architecture. Read it!

However, to return to the question of "structure": Peter Eeles's text does what unfortunately quite a few texts on software architecture do, namely venture into non-software territory and then haphazardly draw shaky conclusions. "Structure," as used in civil engineering, is not that much related to some "structural aspect": Rather, it is simply another word for "object" or "building," in a general sense. So if people use the word "structure" when uttering their thoughts about architecture, this does not at all mean that a completely different architecture—namely that of software—should therefore, or similarly, or analogously, be concerned with the structural aspects of a system.

Of course, software architecture is concerned with structural aspects of a software system. But this has not much to do with civil engineering and its architecture.

And on the whole, this issue is not really important on its own.

However, the same mistake—looking at other disciplines like the architecture of buildings or electrical engineering, and then drawing conclusions from what they do—is also made in quite a few software architecture texts with respect to diagrams and drawings.
And there, the consequences are much graver; my personal opinion is that we have all been brain-washed (ok: "brain-washed") to believe that software architecture inherently requires diagrams. Now, diagrams are useful—but not in the way they are useful in most other engineering disciplines; and, almost obviously if you look at them, not in the way they are mostly presented in software architecture texts.
To find out why, you should, at least as an experiment, try hard to document your software architecture once without diagrams. You'll learn something.

But this is another topic, for somewhen later.

But let me "start at the beginning", in the next posting: Why don't we have a useful software architecture documentation?