Do we over-generalise?
We are all, from time to time, guilty of over-generalisation. Arguably this is not a terrible sin, because unless we occasionally go too far we might never know whether we have gone far enough! But, if not curtailed, it is likely to result in the symptoms described by Kevlin Henney in one of his contributions, “Simplicity Before Generality, Use Before Reuse”, to the book “97 Things Every Software Architect Should Know”. Another cause of over-generalisation is perhaps that we feel we are on safer ground when arguing from the general to the specific, rather than vice versa; so we try to be general enough in (frequently incorrect) anticipation of the next raft of specific cases.
However, thinking about this issue reminded me of a couple of relevant points (and perhaps something along these lines would have been included in a longer discussion than the current book could accommodate).
Keep it simple?
Firstly, on “simplicity”: there is Einstein’s well-known point about making things “as simple as possible, but no simpler”. When dealing with any situation we are frequently inclined to oversimplify, especially in the early stages, when we have not yet understood all of the use cases. So, a solution which is general enough is likely to be more complex than our initial inadequate one. As we gain experience of the issues, continually investigating alternatives and discovering more elegant approaches, we become able to simplify the solution without losing its generality. However, as Einstein implies, selecting a solution which is too simple is actually likely to end up either inadequate (not general enough) or more complex than it need be, as various work-arounds have to be added to handle cases which the solution does not naturally include.
Taking the conventional approach of improving designs until they are general enough to cover the required set of cases, the inevitable consequence is that they approach their eventual state from the more complex side and become simpler, rather than the other way around. This can occur in broadly the two ways implied above: either the fundamental design is overly complex and becomes simpler as unnecessary elements are removed; or the fundamental design is overly simple, requiring work-arounds to cope with various cases, and the unnecessarily complex result becomes simpler as the capabilities of the fundamental design are expanded and the work-arounds are removed.
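To make the second of those ways concrete, here is a hypothetical sketch (the scenario and names are mine, not from the book): a parser whose too-simple foundation attracts work-arounds, next to a version whose slightly more complex foundation absorbs the same cases naturally.

```python
# Too-simple foundation: only "key=value" is understood, so every
# new input case is a work-around bolted onto the outside.
def parse_line_v1(line: str) -> tuple[str, str]:
    # work-around 1: some files use ':' instead of '='
    if "=" not in line and ":" in line:
        line = line.replace(":", "=", 1)
    # work-around 2: some lines carry trailing '#' comments
    if "#" in line:
        line = line.split("#", 1)[0]
    key, value = line.split("=", 1)
    return key.strip(), value.strip()

# Slightly deeper foundation: separators and comment markers are
# part of the design, so the work-arounds above simply disappear.
def parse_line_v2(line: str, seps: str = "=:", comment: str = "#") -> tuple[str, str]:
    line = line.split(comment, 1)[0]
    for sep in seps:
        if sep in line:
            key, value = line.split(sep, 1)
            return key.strip(), value.strip()
    raise ValueError(f"no separator in {line!r}")

assert parse_line_v1("host = example.com  # prod") == ("host", "example.com")
assert parse_line_v2("port: 8080") == ("port", "8080")
```

The second version is marginally more complex at its core, yet simpler overall, which is exactly the direction of travel described above.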
How to be general?
Secondly, on “generality”: many discussions of generality treat it as a high-level issue, framed in terms of widening the scope of coverage of the subject domain. Ultimately a more general approach does indeed cover a wider scope, but in my experience this is not really achieved by expanding the scope in some “top down” sense. Rather, the increased generality comes about at the lowest level, by deepening our understanding of the low-level issues and sometimes even by lowering the level of the fundamentals on which the solution is based. The simplicity of a solution’s foundations frequently limits the range of cases which it can support. Dramatic increases in the generality of the solution can be achieved: by deepening the foundations; by correcting and removing limiting assumptions; and, in some cases, by making the foundations more complex.
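A small illustration of generality arriving “from below” (my own example, not from the book): a fold over a binary operation and its identity element is a lower-level fundamental than summation, and summation, products and maxima all fall out of it as special cases.

```python
import operator

# A specific solution: it covers exactly one case.
def total(xs):
    acc = 0
    for x in xs:
        acc += x
    return acc

# A deepened foundation: the fundamental has been lowered to "any
# binary operation plus its identity", and the wider scope follows
# from that, not from widening the design "top down".
def fold(op, identity, xs):
    acc = identity
    for x in xs:
        acc = op(acc, x)
    return acc

assert fold(operator.add, 0, [1, 2, 3]) == total([1, 2, 3]) == 6
assert fold(operator.mul, 1, [1, 2, 3]) == 6       # product
assert fold(max, float("-inf"), [4, 9, 2]) == 9    # maximum
```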
It is true that we often begin with some requirements for our solution and that, from a project management perspective, these direct and motivate the work. However, in terms of the capabilities of any designed solution, the requirements are the desired outcome, not the input. The generality of the solution, and whether it meets the requirements, are the effect, not the cause, of the design decisions made along the way. Only through feedback, correcting mismatches between the actual outcome and the desired one, do the requirements influence the design. (This dichotomy is perhaps not disconnected from some of the problems with software project management today … but that is a much bigger topic, for many other days!)
None of this contradicts the contribution to this book, but it does suggest that perhaps the described (meta)model might itself be revisited!
The description in the book, as I understand it, treats “simple” and “general” as opposite characterisations of a single property of a design (whether architectural or not). On this basis, it encourages people to choose simple solutions rather than potentially over-generalised ones; I agree with this, in spirit.
However, on the basis of the points above, is it not more likely that these two properties are orthogonal? We might think of the “simplicity” property as ranging from “simple” to “complex” and the “generality” property as ranging from “specific” to “general”. On this basis, we can consider: “simplicity” to be an input, a property of the design choices that we make; and “generality” to be an output, a property of the capabilities of the design. Our aim in most cases is, presumably, to make the solution: as general as necessary, but no more general; and, as simple as possible but no simpler. Non-optimal solutions might miss the generality mark in either direction but, for a given generality, can only miss the simplicity mark on the complex side.
On this basis, my interpretation of Kevlin’s advice is: initially, aim to apply simpler (less complex) solutions to more specific (less general) requirements; and, subsequently, increase the complexity as necessary to satisfy more general requirements. This sounds like the opposite of the more conventional advice, which is to design something that satisfies all of the requirements, and then gradually remove complexity until no more can be removed without reducing the generality.