For years, architectural principles have been hammered into us, and they have been applied in many projects - so there is now evidence of whether these principles really help. Some ideas make sense at first glance, and we have become accustomed to them, even though they have not proven themselves. People just don’t like to talk about failures - and that also applies when principles fail to deliver.
Some examples:
Many projects aim for the reuse of components. Yet I have never come across a project that actually benefited from reusing existing components, so this goal seems hard to achieve. Reuse also has its drawbacks, as I showed in a previous blog post. But because reuse is such a typical goal, it is difficult to say goodbye to it.
Ideally, data should be stored free of redundancy. Object-orientation taught us to model the domain in a single, redundancy-free object-oriented model that is persisted in the database. Such models are often extremely complex and end up with hundreds of tables and columns in the database. The solution is domain-driven design’s bounded context: instead of one model, the system uses several specialized models. Usually the models are still redundancy-free, because the model for an order in the “payment” context uses completely different data than the one in the “delivery” context.
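The bounded-context idea can be sketched in a few lines. The class and field names below are illustrative assumptions, not taken from any real system: the same order appears in two contexts, and the two models share little more than an identifier.

```python
from dataclasses import dataclass

# Bounded context "payment": here, an order is about amounts and billing.
# All names and fields are hypothetical, chosen only for illustration.
@dataclass
class PaymentOrder:
    order_id: str
    total_amount_cents: int
    payment_method: str

# Bounded context "delivery": the same order is about addresses and parcels.
@dataclass
class DeliveryOrder:
    order_id: str
    shipping_address: str
    parcel_count: int

# The two views refer to the same order, but only the order_id is shared;
# every other attribute belongs to exactly one context.
payment_view = PaymentOrder("o-42", 1999, "credit card")
delivery_view = DeliveryOrder("o-42", "1 Example Street", 2)
```

Each model stays small and focused, which is exactly why the combined system avoids the hundreds-of-columns monolith described above.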
Ultimately, architectures often implicitly assume certain goals. An example of this is scalability. At first sight it seems obvious: a good architecture must provide scalability, and a system that does not scale can be a major problem. On the other hand, lack of scalability is actually a great problem to have, because it means the system is already successful and now needs to scale up. So you should determine the real architectural requirements before making any assumptions: if a system never even goes into production for reasons of compliance or security, scalability is of no use. Of course, the first rule of software architecture is: don’t play dumb. If scalability really is necessary, you should implement it, and you certainly should not voluntarily create bottlenecks.
The goal of technology independence usually leads to abstractions and indirections being built so that the system does not depend on a concrete implementation. These abstractions are often limited, because they have to settle for the lowest common denominator of all possible implementations. In addition, the concrete implementation might leak through the abstraction, so real independence is not achieved anyway. And finally, technology independence only pays off when a new technology is actually needed; until then, you “pay” with increased complexity. This is not necessarily the easier way. It is often better to implement in a technology-dependent way and make full use of the technology. Only when a different technology actually needs to be used is the system migrated - and only then is the complexity cost paid.
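The lowest-common-denominator effect can be illustrated with a small sketch. The `KeyValueStore` interface and `InMemoryStore` backend are hypothetical names invented for this example: the abstraction exposes only the operations every conceivable backend could support, so any richer capabilities of a concrete backend stay out of reach.

```python
from abc import ABC, abstractmethod
from typing import Dict, Optional

# A hypothetical abstraction built for technology independence.
# To stay swappable, it offers only what *every* backend could do: put and get.
class KeyValueStore(ABC):
    @abstractmethod
    def put(self, key: str, value: str) -> None: ...

    @abstractmethod
    def get(self, key: str) -> Optional[str]: ...

class InMemoryStore(KeyValueStore):
    def __init__(self) -> None:
        self._data: Dict[str, str] = {}

    def put(self, key: str, value: str) -> None:
        self._data[key] = value

    def get(self, key: str) -> Optional[str]:
        return self._data.get(key)

# Code written against the interface cannot use richer backend features
# (range scans, transactions, TTLs) even when the concrete store has them.
store: KeyValueStore = InMemoryStore()
store.put("answer", "42")
```

Writing directly against one concrete store would remove the indirection and unlock its full feature set - at the price of a migration effort if the technology ever has to change.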
Another example is standardization and unification. In theory this leads to greater efficiency, but if you look at the IT landscape of a company, or even just at the libraries used in a single project, it is often rather chaotic. Of course that is not optimal, but enough people have tried to solve this problem and failed. You can add yourself to that group - or focus on standardizing only certain aspects, such as operations.
So, the only solution is to think for yourself and use those principles that actually solve a problem in your current project.
Many thanks to my colleagues Christoph Iserlohn, Lutz Hühnken, Christian Stettler, Stefan Tilkov and Benjamin Wolf for their comments on an earlier version of the article.
tl;dr
To make software architecture better, we have to say goodbye to what doesn’t work. But this is not easy, because some principles have become a tradition - one that must be broken.