An Argument for the Isolation of “Fachlichkeit”
Yes, dear reader, this is a German word. I wasn’t able to find an English term that captures the exact meaning, so bear with me. And it wouldn’t be the first German word to make it into English either: Kindergarten, Schadenfreude, Doppelgänger, anyone? The exact pronunciation (ˈfaxlɪçkaɪ̯t) might be a bit of a challenge for an English speaker, because the German “ch” sound doesn’t exist in English — and its two occurrences are even pronounced slightly differently (you can hear one of them here). A close approximation is “fahk-lick-kite”; if you are Howard Carpendale, you’ll pronounce the middle syllable as “lish” :-)
What is Fachlichkeit?
Fachlichkeit, in the context of software systems, is the core of the functionality of a software system, as far as the domain is concerned. It is usually what makes the software system valuable to an organization. It embodies the collected expertise of a company; the stuff you don’t buy from consultants. Fachlichkeit is not necessarily contributed to a system by software engineers, but by experts in your domain. To illustrate, let me give you a couple of examples:
· In a system that creates monthly salary and wage statements for employees, the Fachlichkeit is all the rules that determine what counts as work time, as well as all the laws that drive the deductions and benefits that apply to the employee’s gross salary. Those are the expertise of employment law experts and tax lawyers.
· In a medical application that helps patients deal with the side effects of treatments, the Fachlichkeit is a set of data structures, algorithms, decision procedures and correctness criteria. They are maintained by doctors and other healthcare professionals.
· In a tachograph, the device that monitors driving and break periods in trucks, the rules that govern when a driver has to take a break, and for how long, depending on the driving history over days and weeks, are the core Fachlichkeit of the system.
· In an observation planning system for a radio telescope, the Fachlichkeit is the parameters needed to perform a successful observation of a particular spot in the sky, including positioning, focus, filtering and image processing. These are specified by astronomers.
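To make the tachograph example a little more concrete, here is a deliberately tiny sketch in Python. The numbers are a simplified reading of the EU driving-time rules (roughly: after 4.5 hours of accumulated driving, a 45-minute break is due), and the function names are invented for illustration; the point is only that the rule itself is plain domain data, separate from any software machinery:

```python
# A sketch, not a real tachograph implementation. The domain rule lives
# in plain data (simplified from the EU driving-time rules); the code
# below merely interprets it against a driving log.

BREAK_RULE = {
    "max_driving_minutes": 270,    # 4.5 hours of accumulated driving
    "required_break_minutes": 45,  # a break this long resets the clock
}

def break_due(driving_log, rule=BREAK_RULE):
    """driving_log: list of ("drive" | "break", minutes) segments."""
    accumulated = 0
    for kind, minutes in driving_log:
        if kind == "drive":
            accumulated += minutes
        elif kind == "break" and minutes >= rule["required_break_minutes"]:
            accumulated = 0  # a qualifying break resets accumulated driving
    return accumulated >= rule["max_driving_minutes"]

print(break_due([("drive", 200), ("break", 45), ("drive", 100)]))  # False
print(break_due([("drive", 200), ("break", 10), ("drive", 100)]))  # True
```

A domain expert can read and adjust the two numbers in `BREAK_RULE` without touching the interpretation logic at all.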
The Fachlichkeit does not care about software concerns at all. Non-functional (aka operational) requirements such as scalability, security, or timing are simply not relevant to it. Of course they are just as crucial for the final software system, but they are different concerns. I will return to this later.
Many established terms are close, but not identical. Functional requirements are broader; for example, how a particular REST API is designed is a functional requirement, but it is not fachlich. Business logic is closely related, but a technical system such as the telescope control system mentioned above would never use this term. For our German readers, there’s also the word Fachkonzept (no, this is not one I want to inject into English :-)), but it covers more than the Fachlichkeit, because it usually also includes processes and organizational changes induced by a (to-be-developed) software system.
Why do we care?
We care because it is absolutely crucial to separate Fachlichkeit from the rest of a software system. There are two primary reasons for this. First, as I mentioned above, the experts on a business’s Fachlichkeit are usually not the software engineers. If you bury the Fachlichkeit in the software, expressed in a programming language, it is effectively inaccessible to the people who care most. They are forced to write requirements documents. This, as we know, is problematic: such documents are usually not rigorous, so they can’t be checked by tools, tested automatically, or executed. And they become stale as soon as the “real truth” in the software evolves. Requirements documents are like zombies: you can’t quite kill them because they are the best you have, but they aren’t really alive either.
The other reason for separating Fachlichkeit from the implementation in software is that the two have completely different lifecycles. How many times have you heard the story of a company that had to “reverse-understand” and then reimplement its Fachlichkeit because it was forced onto a new platform? Mainframe to Java to web to mobile to what-is-next? At the same time, domain experts feel slowed down by having to involve the software guys every time the Fachlichkeit inevitably changes. I would argue that the horror of legacy systems isn’t really the fact that you have to move from Cobol to whatever, but that you have to reverse-engineer all the Fachlichkeit that’s tangled up in the Cobol thicket in order to bury it again, this time in today’s favourite programming language. Let’s not allow it to get tangled up in the first place!
Software engineers very much value the notion of separation of concerns. We separate structure from layout in HTML/CSS, we separate transactions, security and scaling from the core behavior in JEE app servers, and in protocol stacks, we separate physical transport from logical request/response. As a community, we have invented all kinds of mechanisms to achieve separation of concerns, from layers to frameworks to aspect-oriented programming (another zombie). IMHO, Fachlichkeit is the most important concern to separate.
How do you do it?
From what I said above we can extract the following constraints for any approach to isolating Fachlichkeit: it can’t be code, because code is inaccessible to domain experts. And it can’t be just documents, because documents are dead: they can’t be checked for consistency and they can’t be tested.
In my opinion, the only way to approach this is to create some kind of model of the Fachlichkeit. The model must be rigorous enough to avoid ambiguity. It must be formal enough that it can be checked for consistency by tools, at least to some degree. And it must be executable, so that domain experts can write tests to verify that it works correctly. Finally, it must be represented in a way that is accessible to domain experts, for example by using concepts and notations already established in the domain. If you have all these properties, the models are also suitable for code generation or for execution in an interpreter, which gives you the connection to the actual software implementation in a way that is still decoupled: you just write a new generator or interpreter if your platform changes.
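As a minimal sketch of these properties, consider the salary example again. All names and rates below are invented, and a real system would use a proper language and tooling rather than Python dictionaries, but even at this scale the model is checkable by a tool, executable by an interpreter, and therefore testable by a domain expert:

```python
# Sketch only: the "model" is a list of deduction rules (invented rates),
# a check() function plays the role of tool-based consistency checking,
# and interpret() executes the model against a gross salary.

MODEL = [
    {"name": "pension", "rate": 0.093, "applies_to": "gross"},
    {"name": "unemployment", "rate": 0.012, "applies_to": "gross"},
]

def check(model):
    """Consistency checks a tool could run over the model."""
    for rule in model:
        assert 0.0 <= rule["rate"] <= 1.0, f"bad rate in {rule['name']}"
        assert rule["applies_to"] == "gross", f"unknown base in {rule['name']}"

def interpret(model, gross):
    """Execute the model: apply each deduction to the gross salary."""
    deductions = {r["name"]: round(gross * r["rate"], 2) for r in model}
    return gross - sum(deductions.values()), deductions

check(MODEL)
net, breakdown = interpret(MODEL, 4000.00)
print(net, breakdown)  # 3580.0 {'pension': 372.0, 'unemployment': 48.0}
```

Swapping `interpret` for a code generator targeting a new platform would leave `MODEL`, and the domain experts’ tests against it, untouched; that is the decoupling argued for above.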
You will not be surprised that this sounds a lot like using domain-specific languages. However, if you agree with the premises of what I am writing here, I really don’t see another solution. I am genuinely curious: can you suggest a different approach?
Just for the fun of it, let’s look at some candidates. Remember analysis models and business analysis and analysis patterns? They aim at some degree of rigor and structure, but they are not executable and testable. And UML, the notation primarily used for them, isn’t good enough. Ever tried to express insurance contract calculations with UML diagrams? Doesn’t work.
Using controlled natural language for requirements? This can help. It makes requirements more precise, and removes ambiguity. But again: no execution, no testing, no code generation. So it falls short quite a bit. A related approach is to use math, logic, decision tables and other well-defined formalisms to describe behaviors; I’ve seen it for medical algorithms and nuclear reactor shutdown procedures. Good idea. They are potentially checkable and executable. But if you write them down in Word (as was the case in both examples), you get stuck at “potentially”. And if you back it up with a language definition and an IDE … well, then you’re effectively building a DSL.
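To see what “potentially checkable and executable” becomes once the table leaves Word, here is a sketch of a decision table as data. The medical thresholds are invented for illustration; the point is that a tool can now both check the table (here: that exactly one row matches) and execute it:

```python
# Sketch: a decision table as data instead of prose in a Word document.
# Thresholds are invented. Each row pairs a condition on body
# temperature (°C) with the resulting action.

DECISION_TABLE = [
    (lambda t: t < 38.0,         "no action"),
    (lambda t: 38.0 <= t < 39.5, "monitor"),
    (lambda t: t >= 39.5,        "contact doctor"),
]

def decide(temperature):
    matches = [action for cond, action in DECISION_TABLE if cond(temperature)]
    # A tool-style check: the table must be complete and non-overlapping.
    assert len(matches) == 1, f"table not complete/disjoint at {temperature}"
    return matches[0]

print(decide(38.7))  # monitor
```

If a row were accidentally written so that two conditions overlap, the assertion would flag it on the first affected input, which is exactly the kind of consistency checking a document can never give you.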
Domain-Driven Design? Good and bad, in my opinion. The ubiquitous language is a good idea, it helps establish a common vocabulary in a domain. It can be a milestone on the way to a DSL. DDD also has several good architectural ideas, such as the anti-corruption layer. But in terms of separating Fachlichkeit from the rest, it doesn’t go far enough, because it still buries it in implementation code (admittedly not as deep as if you did “normal” programming).
What else except a DSL? I genuinely don’t know.
What else goes into those models?
It is not 100% true that only the pure Fachlichkeit goes into these models. There are three very typical additional concerns (with a few more in particular systems). The first one is tests. You can only be sure of the correctness of your Fachlichkeit if you test it; in this respect it is no different from programming. Second, Fachlichkeit usually evolves over time, and you have to manage this somehow. You can use version control systems to track changes over time, or you can model temporality explicitly: in the salary example above, the evolution of the law is represented in the models so you can re-run “old” calculations with the then-applicable laws. Third, variability should be captured. In the tachograph example, the rules that govern break times are similar, but not identical, across European countries. Expressing these differences in a way that lets domain experts keep the emerging complexity in check requires some thinking and influences the language that gets used.
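Explicit temporality, the second concern, can be sketched in a few lines. The rate values and dates below are invented; what matters is that each rule version carries its validity date, so an “old” calculation can be re-run under the then-applicable law:

```python
# Sketch of explicitly modeled temporality (invented rates and dates):
# each version of a rule records when it came into force.
from datetime import date

TAX_RATE_HISTORY = [  # (valid_from, rate), sorted by date
    (date(2020, 1, 1), 0.19),
    (date(2023, 1, 1), 0.21),
]

def rate_at(as_of):
    """Return the rate that was in force on the given date."""
    applicable = [r for valid_from, r in TAX_RATE_HISTORY if valid_from <= as_of]
    if not applicable:
        raise ValueError("no rate defined that early")
    return applicable[-1]  # latest version on or before as_of

print(rate_at(date(2022, 6, 1)))  # 0.19 -- re-running an "old" calculation
print(rate_at(date(2024, 6, 1)))  # 0.21
```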
There’s no free lunch, right? So what are the drawbacks of this approach? The obvious one is that you have to define a suitable language. This requires certain skills that might not be available in your organization, and it is also simply effort. Regarding skills: well, companies buy consulting services to acquire all kinds of skills, why not for this arguably central one? Regarding effort: the language definition comes in two parts. One is the systematic understanding of your domain so you can build the language. This part of the effort isn’t a waste at all: it should be done in any case. The second part concerns the language implementation in a language workbench. Here, the effort is probably much lower than what you might expect by extrapolating from your compiler construction course at university, because modern tools have reduced the necessary effort a lot! But yes, developing the language, and maintaining it over the years, is additional effort.
The other challenge is the cultural and organizational change that goes along with adopting the approach. The domain experts have to get used to increased structure and rigor. Even though good language design and good tool support can go a long way, this typically requires training. And this is a hard sell in most organizations these days, unfortunately. It can also be a change for the technical guys in a company, because of their now stricter focus on core technical issues, and their involvement in language and generator implementation. However, once people are over the change itself, the clearer focus is usually appreciated by both technical and domain people. And: while it sounds like I’m trying to tear the two groups further apart because of the clearer separation of responsibilities, it actually improves collaboration because it removes the inefficiencies and sometimes real conflicts around incomplete and inconsistent requirements documents.
So is this worth doing? It depends. It depends on how well your domain is suited to the approach; not all are. It depends on how much you value a clean, lasting and unambiguous specification of your core business expertise, which in turn depends on how long you plan to be in business. It also depends on the complexity of what your system does: the more complexity, evolution and variability, the more useful the approach becomes. In my experience, though, the answer is usually a clear yes!