Whenever there's a task that needs to be done, we generally consider it a good thing if there's a predefined, standardised way of performing that task: document templates, ways of writing requirements, how to express tests, how to perform quality assurance of software. And there are many benefits to using a standard approach:
· it captures experience and best practices,
· it ensures that you don't forget to do (or write) important stuff; it can act as a completeness check,
· it makes it easier for others (colleagues, consumers) to comprehend and follow,
· it keeps your mind free to focus on substance, because the form is predefined.
We use standardised approaches in software development in many contexts: given-when-then tests, the pattern format, use cases, user stories, architecture document templates, Scrum ceremonies, standard operating procedures to ensure safety of the developed products, and so on.
There are also lots of examples outside the world of software where this is good. A great example is the use of procedures and checklists in aviation. I am currently training for my LAPL license to fly light aircraft. Running your preflight checks based on a (physical!) checklist really makes the whole enterprise safer. Using standardised engine settings in the traffic pattern makes the pattern easier to fly, so you can concentrate on the more unique aspects of a particular situation. So: no general bashing of standardised approaches here.
But I have recently experienced the dark side of standardised approaches. Let me list the problems first, and then provide examples:
· people don't think about what is important to do/write, they just satisfy the standard
· people tend to use the standard in situations for which it isn't intended
· struggling to get the standard template "correct" blocks your brain from addressing substance
· there are often several competing "standards", and it's easy to get lost in religious wars about which one should be used
So let's look at a few examples. In one of my projects we are running a domain analysis to figure out how a part of the to-be-built system is supposed to work. There's a standard that requirements must be expressed as user stories. So instead of first trying to deeply figure out how something should work and writing this down as prose text and diagrams, everything is immediately shoehorned into "as an XYZ user..."-style stories. Deep analysis isn't happening.
A favourite of mine is the never-ending discussion about whether things should follow the "user story" standard or the "use case" standard. Things are being rewritten, and in some cases kept in two versions, because people can't agree which form is better. At the same time, as mentioned above, substance suffers.
A quick aside: in some sense it's good that these days the discussions are about use case vs. user story. When I was working for IBM around the year 2000, we were required to follow the Rational Unified Process. We had to complete over 40 different documents before we were supposed to write software. Of course we didn't really; most of them contained useless garbage to "check the box".
In another project there's a standard to use GWT-style (given-when-then) tests. We have to express every test in this way. This leads to dozens and dozens of tests where the textual parts of the GWT template are identical, and only the data changes.
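To illustrate the kind of repetition I mean, here is a sketch in Python; the discount calculation and its numbers are invented purely for illustration, not taken from the actual project:

```python
# Hypothetical function under test, invented for this example.
def discount(order_total: float) -> float:
    """10% discount for orders of 100 or more, otherwise none."""
    return 0.10 * order_total if order_total >= 100 else 0.0


def test_discount_for_order_of_100():
    # Given an order with a total of 100
    total = 100.0
    # When the discount is calculated
    result = discount(total)
    # Then the discount is 10
    assert result == 10.0


def test_discount_for_order_of_200():
    # Given an order with a total of 200
    total = 200.0
    # When the discount is calculated
    result = discount(total)
    # Then the discount is 20
    assert result == 20.0


def test_discount_for_order_below_100():
    # Given an order with a total of 99
    total = 99.0
    # When the discount is calculated
    result = discount(total)
    # Then the discount is 0
    assert result == 0.0
```

Every given/when/then sentence is word-for-word the same; only the two numbers differ from test to test.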
The textual GWT parts have zero use here; they just waste space. You could write such tests much more simply as a data table.
In two other projects I recently had to read architecture documentation based on Arc42 and Views and Beyond, respectively. These are both highly structured document templates for describing architectures and the forces and rationales associated with the architectural decisions. In both cases, it was impossible for me as a consumer to understand the big picture, the "story" behind everything. The reason was that the big picture was ripped apart and distributed over all these sections. No forest, just lots of trees.
Let's move on to process. Another project is in its very early phases where lots of the programming work is explorative: nobody really knows how things should work, so we are building prototypes to explore options and help stakeholders decide based on working (prototypical) software. Nonetheless, we are forced by the Scrum master to estimate the tasks. It's completely impossible at this stage. Sure, we can timebox, but then we don't know whether we will finish.
In fact, the sprint planning sessions are also of very limited use, because even during one sprint the development team is often interrupted with reprioritisations and completely new work. Yes, there's something wrong with the overall process outside the agile team, but everybody knows that, so why do the planning? We are also forced to plan out multiple sprints in the context of a SAFe product increment. That's obviously even more useless.
So why is this happening? I see three reasons. One is that there are people in the organisation whose job is to "enforce" the standard. They are not measured by whether the team(s) deliver high-quality software. They live for the standard, so they enforce it, useful or not. The second reason is that there are stakeholders -- often a little higher up the hierarchy -- who don't want to engage with the substance, so they try to do management-by-numbers and issue boards: "if you don't plan, I have no way of measuring what you deliver!!!" The third reason is that people think that following a standard magically produces substance. It can help -- see my checklist argument above -- but it doesn't happen automatically. And often the standard has the opposite effect.
So what to do about that? It's obvious, right? Take standards as a recommendation and then apply common sense. Adapt them to your context. So far so good. The real problem of course is dealing with those people whose job it is to enforce these standards or who are unwilling to engage with the substance and therefore rely on ceremony. That's an organisational issue that is hard to solve ... oh well, and so the craziness prevails.