A quick heads-up on
the Atlantic article
The challenges of software
development are hitting the mainstream! A long, detailed article was recently published in The Atlantic; it is maybe
a bit slow-paced for my taste, but I still recommend reading it. I would
summarise it as follows:
Software is getting more and more complex
(while at the same time being responsible for more and more critical aspects of
systems and society). The methods used by developers in industry are no longer
able to deal with the complexity, leading to potentially really dangerous
failures. Developers need to change the way they write software. Two basic
paths forward exist: the first one is making (the representations of) programs
less complex and more easily understandable, effectively reducing their
accidental complexity. Models, DSLs, and code generation are useful here,
as are ideas from live programming that reduce the gap between the program code
and its execution through simulation, realtime feedback and visualisations. The
second one: for the essential complexity that cannot be reduced by the first, formal methods such as model checking, SMT solving and proof assistants should be used more heavily.
I agree with the general
premise of the article, both the diagnosis of increasing complexity and the
approaches to address it. Models and DSLs are my bread-and-butter work. The
synergies with formal methods are a very interesting field which we have explored over the last few years. Realtime feedback,
visualisations and simulations have “saved the day” in several of our DSL
projects, and we are putting more and more work into this aspect of languages
and tools. For example, in KernelF,
our embeddable functional language, we have an in-IDE interpreter to run tests,
we overlay execution values over the source in a tracer/debugger, and we are
working on a reactive framework for incrementally updating computed values in
the IDE.
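Since that reactive framework is still work in progress, here is only a minimal sketch of the underlying idea, not KernelF’s actual API: computed values cache their result, record which inputs they read, and are invalidated (and lazily recomputed) when an input changes.

    // Minimal sketch of incremental recomputation; hypothetical API.
    class Input<T>(private var value: T) {
        private val dependents = mutableSetOf<Computed<*>>()
        fun get(reader: Computed<*>? = null): T {
            if (reader != null) dependents += reader // record the dependency edge
            return value
        }
        fun set(newValue: T) {
            value = newValue
            dependents.forEach { it.invalidate() }   // push invalidation only
        }
    }

    class Computed<T>(private val formula: (Computed<T>) -> T) {
        private var cache: T? = null
        fun invalidate() { cache = null }
        // Recompute lazily, on the next read.
        fun get(): T = cache ?: formula(this).also { cache = it }
    }

    fun main() {
        val a = Input(2)
        val square = Computed<Int> { self -> a.get(self) * a.get(self) }
        println(square.get()) // 4, computed on first read
        a.set(10)             // invalidates square; nothing recomputed yet
        println(square.get()) // 100, recomputed on demand
    }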
There
are a few places where I
would be a bit more nuanced. For example, the promises of live programming, inspired by Bret Victor’s video, haven’t quite panned out. All his examples,
as well as the examples mentioned in the Atlantic article (image processing,
web pages, animations, visualisations) are systems where the output is easily
representable graphically. How would you do this for an airline reservation
system? Or a medical diagnostics app? Those systems have such complex behaviours
that you cannot “show” them. You have to define (all!) scenarios and illustrate
those. This approach is called simulation, and is not really a new idea (though
it is underused in the software space; we should look at systems engineers for
inspiration). And yes, overlaying program values over the code and letting the
user move back and forth in time is cool and useful, but it’s far from Victor’s
vision.
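To illustrate the scenario idea: you cannot “show” a reservation system as a picture, but you can script its observable behaviour and replay it against an executable model. A minimal sketch, with all names hypothetical:

    // Scenario-based simulation for a toy reservation model: the scenario
    // drives the model step by step and asserts what a user would observe.
    class Flight(val seats: Int) {
        private val bookings = mutableSetOf<String>()
        fun book(passenger: String): Boolean =
            if (bookings.size < seats) bookings.add(passenger) else false
        fun cancel(passenger: String) = bookings.remove(passenger)
        fun free(): Int = seats - bookings.size
    }

    fun scenario(name: String, flight: Flight, steps: Flight.() -> Unit) {
        flight.steps() // a failed check() aborts the scenario
        println("ok: $name")
    }

    fun main() {
        scenario("overbooking is rejected", Flight(seats = 2)) {
            check(book("Alice"))
            check(book("Bob"))
            check(!book("Carol")) // third booking must fail, only 2 seats
            cancel("Alice")
            check(book("Carol"))  // a freed seat can be rebooked
            check(free() == 0)
        }
    }

Each scenario is a small executable document; replaying all of them after every change is exactly the kind of simulation-style feedback systems engineers take for granted.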
Regarding formal methods,
yes, I agree, they must play a bigger role in the future. However, as Benjamin Pierce explained
in this omega tau episode, the effort to fully verify programs
is still huge. And it is very much expert work, not easily accessible to
mainstream developers (including yours truly!). This should not be an excuse
for not at least using formal methods for the low-hanging fruit, for trying to educate future developers
in formal methods, and for spending the effort on infrastructure components
(such as operating systems, network stacks or web servers) to at least make
those platforms more reliable. It will come, but it will take a bit longer than
we might wish. In the meantime, by the way, it would be useful to at least test
software systematically!
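To pick one such low-hanging fruit: an SMT solver can answer small but valuable questions without anything close to a full verification effort. The sketch below assumes Z3’s Java bindings (com.microsoft.z3) on the classpath and asks whether a signed 32-bit increment can wrap around:

    // Can "x + 1" overflow for a positive signed 32-bit x? Ask Z3.
    import com.microsoft.z3.Context
    import com.microsoft.z3.Status

    fun main() {
        val ctx = Context()
        val x = ctx.mkBVConst("x", 32)
        val solver = ctx.mkSolver()
        // Look for x with x > 0 whose increment wraps, i.e. x + 1 < x (signed).
        solver.add(ctx.mkBVSGT(x, ctx.mkBV(0, 32)))
        solver.add(ctx.mkBVSLT(ctx.mkBVAdd(x, ctx.mkBV(1, 32)), x))
        if (solver.check() == Status.SATISFIABLE) {
            // The only witness is x = 2147483647, i.e. Int.MAX_VALUE.
            println("overflow witness: " + solver.model.eval(x, false))
        }
    }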
Even for the DSL and
modelling stuff, we notice how hard it is to change the established habits of
programmers. We have run many DSL projects where all the stakeholders involved
in an initial proof-of-concept concluded that this is the way forward, only to find that the organisation as a whole is not willing to make the necessary process changes or to provide the required education and training.
A final point I disagree with
is this sentence:
“The serious problems that have happened with software have to do with requirements, not coding errors.”
While it is correct that a
system that does the wrong thing is a problem (they provide some examples of
well-known system failures in the article), coding errors, as they call them,
are still serious issues. Many of the well-known security exploits are a result
of low-level errors, often rooted in quirks/design flaws of the programming language employed. It is completely mysterious to me why we
still write many safety-critical systems in C, a language that is famous for
its pitfalls.
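To make that concrete: an out-of-bounds write is undefined behaviour in C and may silently corrupt adjacent memory; a bounds-checked runtime turns the same mistake into a defined, catchable failure. A trivial sketch in Kotlin, with invented values:

    // The kind of low-level error behind many exploits: an out-of-bounds
    // write. C would happily write past the buffer; here it is trapped.
    fun main() {
        val buffer = IntArray(4)
        val index = 7 // imagine a length field read from the network
        try {
            buffer[index] = 42
        } catch (e: ArrayIndexOutOfBoundsException) {
            println("rejected out-of-bounds write at index $index")
        }
    }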
In some sense, the summary of the
summary would be that we as developers should rely more on computers and tools
in the process of software development. Except for compilers, we tend to use
tools for ancillary tasks such as building, packaging and executing tests, or to
help us get the structure of programs correct (IDEs, editors). The core
activity of programming, and in particular understanding what the programs do, still happens mostly in developers’ brains. This is kinda funny, because as software developers we often write tools that help people in other domains become more efficient (writers, presenters, technical designers, systems
engineers). In this sense, we should focus more on ourselves.