Will AI Replace Models and DSLs?


Just like everybody else, I have played a bit with ChatGPT recently. Initially I was quite impressed. The first couple of questions I asked were answered reasonably well. I asked about some of my favourite bands, made it come up with some regexes I needed, had it explain some superficial stuff about aerospace and physics, and even asked the machine to write a poem about flying. Everything was kinda correct and -- from my perspective -- quite impressive for a language model.


When I asked ChatGPT how to land an airplane in a strong crosswind, I got suspicious: while it suggested using the aileron and rudder to "keep the airplane aligned" with the runway (duh!), it did not mention the specific techniques used in a crosswind, such as the crab or the sideslip. Even multiple "why" and "how" prompts did not help. It became obvious that there's no semantic model behind the replies, only language statistics.


Other people have pointed out that ChatGPT can't calculate, and that, when you ask about scientific papers, it often gets only half of the citations right and makes up the rest -- all while sounding quite convincing and plausible. I tried it myself: indeed, ChatGPT suggested that my book, DSL Engineering, was written by Jean Bezivin :-) There's certainly a degree of societal risk if machines are able to produce confident-sounding, superficially plausible falsehoods.


So why am I talking about this? Why does this guy feel compelled to talk about AI? Because I am often confronted with statements along the lines of, "Hey, why do you continue working on formal models of things? Won't we just be able to ask an AI in the future?" I don't think that will be the case.


Of course deep learning will continue to improve -- GPT-4 is rumored to be "mind-bending". But I don't think it will (soon, anyway) be deterministically correct. It will perform great when you need a mostly correct answer in most cases -- just like the humans it strives to replace :-) But when you want to perform salary and tax calculations or describe a drug trial, you really want to be correct, always.


"Come on, traditional software isn't always correct either.", you might reply. True. But at least you *can* use other means of verifying what it does beyond testing. You can review. You can analyse. You can potentially proof correctness -- the formal methods communite is also making great progress, it's just not as sexy as AI. Deep learning on the other side can't be subjected to these ways of building confidence. Expressed in the self-aware words of ChatGPT:


"[A] limitation is that deep learning algorithms can be complex and difficult to interpret, making it challenging to understand why they produce certain results. This can be problematic in industries where transparency and explainability are important, such as finance and healthcare."


Exactly.
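
To make the "you can prove correctness" point concrete, here is a minimal sketch in Lean 4. The function and the property are invented for illustration; the point is only that for a precisely specified artifact, a machine can check the claim:

```lean
-- Illustrative only: a tiny payroll function and a machine-checked property.
def netSalary (gross tax : Nat) : Nat :=
  gross - tax  -- natural-number subtraction truncates at zero

-- The net amount can never exceed the gross amount; the kernel verifies this.
theorem netSalary_le_gross (g t : Nat) : netSalary g t ≤ g :=
  Nat.sub_le g t
```

There is no analogous, checkable guarantee you can state about the outputs of a deep learning model.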


So in situations where 95%-correct-most-of-the-time is good enough, AI will replace lots of traditional approaches. However, where higher levels of correctness are needed, review, analysis and proof must be exploited effectively. Which brings me to ... surprise ... DSLs. You can use those to express complex subject matter precisely, and in a way that can be analysed effectively by (various degrees of) formal methods.
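
To illustrate what that could look like -- a deliberately minimal, made-up sketch in plain Python rather than a real DSL workbench -- consider tax brackets expressed as declarative data, with a static analysis that rejects broken models before they ever run:

```python
# Minimal sketch: a declarative "model" of tax brackets (invented numbers),
# a static analysis over the model, and a small interpreter.

BRACKETS = [
    # (lower bound, upper bound, rate); None means no upper bound
    (0,      10_000, 0.00),
    (10_000, 50_000, 0.20),
    (50_000, None,   0.42),
]

def check_brackets(brackets):
    """Static analysis: brackets must start at 0, be contiguous
    (no gaps, no overlaps), and end with an open-ended bracket."""
    expected_lower = 0
    for lower, upper, rate in brackets:
        if lower != expected_lower:
            raise ValueError(f"gap or overlap at {lower}")
        if not 0 <= rate < 1:
            raise ValueError(f"implausible rate {rate}")
        if upper is None:
            return  # open-ended final bracket: all incomes are covered
        expected_lower = upper
    raise ValueError("model does not cover arbitrarily high incomes")

def tax(income, brackets):
    """Interpreter: progressive tax over the declared brackets."""
    total = 0.0
    for lower, upper, rate in brackets:
        top = income if upper is None else min(income, upper)
        if top > lower:
            total += (top - lower) * rate
    return total

check_brackets(BRACKETS)      # fails fast on a broken model
print(tax(60_000, BRACKETS))  # 40_000 * 0.20 + 10_000 * 0.42 = 12200.0
```

The point is not the few lines of Python, but the separation: the subject matter lives in an analysable model, and the checks run over the model itself, independently of any particular execution.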


So here is my prognosis (entering the danger zone ... :-). While traditional programming won't ever go away, we will see an increasing split between problems where AI is good enough (in terms of correctness) and those where we need (degrees of) guarantees of correctness. For the latter, formal models, plus user-friendly, domain-specific languages to create them, will flourish.


WDYT?