Imperial’s programming could go down as the most devastating software mistake of all time
May 16 2020
In the history of software mistakes, Mariner 1 was probably the most notorious. The unmanned spacecraft was destroyed seconds after launch from Cape Canaveral in 1962 when it veered dangerously off-course due to a line of dodgy code. But nobody died and the only hits were to Nasa’s budget and pride.
Imperial College’s modelling of non-pharmaceutical interventions for Covid-19, which helped persuade the UK and other countries to bring in draconian lockdowns, could supersede the failed Venus space probe and go down in history as the most devastating software mistake of all time, in terms of economic costs and lives lost.
Since publication of Imperial’s microsimulation model, those of us with a professional and personal interest in software development have studied the code on which policymakers based their fateful decision to mothball our multitrillion-pound economy and plunge millions of people into poverty and hardship.
And we were profoundly disturbed by what we discovered. The model appears to be totally unreliable; you would not stake your life on it. First, though, a few words on our credentials.
I am David Richards, the founder and chief executive of WANdisco, a global leader in Big Data software which is jointly headquartered in Silicon Valley and Sheffield.
My co-author is Dr Konstantin “Cos” Boudnik, the VP of architecture at WANdisco, and author of 17 US patents in distributed computing and a veteran developer of the Apache Hadoop framework that allows computers to solve problems using vast amounts of data.
Imperial’s model appears to be based on a programming language called Fortran, which was old news 20 years ago and – guess what? – was the code used for Mariner 1.
This outdated language contains inherent problems with its grammar and the way it assigns values, which can give rise to multiple design flaws and numerical inaccuracies.
One file alone in the Imperial model contained 15,000 lines of code. Try unravelling that tangled, buggy mess, which looks more like a bowl of angel hair pasta than a finely tuned piece of programming. Industry best practice would have 500 separate files instead.
In our commercial reality, we would fire anyone for developing code like this and any business that relied on it to produce software for sale would likely go bust. The approach ignores widely accepted computer science principles known as “separation of concerns”, which date back to the early Seventies and are essential to the design and architecture of successful software systems.
The principles guard against what developers call CACE: Changing Anything Changes Everything.
Without this separation, it is impossible to carry out rigorous testing of individual parts to ensure full working order of the whole.
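To illustrate the principle in general terms (a minimal Python sketch with invented names; this is not Imperial’s actual code, and the functions and figures are purely hypothetical), a model split into small, separate components can have each part verified on its own before the whole is trusted:

```python
# Hypothetical sketch of "separation of concerns": each piece of model
# logic lives in its own small, independently testable function.
# All names and numbers here are illustrative, not from any real model.

def infection_rate(contacts_per_day: float, transmission_prob: float) -> float:
    """Expected new infections per infected person per day."""
    return contacts_per_day * transmission_prob

def project_cases(initial_cases: float, rate: float, days: int) -> float:
    """Simple exponential projection of case numbers over a period."""
    return initial_cases * (1.0 + rate) ** days

# Each unit can be checked in isolation, so a change to one function
# cannot silently break the other -- the opposite of CACE.
assert infection_rate(10, 0.05) == 0.5
assert project_cases(100, 0.0, 30) == 100.0
```

A change to `infection_rate` can then be caught by its own test, without re-validating every other part of the system.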
Testing allows for guarantees. It is what you do on a conveyor belt in a car factory. Each and every component is tested for integrity in order to pass strict quality controls. Only then is the car deemed safe to go on the road.
As a result, Imperial’s model is vulnerable to producing wildly different and conflicting outputs based on the same initial set of parameters.
Run it on different computers and you would likely get different results. In other words, it is non-deterministic. As such, it is fundamentally unreliable. That raises the obvious question of why our government did not get a second opinion before swallowing Imperial’s prescription.
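One common way a simulation can return different answers on identical inputs, sketched here in Python purely for illustration (the toy model and its numbers are our own invention, not Imperial’s), is reliance on an unseeded random number generator:

```python
import random

def simulate_outbreak(initial_cases: int, days: int, rng: random.Random) -> int:
    """Toy stochastic model: each case infects 0-2 others per day.
    Purely illustrative; not based on any real epidemiological code."""
    cases = initial_cases
    for _ in range(days):
        cases += sum(rng.randint(0, 2) for _ in range(cases))
    return cases

# With a fresh, clock-seeded generator, two runs on the very same
# parameters will usually disagree -- the non-determinism described above.
run_a = simulate_outbreak(5, 3, random.Random())
run_b = simulate_outbreak(5, 3, random.Random())

# With an explicit seed, the same parameters give the same answer, always.
assert simulate_outbreak(5, 3, random.Random(42)) == \
       simulate_outbreak(5, 3, random.Random(42))
```

The fix is well known: control every source of randomness explicitly, so that identical inputs are guaranteed to yield identical outputs.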
Ultimately, this is a computer science problem, so where were the computer scientists in the room? Our leaders did not have the grounding in computer science to challenge the ideas and so were susceptible to the academics.
I suspect the Government saw what was happening in Italy with its overwhelmed hospitals and panicked. It chose a blunt instrument instead of a scalpel and now there is going to be a huge strain on society.
Defenders of the Imperial model argue that because the problem – a global pandemic – is dynamic, then the solution should share the same stochastic, non-deterministic quality. We disagree.
Models must be capable of passing the basic scientific test of producing the same results given the same initial set of parameters. Otherwise, there is simply no way of knowing whether they will be reliable.
Indeed, many global industries successfully use deterministic models that factor in randomness.
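The standard technique for combining randomness with reproducibility, sketched below as a generic Python example rather than any particular industry’s code, is to fix the pseudo-random seed, so the stochastic model passes the basic scientific test above:

```python
import random

def monte_carlo_pi(samples: int, seed: int) -> float:
    """Estimate pi by sampling random points in the unit square.
    The model is stochastic, yet a fixed seed makes the result
    exactly reproducible across runs and machines."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples

# Same seed, same parameters -> the identical result on every run.
assert monte_carlo_pi(10_000, seed=1) == monte_carlo_pi(10_000, seed=1)
```

Randomness is thus a controlled input to the model, not an uncontrolled property of it.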
No surgeon would fit a pacemaker in a cardiac patient knowing it relied on such an unpredictable approach, for fear of jeopardising the Hippocratic oath.
Why on earth would the Government place its trust in the same when the entire wellbeing of our nation is at stake?