The Swiss government’s Spiez Laboratory, one of whose specialisations is the study of deadly toxins and infectious diseases, is located right in the heart of Switzerland, incidentally not too far away from the Reichenbach Falls, where Sherlock Holmes vanquished Professor Moriarty (more about him later) in ‘The Final Problem’.
Nine months ago, scientists at the Lab ran an experiment in which they deployed their artificial-intelligence-driven drug discovery platform, MegaSyn, to see how it would perform if untethered from its usual parameters. Like many AI platforms, MegaSyn relies on a large database (in this case public databases of molecular structures and related bioactivity data), which it ordinarily uses to learn how to assemble new molecular combinations and so accelerate drug discovery. The rationale is that MegaSyn can steer away from toxicity in molecules and thus sift out ‘good’ ones.
In the Spiez experiment MegaSyn was freed from the constraint of producing good outcomes and, having run overnight, generated nearly 40,000 designs for potentially lethal, bioweapon-grade compounds (some as deadly as the nerve agent VX). It is a stark example of machines, unconstrained by morality (the humans involved having willingly crossed that moral threshold), producing very negative outcomes.
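The mechanics of the flip are simple to sketch. What follows is a hypothetical illustration in Python, not MegaSyn’s actual code: a toy candidate generator paired with a stand-in toxicity score, where a single sign change turns ‘avoid toxicity’ into ‘maximise it’.

```python
# Hypothetical sketch (not MegaSyn's actual code) of flipping a generative
# model's objective from penalising to rewarding predicted toxicity.

def predicted_toxicity(candidate: str) -> float:
    """Stand-in for a model trained on public bioactivity data."""
    return (sum(ord(ch) for ch in candidate) % 100) / 100

def search(n_candidates: int, avoid_toxicity: bool = True) -> str:
    # In normal drug discovery toxicity is penalised (sign = -1);
    # removing that constraint flips the sign, and the same machinery
    # then optimises *for* toxicity instead.
    sign = -1 if avoid_toxicity else 1
    candidates = [f"molecule-{i}" for i in range(n_candidates)]
    return max(candidates, key=lambda c: sign * predicted_toxicity(c))

print(search(10))                        # least 'toxic' toy candidate
print(search(10, avoid_toxicity=False))  # most 'toxic' toy candidate
```

The point of the sketch is how small the change is: the model, the data and the search loop are identical in both modes; only the objective’s sign differs.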
Another recent example is the reported conversation between Blake Lemoine, a Google employee, and a computer program called LaMDA, which Lemoine publicly claimed was sentient. Whether or not this is true, we are at a stage where AI is advancing towards AGI, or Artificial General Intelligence, where computers can learn and begin to think like humans, not unlike the scenario of Alan Turing’s famous ‘test’. Indeed, an AI program called GPT-3 can already write half-decent fiction.
Scarier still, there is already plenty of evidence that AI is playing a military role. In Ukraine, drones have been programmed to recognise Russian military equipment and attack it. Larger nations can harness AI in weapon systems so that they seek out and destroy the enemy, and having seen the effect that drone technology had in the Nagorno-Karabakh war, we may not be far from an AI-driven war.
This example and the broader emerging debate around AI give us a sense that in the new world order now being formed there are multiple, complex axes. For example, much is made of the growing strategic rivalry between the USA and China, and part of this rivalry will surely focus on AI – in terms of computing power and access to large public and private data sets (Europe is ahead of both the US and China in seeking to rein in how data is used in AI). Within these large regions, another line of tension will run through the impact that AI has on people’s lives (on minorities, for example).
It is, however, not all negative. In a widely reported experiment last week, a project called Democratic AI allocated the proceeds of an investment game in a way that was more egalitarian than the allocations chosen by purely human actors. It suggests that while the research benefits (and dangers) of AI in settings like biotechnology are more tangible, there are also very clear policy applications (for democracy and public policy).
At this stage, it is not controversial to say that most governments are far behind where they need to be in understanding and better marshalling the effects of AI on our lives (from insurance contracts to airline prices to the interaction between social media and politics). While I can’t claim to have a clear insight myself, I can recommend a few decent resources – the State of AI report, Kai-Fu Lee and Qiufan Chen’s book ‘AI 2041’, not to mention the entertaining ‘The Love Makers’ by Aifric Campbell.
Now back to Moriarty, another man with dark dreams of being ‘world king’. Rumour has it that one of the people who inspired Arthur Conan Doyle’s characterisation of Moriarty was George Boole, Professor of Maths at University College Cork from 1849. Boole, one of the great mathematicians, created Boolean algebra, which laid the foundations for computer language and is thus the structure around which scientists use machines to mimic and ‘improve’ on human behaviour.
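Boole’s legacy is easy to glimpse: the identities of his algebra survive unchanged as the logical operators of every programming language. A minimal sketch in Python, checking De Morgan’s laws over the whole truth table:

```python
# De Morgan's laws, two of the identities of Boolean algebra:
#   not (A and B)  is equivalent to  (not A) or (not B)
#   not (A or B)   is equivalent to  (not A) and (not B)
for A in (False, True):
    for B in (False, True):
        assert (not (A and B)) == ((not A) or (not B))
        assert (not (A or B)) == ((not A) and (not B))

print("De Morgan's laws hold for every truth assignment")
```

These identities, written down in the 1840s, are the same ones compilers use today to simplify conditional logic.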
I spent years in the basement of UCC’s Boole Library, slaving away on AI – though I didn’t know it at the time. The trouble is that back then it was called regression analysis, data sets were very, very limited, and computing power was, from today’s perspective, prehistoric (see my account).
If I had known that the regression caterpillar would turn into an AI butterfly I might have stuck with it. My lesson is that computing power, and in certain cases data sets, will improve further, and as they do, they will push the boundaries of law, moral philosophy and strategic competition between the large regions.
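The caterpillar and the butterfly really are continuous: ordinary least-squares regression, the staple of that basement work, is mathematically a one-layer ancestor of today’s neural models. A minimal sketch in Python (the data points are invented for illustration):

```python
# Ordinary least squares for y = a + b*x: the 'regression caterpillar'.
# The data points below are invented for illustration, roughly y = 2x.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# slope = cov(x, y) / var(x); the intercept is chosen so the fitted
# line passes through the point of means (mean_x, mean_y)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

print(f"fitted line: y = {intercept:.2f} + {slope:.2f}x")
```

Swap the hand-derived formula for gradient descent, stack many such fits in layers, and feed them today’s data sets and hardware, and the butterfly emerges.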
Time to bring back Sherlock!
Have a great week ahead,