Last week the excellent MIT Tech Review came through the letterbox – it is reassuring that in a hyper-tech age some people still have the grace to publish printed material. Part of the Review was devoted to a list of the top 35 innovators under 35, about which two things struck me. The first is that 23 of the 35 innovators were born outside the US, a fact that should temper the belligerence of the ‘send them home’ crowd.
The second is that at least one fifth of the innovators are working with Artificial Intelligence (AI). This reflects a broader trend. Whilst these scientists and entrepreneurs are at the forefront of AI science, I have a growing feeling that, like blockchain two years ago and the ‘dot.com’ mania some twenty years ago, many entrepreneurs are injecting the term ‘AI’ into their business descriptions in order to boost their profiles.
If anything, this proves that humans will abuse and manipulate technology for their own ends, rather than the other way around. Indeed, I recently read a research paper in ‘Science’ magazine that described how false news diffused more quickly than true news, and that humans rather than bots were responsible for this effect.
A cynic might describe AI as a liberated regression equation set loose on a beefed-up data set. Thankfully, far cleverer minds than mine have focused on this area, and two excellent books I can recommend are Shoshana Zuboff’s ‘The Age of Surveillance Capitalism’ and Kai-Fu Lee’s ‘AI Superpowers’.
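To make the cynic’s jibe concrete, here is a minimal sketch of that ‘liberated regression equation’: ordinary least squares fitted to a toy data set. The `fit_line` helper and the numbers are my own illustration, not anything from the books or the Review.

```python
def fit_line(xs, ys):
    """Closed-form simple linear regression: returns (slope, intercept).

    A playful stand-in for 'AI': find the line y = a*x + b that best
    fits the data in the least-squares sense, using pure Python.
    """
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance of x and y, and variance of x (both unnormalised).
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    slope = cov_xy / var_x
    intercept = mean_y - slope * mean_x
    return slope, intercept

# A made-up 'beefed-up data set': points lying on y = 2x + 1.
xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]
slope, intercept = fit_line(xs, ys)
print(slope, intercept)  # prints 2.0 1.0
```

The serious point behind the joke is that much applied machine learning really is statistical curve-fitting at scale; what changes the picture is the size of the data pools it is set loose on.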
One of the tricks I employ in ‘The Levelling’ is to resurrect Alexander Hamilton and ask him what advice he would give the great powers as they march into the twenty-first century. With the US in mind, he warns that new technologies often develop quickly – too quickly for legal, philosophical and regulatory frameworks to keep up with them. Moreover, he states that those countries that can set the standards and rules of the game around new technologies will largely control the development of those technologies.
In this respect AI is interesting in that the technological know-how behind it is fairly evenly spread between the US, China and even Europe. Yet China is far ahead in terms of the data pools that AI technologies can draw on, and is far more permissive about the application of AI and AI datasets in socio-political settings. Partly as a response, there is now a project at Harvard that aims to educate US and Chinese data scientists on the ‘social’ dangers associated with the use of AI by governments.
Of course the USD 5bn fine that the US Federal Trade Commission has hit Facebook with is an even grander signal that regulators are taking bolder action, though the fact that Netflix produced a film on the Cambridge Analytica debacle in less time than it took the regulators to act shows that the regulatory pace may need to be increased.
Thankfully, the OECD has now stepped into this space and in a recent, detailed book (‘Artificial Intelligence in Society’) it lays out five Principles on Artificial Intelligence. I cannot do justice to the fine work of the OECD here, but essentially the principles state that AI should ‘do no harm’ – respecting humans and the rule of law, benefiting the planet, and valuing transparency and accountability in its use. It is a useful framework but, to my earlier point, I suspect that the appetite to enforce it will vary greatly across regions.
The OECD book is also useful in underlining the extent to which capital is now attracted to AI – for example, AI start-ups attracted 12% of global private equity investments in the first half of 2018. This is not surprising given the potential of AI in applications such as transport, healthcare and medical diagnosis, and financial services.
This trend will likely continue and may well exacerbate some of the odder macro trends we have seen during this business cycle – lower spend on capital expenditure (because business models have become more capital-light), the dampening of inflation, the disruption of labour markets and an ambiguous impact on productivity. As it grows, the effect of AI on economies may, for example, make the job of central bankers more difficult. But, as in medicine, it might be that artificial intelligence could prove more accurate than humans in diagnosing what ails our economies. Imagine if Christine Lagarde were the last human to chair the ECB, and her successor were a machine, sitting high in the ECB tower in Frankfurt.
Have a great week ahead,