
One of the characteristics of a chaotic world order, bedazzled on the one hand by a new technology and deeply troubled on the other by multiple wars and economic stress, is the disagreement among different markets about the state of the world. For example, the US stock market has recently hit new highs, yet at the same time much of the commodity complex portends an energy crisis (mostly in Asia and Europe) and a food crisis (in Asia and Africa).
Often these disagreements arise when sets of investors are laser-focused on very specific asset classes and perhaps do not talk to each other as much as they should. For a long time this was a perennial problem between the equity and credit departments of investment banks: in periods of stress, credit analysts often took a much more negative view of individual companies than their equity counterparts. I recall that during market crises (the dot-com bust and the global financial crisis), some banks brought credit and equity analysts together so that the balance sheet and the profit and loss statement could speak to each other, as it were.
This initiative came to mind last week when I read that in one part of the impressive new JPMorgan headquarters in Manhattan, market strategists were busy upgrading the bank’s 2026 forecast for the S&P 500 index to 7,600, while, very likely, in another part of the building, lawyers, bankers and chief technology officers were scrambling to figure out the implications of Anthropic’s latest model, Mythos, for the banking system. Many of you will have read that upon the launch of the model, the Treasury Secretary summoned executives from the largest American banks to ponder how Mythos, if used the wrong way, could collapse the financial system.
Mythos is impressive but, to put it simply, also very scary. In tests it has easily outperformed other Anthropic models and scored at the very upper limit of cybersecurity-related tests, detecting thousands of cyber vulnerabilities in the software it examined. The problem is that an unconstrained Mythos can do the exact opposite: exploit bugs in software and execute deadly attacks on IT systems.
Because of this, Anthropic has limited the release of Mythos to twelve companies (‘Project Glasswing’), with the aim of finding and repairing deficiencies in software. The risk, of course, is that other platforms or states develop models as powerful as Mythos (some Chinese models are said to be only seven months behind) and apply them to nefarious ends. Indeed, there are reports that three major Chinese AI firms have set up 24,000 accounts on Claude in order to try to hijack its capabilities.
The fact that the Treasury, and other bodies like the Bank of England, are scurrying to determine Mythos’ threat to financial systems is of great concern, and the advance of this model is now a national security issue across most developed nations. The risk of a large, near-autonomous attack on software infrastructure, financial systems and public institutions is now very real, and makes Robert Harris’ 2011 novel ‘The Fear Index’ highly prescient (it’s one for the beach this summer).
The advent of Mythos also strikes a chord with a chart in a recent research paper from the Dallas Federal Reserve that illustrates the potential for AI to affect long-term economic growth. Economists normally deploy three types of forecasts – a baseline that tracks historic growth rates, an optimistic version a little above it, and a corresponding pessimistic one. In this paper, however, the optimistic scenario goes vertically upwards and the pessimistic one goes vertically downwards, capturing the potential of AI either to revolutionise the economy or to destroy humanity. In a later chapter I explore the economic and financial effects of AI in more depth, and how it may trigger or exacerbate a debt crisis.
More importantly, it brings into focus the idea of sovereign AI: the notion that governments control, and need to control, AI models and their data, energy and funding supply chains. Anthropic’s recent popularity is partly based on its independence from the US administration, but it is nonetheless in the American camp. China is catching up fast (DeepSeek is taking on new investors) but, apart from Mistral, Europe is lagging.
There are several things governments can and must do, and I am thinking of democracies like the UK, Japan, Canada and the EU. The first is state capitalism (a full description of which is outlined in a recent note from David Skilling): this can involve sovereign wealth funds taking bigger stakes in AI model builders, the build-out and reform of energy systems, and the further concentration of poles of AI excellence. Europe has been too slow here so far, and in my experience confuses a move towards state capitalism with a disinterest in incentivising private investment.
A further initiative will be the enhancement of national cybersecurity institutes, in the context of greater collaboration between aligned nations and a rethinking of how critical infrastructure works, how it is powered and where its multiple vulnerabilities lie. The terrifying risk is that AI is developing so quickly that a small group of malicious individuals, aided by an army of cyberagents, could disable a small country like Belgium.
When ChatGPT burst onto the scene in late 2022, key figures in the industry issued two letters (‘The Pause Letter’ in March 2023 and ‘The Extinction Letter’ in May 2023) warning of the capabilities of AI and of its potential misuse. The letters were greeted with some scepticism and, in the giddy rush into AI stocks, were quickly forgotten. It turns out the signatories may have been on to something.
Have a great week ahead,
Mike
