What Will 2026 Bring?

It’s that time of year when investors and economists release their prognostications for the year ahead, and, eclectic and contrarian as we like to be, The Levelling brings you its top ten themes for 2026. With apologies for the length of the note, this week we are simply giving you the first five themes, with the others to follow next week. It’s really one to print out and read with a coffee, or even a stiff drink.

Given the approach of the holidays, we have also added in some pertinent film and book recommendations.

Some of the ten themes we flag here are based on observations we have made during the year and relate to trends that are now becoming clearer, chief amongst them the imprint of AI on economies, geopolitics, and society.

We hesitate to make outright forecasts for GDP and rates for two reasons: first, we expect growth to rise modestly during the year (though this is very much dependent on the capex cycle), and second, most of the interesting developments will take place at the sector level.

#1 RAIlway Boom

In the late 1990s, as the dot.com bubble built, there was a polite debate amongst central bankers as to whether or not an asset price bubble was present in stock markets, most notably in dot.com related companies. The upshot of the debate was that even if the central bank could identify a bubble, there wasn’t much it could do to puncture it (notwithstanding Alan Greenspan’s ‘irrational exuberance’ moment).

Today, central banking has changed, and so too have asset bubbles. There is a very broad narrative – from investors and economists – that we are indeed in a ‘bubble’; the only question is whether markets are in the foothills or at the peak of it. My sense is more ‘foothills’ than peak, largely because we are not yet seeing the folly and exuberant behaviour that was present in 2000 (I will share some stories in a future note).

Of course, the obvious danger of such a narrative is that, for some investors at least, it permits the belief that they can continue to buy very expensive assets and later hand them off to ‘greater fools’, along with the illusion that they themselves will not ultimately be the fools.

Every asset bubble needs an underlying logic, a belief that ‘this time it’s different’, and this is supplied in spades by the adoption of and investment in Artificial Intelligence (AI). Signs that companies and households are deploying AI are manifold. This bubble is also different in that AI is producing revenues, as evidenced in the operating and market performance of large AI-centric firms (the so-called ‘Magnificent Seven’ companies, which together now make up nearly 40% of US stock market capitalization). But those earnings are predicated on the success of the AI business model and are increasingly circular, in that investment by Meta becomes revenue for Nvidia, and so on.

What is altogether less clear to me is how the economics of AI play out. While the adoption of AI is occurring more quickly than that of other technologies (the internet, for instance), competition will surely lower margins quickly. Chinese projects are a case in point, and some of the large US AI platforms, of which OpenAI is the leader, may find their economic models undercut.

Nor is the distribution of the productivity benefits that convincing – specialized firms and operators with access to proprietary data will be able to leverage AI to great benefit, along the lines of my ‘One Man and his Dog’ thesis. For most people, however, once some basic administrative tasks have been swallowed by AI applications, the positive economic impact on their lives might be more limited. Another consideration is that AI model technology is in the hands of a small number of investors, so the capital productivity benefits may also be limited.

The Future: The AI boom or bubble is gathering momentum. Levels of capital investment (relative to GDP) are already surpassing those of prior bubbles, but have not yet attained the giddy heights reached during the railway bubble around 1900. The railway bubble was one of the great asset bubbles – and helped build the crucial infrastructure of the first wave of globalization. In 1900, investment in railway infrastructure amounted to 6% of GDP; AI today is just over 1.3%. Also, at the turn of the twentieth century nearly 60% of the market capitalization of the US stock market was made up of railway stocks (today it is 0.3%), which as a rule of thumb suggests we might see talk of a USD 10trn valuation for Nvidia and SPX 10,000 (the US S&P 500 index hitting 10,000 points) as a ‘sell everything’ moment.
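For readers who like to see the arithmetic, here is a rough back-of-the-envelope sketch of the capex comparison above. It uses only the percentage shares quoted in this note plus an assumed US GDP of roughly USD 29trn, which is an illustrative assumption rather than a precise figure.

```python
# Back-of-the-envelope comparison of AI capex today with railway capex in 1900.
# The GDP level is an assumed, approximate figure for illustration only;
# the percentage shares are those quoted in the note above.
US_GDP_TRN = 29.0          # assumed US GDP, USD trillions
AI_CAPEX_SHARE = 0.013     # AI-related investment, ~1.3% of GDP
RAIL_CAPEX_SHARE = 0.06    # railway investment in 1900, ~6% of GDP

ai_capex = US_GDP_TRN * AI_CAPEX_SHARE
railway_era_equivalent = US_GDP_TRN * RAIL_CAPEX_SHARE

print(f"AI capex today: ~USD {ai_capex:.2f}trn per year")
print(f"Railway-era intensity on today's GDP: ~USD {railway_era_equivalent:.2f}trn per year")
print(f"Gap to 'railway mania' levels: ~USD {railway_era_equivalent - ai_capex:.2f}trn per year")
```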

Read: Charles Kindleberger’s ‘Manias, Panics and Crashes’

#2 ‘Dalloway’

One of the more memorable films I saw in 2025 is Dalloway, a French film starring the ever-excellent Cecile de France, which I hope will make its way to the Anglophone world. The object of the film is to show how pervasive and sometimes pernicious AI could become as a social force, and as we head into 2026, this is a theme that will become more important – in healthcare, labour markets and society – and more startlingly obvious.

To start with an alarming example: in 2021 the Swiss government’s Spiez Laboratory – located in the heart of Switzerland, and one of whose specialisations is the study of deadly toxins and infectious diseases – performed an experiment in which it took an artificial intelligence driven drug discovery platform called MegaSyn and investigated how it might perform if it were untethered from its usual parameters.

Like many AI platforms, MegaSyn relies on a large database (in this case public databases of molecular structures and related bioactivity data), which it ordinarily uses to learn how to fasten together new molecular combinations and so accelerate drug discovery, the rationale being that MegaSyn learns to avoid toxicity. In the Spiez experiment MegaSyn was left unconstrained by the need to produce good outcomes and, having run overnight, produced nearly 40,000 designs of potentially lethal, bioweapon-grade combinations (some as deadly as VX). It is an excellent example of machines, unconstrained by morality, producing very negative outcomes. It’s a chilling tale of the tail risks of AI.

More commonly, AI will increasingly become part of our economic and social lives, and its effects will be more apparent.

In labour markets, there is already plenty of evidence to suggest that AI is curtailing hiring, markedly so in the case of graduates. When AI and robotics start to combine, they can have very positive outcomes (in education and elderly care, for instance), but in warfare (see the Netflix documentary ‘Unknown: Killer Robots’), fruit picking, warehouse management and even construction – to give a few examples – the blue-collar labour force will feel the effect. This could set up a political reaction, and we might well see a Truth Social post from the White House to the effect that AI is not such a great idea and needs to be regulated.

A potential side-effect of the more negative effects of AI on the labour market could be a rise in anxiety and what social scientists call ‘anomie’. Much the same is becoming clear from the ways in which social media is skewing human sociability (think of declining fertility rates, pub closures and mental health effects). As such, the social effects of AI may lead to ‘deaths of despair’. If this is grim, there is potentially very positive news in the use of AI to improve medical diagnoses in inexpensive ways, and the marginal impact of this in emerging countries could be very significant (leading AI firm Anthropic is targeting science and healthcare in terms of applied AI solutions).

The Future: The economic and social side-effects of AI will become clearer – many of them will be positive, but others will start to provoke a political reaction. While the EU has softened some of the restrictions in the EU AI Act, the interesting development is that at the state level in the US there is a growing desire to curb some of the effects of AI, a trend that is supported by case law. Moreover, US politicians (Republican Senator Josh Hawley is an example) are increasingly vocal about the negative side-effects of AI on labour markets and education.

Read: Carl Benedikt Frey’s ‘The Technology Trap’ (2019), and Robert Harris’ ‘The Fear Index’ (2011)

Watch: ‘Dalloway’ (2025)

#3 AI Cold War

A further facet of AI to keep an eye on is geopolitics, and as we leave 2025 behind, we will hear more about the notion of an AI Cold War or, as a good Pitchbook note puts it, ‘Sovereign AI’. This emerging idea refers to the strategic uses of AI in the context of strategic competition between the ‘great’ powers. The race is already on, and the US is in the lead, with China chasing behind (my recent note on The Plenum details how China is prioritizing frontier technologies as the spearhead of its economic plan). Europe is very much in third place, with energy policy and half-formed capital markets the biggest obstacles.

In a ‘Cold War’ AI world, model development and deployment increasingly take a multipolar form (see #8 below), regulation is competitive and technology firms closely align with governments – forming symbiotic parts of national infrastructure – while national security considerations are embedded into investment processes and supply chain planning. In time, governments may steer model developers towards new datasets if there is a strategic advantage to be gained.

The Future: From an investment point of view, we expect private equity/credit to become an enabler of this trend, and for their part governments will open up the flow of pension capital to private asset classes. Governments may also become more active investors – either in steering merger and consolidation activity, or in the fashion of the Trump administration, taking stakes in firms that are judged to be strategic. Military uses of AI will become more commonplace, and we will slowly learn more about the effects of this on navigation systems, genetics, finance and social media, to name a few.

Read: ‘Breakneck’ by Dan Wang (2025), and ‘Chip War’ by Chris Miller (2022)

Watch: Dr Strangelove (1964)

#4 Expensives to Defensives 

An age-old joke has it that a traveller asking for directions is told, ‘I wouldn’t start from here’. It is much the same for investors looking into 2026, though less so for tactical traders who believe that they can time the ebbs and flows of the emerging stock market bubble.

The dilemma for asset allocators is that with the US stock market making up some 60% plus (depending on the benchmark) of world market capitalization, and trading at near record valuation multiples (price to earnings, price to long-term earnings (the Shiller PE), or even market capitalization to GDP (the Buffett Indicator)), exposure to American assets is increasingly expensive and risky.

For example, a model that combines monetary, business cycle and market valuation indicators suggests that, from this point onwards, returns over the next couple of years for US equities will be close to zero. Add to that the fact that the dollar still looks expensive and corporate bond (and high yield) spreads are very narrow, and the conundrum for allocators next year will be considerable.
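For readers less familiar with the two valuation yardsticks mentioned above, here is a minimal sketch of how they are computed. The input numbers are made up purely to show the mechanics and are not market data.

```python
# Two common valuation yardsticks, sketched with illustrative (made-up) inputs.

def shiller_pe(price: float, real_earnings_10y: list) -> float:
    """Price divided by the ten-year average of inflation-adjusted earnings."""
    return price / (sum(real_earnings_10y) / len(real_earnings_10y))

def buffett_indicator(total_market_cap: float, gdp: float) -> float:
    """Total stock market capitalization as a share of GDP."""
    return total_market_cap / gdp

# Hypothetical figures, chosen only to demonstrate the calculation
index_level = 6800.0
ten_year_real_eps = [155, 160, 170, 175, 180, 190, 200, 210, 220, 230]

print(f"Shiller PE: {shiller_pe(index_level, ten_year_real_eps):.1f}x")
print(f"Buffett Indicator: {buffett_indicator(total_market_cap=62.0, gdp=29.0):.0%} of GDP")
```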

As we end the year, volumes have been very low and speculative activity (in options) very high, which points to high levels of volatility through 2026. Remarkably, a few of the large bank CEOs have warned of significant market drawdowns.

The Future: We expect to see investors put more money to work in cheaper defensive sectors – Staples and Healthcare for example, and for capital to flow to other regions beyond the US. In addition, in the next five years, if multiple surveys of family offices and pensions are to be taken at face value, we expect private assets to make up a much more significant proportion of investment portfolios.

Read: Benjamin Graham’s ‘The Intelligent Investor’ (1949)

Watch: ‘Margin Call’ (2011), ‘The Big Short’ (2015)

#5 K-Shaped Economy

In the context of a political-economic climate in the US where good, regular economic data is hard to come by, commentary from industry leaders as they report earnings is providing some fascinating insights. For example, some weeks ago, Chipotle, the burrito chain, reported a surprise drop in revenues because two key consumer groups, households earning USD 100k or less, and younger customers (24-35 years old) are cutting back discretionary spending, even on fast food.

A range of firms with similar client bases underline this trend – car manufacturers report that sales of expensive, large vehicles are strong, but that lower-income customers are opting for smaller, fuel-efficient models. McDonald’s is revising its ‘extra value meal’ option, and credit card providers like Amex report very different types of activity, from rising card balances and distress in the lower segments to robust spending in its ‘Platinum’ category.

Economists are blithely referring to this phenomenon as the ‘K-shaped’ economy, whistling past the graveyard of economic history, which warns that revolutions are made of such obvious divergences in fortune.

All of the talk now is of a K-shaped economy, which refers to multiple divergences: between the price-insensitive wealthy and those in economic precarity who are sensitive to inflation, and between a services sector that is shedding jobs or holding back from hiring and the upper echelons of the technology and finance industries, where unprecedented levels of wealth are being created.

Two other effects are ongoing. The first is the economic effect of AI-focused capital expenditure (across the energy, logistics and technology sectors). The second, more important trend is a mangling of business cycles, such that few of them are synchronized across geographies or between the real and financial economies (German chemicals is in the doldrums but German finance is on an upswing).

Yet, a better diagnosis might be the ‘Marxist’ economy – one where the owners of capital and the source of labour are at odds.

The Future: In the US, the top 10% of the population own 87% of stocks and 84% of private businesses, according to data from the Federal Reserve. On the other hand, we have previously written about the rise of economic precarity in The Road to Serfdom. So, whilst it is a new observation amongst the commentariat, the diverging fortunes of capital and labour should start to trouble policymakers in 2026. Expect this to be a headline policy issue next year – the White House is already paring back some tariffs, and in Europe governments compete either to tax the wealthy (France and the UK) or to lure them (Italy).

Read: the NBER Business Cycle website

Watch: Falling Down (1993)

ChatCCP

Technology happens quickly. This time two years ago, few people had heard of ChatGPT, and few investors knew what Nvidia did. On January 6th 2023, I wrote a note entitled ‘Talos’ where I remarked:

’A recent development here is the arrival of ChatGPT, an interactive ‘intelligent’ bot that has been developed by OpenAI (set up seven years ago to build socially constructive AI, and recently valued at USD 30bn). ChatGPT is catching on quickly, not least because students have found that it can write half-decent essays. I recently tested it out, asking for a response to the question ‘Is globalization over?’ – the result is below, and in my humble opinion is a good rendition of the kind of response that a ‘two-handed economist’ might give (‘on one hand… on the other’). I think I can just about do better, and if there is any lesson to draw, it is for human writers to be more opinionated, quirky or style-driven in how they write. I am not out of a job just yet.’

I am still writing, and don’t use ChatGPT to do so. In January 2023, my presumption was that because ChatGPT had already been in place for some time, investors knew about it and had factored it into share prices (the theory of efficient markets in finance says that all publicly available information is quickly reflected in market prices).

This was not the case, and with hindsight I realise that students of economics and finance in the 1990s spent far too much time imbibing the ‘efficient markets hypothesis’, which was a big thing at the time and which the big guns of academic finance argued over into the 2000s. I suspect that more people today believe markets are rigged than efficient, with the rigging done by vampire-squid-type funds and central banks, to name a few culprits.

The performance of AI-centric stocks (notably Nvidia) shows that the early promise of AI was underestimated, perhaps to the same extent that it is overestimated today. Indeed, the concentration of the top ten companies in the US stock market (they make up 40% of market capitalisation) is as high as it has been only once before, in 1930. In that context, the apparent supplanting of ChatGPT by DeepSeek (we might call it ‘ChatCCP’) in the Apple app store is a reality check.

It is also an efficient markets check. I had seen the initial model test results from DeepSeek at the start of this year (Azeem Azhar’s blog had mentioned it back in November, and as far back as December 2023). The technology world, in other words, has been well aware of DeepSeek for months, so the market’s shock is a surprise, and another slap in the face for the efficient markets hypothesis (I am still a little obsessed about this).

Whatever about efficient markets, the advent of DeepSeek is a reminder of how the cycle of innovation in economies works, and will have some commentators rushing for their Schumpeter. The Austrian economist coined the term ‘creative destruction’, but his views on capitalism are worth re-reading, for he believed that it would undermine itself (different cohorts or corporates would effectively block the system) and collapse. His work on business cycles is also well overdue a comeback.

We don’t quite know whether DeepSeek (who claim to have produced superior model results to ChatGPT and Claude for a fraction of the cost) has been helped by the Chinese state, whether any espionage is involved, or the extent to which it has piggybacked on work already done by US-based teams. Nor is it too important that the bandwidth of the model is restricted in China and that it is unlikely to be widely used in the West.

The important development is that the cost of production threshold for large AI models has been set much lower. Other cheaper models are on the way – ByteDance’s Doubao-1.5 and Moonshot’s Kimi k1.5 are two emerging examples.

To that end, the first phase in the AI boom now comes to an end. Much like other key technology infrastructure breakthroughs – railways (railway companies made up 50% of the stock market in 1900), automobiles (France dominated the auto market from 1903 to 1929, at times manufacturing half of the world’s cars) and the internet (anyone remember AOL or Netscape?) – the initial innovation garners huge amounts of capital and is expected to change the world, which it does, but often not in the ways investors expect. The legacy is usually a new web of infrastructure.

In the next phase of AI, investors will likely no longer pay up for model developers but will focus on unique datasets, applications of AI (health, for example) and the AI industrial supply chain – energy for AI, for instance (see last week’s note ‘Humphrey’).

The view of the stock market, which quickly rebounded, is that the consumer is the winner – that DeepSeek opens up the prospect of cheaper, better AI for common use, and I think the investor conversation will turn to how this impacts consumption patterns and the way people work (from police officers to teachers). The fact that the stock market took the positive view of the DeepSeek news suggests that the AI bubble in stocks is alive and well.

Elsewhere in AI, model development is accelerating in a sinister way. In a paper entitled ‘Frontier AI systems have surpassed the self-replicating red line’, scientists at Fudan University highlight how AI can build replicas of itself and, when this process runs into obstacles, demonstrate a survival instinct (such as rebooting hardware to fix errors). It strikes me that this is the real AI development we need to pay attention to.

Moriarty

Monday morning started with an early sprint from Baker St to Marylebone train station in London, passing close to 221b Baker St, the fabled home of Sherlock Holmes. However, Holmes was not on my mind, but rather his nemesis, Professor James Moriarty.  I was on my way to a seminar on the future of crime (especially as it concerned robots and social robots) organized by the excellent team at UCL’s Dawes Centre for Future Crime.

The rumour is that Conan Doyle based the character of Moriarty on George Boole, Professor of Mathematics at University College Cork from 1849 until 1864 (disappointingly, his former home on Grenville Place is in a state of dereliction). Boole, one of the great mathematicians, created Boolean algebra, which laid the foundations for modern computing, and it is this structure around which scientists use machines to mimic and ‘improve’ on human behaviour. To that end, Moriarty and the idea of the ‘future of crime’ should be of all the more interest to us in an AI driven world.

Whilst many serious crimes are in abeyance – murder rates in small open democracies (Switzerland, the Czech Republic and Ireland, for instance) are very low, and those in the large European economies (France, Germany, the UK and Italy) trend below 1 per 100,000 (the US rate is seven times higher) – new forms of crime are on the rise, many of them driven by technology.

Mobile phones and social media have become an entirely new vector for crime, in terms of the theft of passwords and online bank details, identity theft, and the harassment and abuse of people online. Similarly, cryptocurrencies are the conduit for much criminal activity, and it is believed that confiscations have made the FBI the biggest holder of bitcoin. In addition, the metaverse has opened up an entirely new legal space, where injuries and attacks carried out in an apparently unreal world can have consequences in the real world.

In this context, the issue at stake at the Dawes Centre seminar was whether socially intelligent robots can create new forms of crime that are not yet covered by legal frameworks and for which countermeasures have not been conceived.

It is a terrifying prospect – robots could not only be primed to deliver drugs or explosives, but imagine the consequences of robot bodyguards and guard-dogs that take it upon themselves to attack specific targets. In many cases, the prosecution of an assault by a robot would likely follow the logic of charging the owner of a dog (Britain’s Princess Anne was famously prosecuted when her dog ‘Dotty’ bit two children). On a more mundane basis, robots increasingly interact with vulnerable humans – the lonely, the elderly and, for instance, autistic children – and while these interactions are very helpful, they also open up scope for abuse. In time the growing prevalence of sex robots will cause all sorts of controversies.

However, in cases where the robot acts intelligently and autonomously, the law is not clear and frameworks on AI offer only meagre guidance. Robert Harris’ book ‘The Fear Index’ is a good illustration of what might occur when an intelligence ‘bot’ takes over a critical infrastructure, and the use of AI on the battlefield is chilling in its ruthlessness.

Heavy duty robots also permit the exploration, surveillance and protection of faraway places (the deep sea, space and remote parts of the earth) though at the same time they permit mischief by various actors such as attacks on critical marine infrastructure.

It may well be that, as in the case of driverless cars, socially intelligent robots are less dangerous than humans, though the notion itself of criminal robots is cause for concern. For the time being, many of the impetuses to the ‘future of crime’ are human factors. Some of them relate to changes in wealth – such as the rise of powerful oligarchs, and a rich ultra-high net worth class of people who on one hand are targets for crime and on the other hand have the means to dominate others and push their own visions of what society should look like. We may see more of this type of behaviour in coming years.

Then, we really get into ‘Moriarty’ territory when states begin to collaborate with individuals and gangs, as is the case with Russia for instance. States have access to intelligent and often lethal robots and can enable organized criminals to use them – it is not impossible that gangs might start to use drones in contract killings. The other, related area that is increasingly relevant here is corporate intelligence and security (Lewis Sage-Passant’s book on this topic is very good) where many corporate security teams have to combat hacking and other attacks by robots, from criminal gangs, and as was sadly demonstrated in New York last week, assassins.

If Moriarty existed today, and perhaps he does, corporates as well as governments might be his preferred targets, and he would very likely equip himself with robotic henchmen (and women) who could subvert the human world.

The question is, what would Sherlock do?

Have a great week ahead,

Mike

TechTok

Technology now a national strategic issue

There is no better template for the state of the world than the technology industry. In the highly globalized world of the mid-2000s, Google had one third of the internet search market in China; today it has close to zero. The internet is becoming multipolar – the US has internet giants that have become stock market monsters; Europe has few tech giants of its own but is leading the regulatory charge on technology; whilst China has ring-fenced its internet space while at the same time generating the world’s leading e-commerce sector and driving tech into social policy.

Technology is interesting in many other respects, but the one I find thrilling is the way it now crosses into every domain – politics, economics, markets and society. It is arguably more pervasive than the new technologies of prior periods of globalization – such as the steam engine and railways (though the technology sector will never match the 60% share of US market capitalization that railways enjoyed in 1900).

It is not surprising, then, that there is a broad feeling that tech is too big for its boots. Economically, e-commerce firms like Amazon are big enough to have pricing, scale and distributional advantages that suppress smaller players (and that large ones like Walmart do not seem able to match), and the same may be true of Apple’s App Store supermarket. Google and Facebook have become advertising behemoths and politically indispensable. Financially, these firms now account for nearly a quarter of the market capitalization of the US market (Apple added USD 170bn in market capitalization on Friday alone), and as such have become a huge ‘swing’ factor for pension funds, the ETF (exchange traded fund) industry and day traders.

The technology industry, in the USA, India and China, has become the locus of wealth inequality, creating vast fortunes for tech owners. Moreover, tech giants vastly distort both entrepreneurship and innovation. During the testimony of the CEOs of the large US tech firms to Congress it was revealed that Facebook had adopted a strategy of stifling competitive threats by buying them. The same might be true of other tech behemoths.

If so, the danger is that this stunts the growth of new tech ecosystems, distorts innovation in that new companies are built for ‘takeover’ rather than to solve new industry problems, and hoards the fruits of innovation within a few corporations.

What to do about tech should be clear to anti-trust lawyers and economists concerned with monopoly power. To this end, breaking up the large technology companies in the same fashion as the dismantling of Standard Oil in 1911, or even the Glass-Steagall Act of 1933, is an option. Another approach would be to bar tech giants from buying smaller companies so as to give new tech ecosystems a chance to thrive (note how the US failed to develop a 5G ecosystem).

A more likely option is to leave the tech monoliths in place but tax them (and possibly their owners) and harvest the fruits of their superstructures. Ideally this revenue would be funneled to education, digital literacy and cybersecurity. Even the EU sees this opportunity, and plans to fund part of its recent Recovery and Resilience plan with a digital tax – though implementing this will be difficult.

What to do about tech is less clear if you are a politician – technology has replaced television and radio as the way of reaching hearts and minds, the tech community is a source of donations, and in a multipolar world it is a strategic, security related asset. In that context one option is to deepen the ties between the state and the technology complex, as China is doing.

If anything, the signs are that the US will follow the Chinese model, notably so with suggestions that Microsoft might buy TikTok, the increasing use of data from cameras and home security systems (from Amazon), and the growing ties between the likes of Microsoft and the government in cybersecurity.

If the relationship between American tech monopolies and the state is to become even more symbiotic, it will still have rules. One for example is that in areas where the state has a monopoly, tech will not be allowed to encroach. The best illustration here is the role of the dollar and the failure of Facebook’s Libra payment system to take off.  Another consideration is what vision the large technology companies have for the US – many of them may well prefer a more data intensive world, where technology is even more deeply embedded in governance…which again takes the US towards China’s model.

Where will this leave Europe? Not having managed to create its own tech giants (and I am skeptical that it will be able to do so soon), the EU can by default focus on raising standards on data protection, digital identity and payment systems. It also needs to make real progress on capital markets union and on incentivizing tech entrepreneurs at a pan-EU level, so that companies like Stripe can thrive in Europe. If it doesn’t, it may become a tech colony, and a paradise for non-tech industries, from tourism to wine to good food!

Have a great week ahead,

Mike