
Artificial Intelligence (AI) is no match for the irrationality of markets

17th July, 2023

Published in The Sunday Times on July 16th 2023.

The first time I saw a market panic up close was twenty-five years ago this summer. I was working in the investment division of a bank.

Long-Term Capital Management (LTCM) was a multibillion-dollar hedge fund that was at the cutting edge in terms of financial sophistication. Its rise and fall is a tale of genius and hubris. The story is made for the movies (and yes, there is one): ravaged fortunes, Nobel laureates castigated, and systemic risk threatening to overwhelm some of Wall Street's biggest and best institutions.

LTCM’s main strategy was to find pairs of bonds that had a predictable spread between their prices. When that spread widened, LTCM placed heavily leveraged bets that the two prices would converge. They believed markets were rational and that historical patterns were bound to repeat, which they did. Until they didn’t.
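The convergence logic described above can be sketched in a few lines. This is a hypothetical illustration only — the function name, prices and thresholds are invented for the example, not LTCM's actual models.

```python
# Hypothetical sketch of a convergence ("pairs") trade of the kind LTCM ran.
# All numbers and names are made up for illustration.

def convergence_signal(price_a: float, price_b: float,
                       historical_spread: float, entry_threshold: float) -> str:
    """Signal a trade when the observed spread deviates from its history."""
    spread = price_a - price_b
    deviation = spread - historical_spread
    if deviation > entry_threshold:
        # Spread unusually wide: sell the expensive bond, buy the cheap one,
        # betting the spread reverts to its historical level.
        return "short A / long B"
    if deviation < -entry_threshold:
        # Spread unusually narrow: the opposite bet.
        return "long A / short B"
    return "no trade"

# Example: the spread has widened well beyond its historical 0.50 points.
print(convergence_signal(101.8, 100.0, historical_spread=0.5, entry_threshold=0.75))
# -> short A / long B
```

The bet pays off only if the spread behaves as it has historically — which is precisely the assumption that failed in 1998.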

LTCM – a stunning track record

It built an astonishing track record. One dollar invested in 1993 had quadrupled to four dollars at the peak in early 1998. Relative-value funds try to immunise themselves from general market movements, but need to employ significant amounts of leverage to generate returns. And the leverage LTCM used was extraordinary. By the end of 1997, the fund had about $4 billion in equity and a staggering $150 billion in assets (nearly forty-to-one leverage).
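The back-of-the-envelope arithmetic behind those figures shows why the leverage mattered so much — at nearly forty-to-one, even a small move in asset values is a large hit to equity:

```python
# Leverage arithmetic using the approximate figures quoted in the article.
equity = 4e9      # ~$4bn of equity at end-1997
assets = 150e9    # ~$150bn of assets

leverage = assets / equity
print(f"leverage: {leverage:.1f}x")   # 37.5x -- "nearly forty-to-one"

# At that leverage, a small percentage loss on assets is magnified:
asset_loss_pct = 0.01                 # a 1% fall in asset values...
equity_loss_pct = asset_loss_pct * leverage
print(f"equity hit from a 1% asset move: {equity_loss_pct:.1%}")  # 37.5% of equity
```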
In the summer of 1998, everything changed. A currency crisis in Asia spread, and Russia defaulted on some of its debts. As fear grew, investors sought the safety of Treasury bonds. LTCM, which had calculated with mathematical certainty that it was unlikely to lose more than $35 million on any single day, dropped $553 million on a Friday in August. With the fund teetering on the edge and threatening to bring a host of financial institutions from New York to Switzerland down with it, the Federal Reserve stepped in to coordinate a $3.6 billion bailout.

The primary lesson commonly drawn from the LTCM fiasco is that the combination of tremendous leverage and illiquid markets is a very dangerous one. That’s true, but far too obvious and simplistic to be helpful. 

What lessons should we draw from LTCM?

Roger Lowenstein’s compelling account of the debacle, “When Genius Failed”, sheds a different light. It reaches beyond the market backdrop to say something universal about risk and triumph, about hubris and failure. It highlights the great limitations of mathematical constructs in dealing with human beings, whose hopes, greed and fears are implicit in the gyrations of financial markets.

The 25th anniversary of LTCM comes at an interesting time. 2023 has seen a real breakthrough in generative artificial intelligence (AI). I can’t claim to know much at all about AI, but at its core ChatGPT is just statistics. Statistics, trained on an inconceivable amount of data.

AI and ChatGPT

ChatGPT is a word prediction model – it can very accurately restate what it has seen before. But it doesn’t understand what it has memorised. It is a poor predictor of future returns because — having been trained on data up to 2021 — it has no ability to process real-time stock market data. (It has also been programmed not to give investment advice.)

The next obvious stage of development is to see whether it can help predict financial markets when it is using real-time data. And this is where the intersection with LTCM becomes relevant.

Several AI-powered ETFs have been launched with much fanfare over the past few years. They use machine learning, sentiment analysis and natural language processing to identify patterns and trends that help select assets. However, their performance track record has been very patchy. This is not particularly surprising: active fund managers have been trying to beat the market for several decades with limited success.

ChatGPT won’t help solve financial markets 

By and large, financial markets are efficient because they are extremely competitive. The efficiency eliminates the predictability in returns. New information is incorporated into prices very quickly and makes future price movements very difficult to predict.

AI and machine learning were first employed by hedge funds decades ago, well before the recent hype. The ability of AI to vastly improve aspects of the fund management profession seems certain. But that is no elixir for generating outsized returns. The nature of competitive markets is, to borrow the economists’ joke about efficient markets, that there are no $20 bills lying on the ground, because somebody would already have picked them up. AI may become an arms race – the $20 bills lying around may be picked up faster. But I can’t see how it will increase their number.

With human beings at the core of financial markets, there’s a complexity that mathematics has struggled to unravel. In the book, Lowenstein captures this problem wonderfully by quoting G. K. Chesterton, the English writer who called life “a trap for logicians” because it is almost, but not quite, reasonable. “It looks just a little more mathematical and regular than it is… Its exactitude is obvious, but its inexactitude is hidden; its wildness lies in wait.” On that note, we should proceed with great caution.

Gary Connolly is Investment Director at Davy. He can be contacted at or on Twitter @gconno1.
