The following translated article recently appeared in Alternative Libertaire, the monthly magazine of the Union Communiste Libertaire of France.
Generative AI has become ubiquitous in the media, and especially on the internet, in just a few years. This technology raises ecological and social questions that have been widely discussed before, including in this journal [6]. But the question we are raising here is a different one: what are the economic impacts of the emergence of this technology? Who is funding it, for what purposes, and with what results so far? And what are the consequences for the tech industry, and for the capitalist economy in general?
The question is worth asking, because generative AI has attracted colossal investment from major tech players in recent years, even when compared to the sector’s enormous revenues. The return on investment is therefore being closely scrutinised by the financial world. As we will see, it has so far failed to live up to the risks taken.
Since OpenAI announced ChatGPT in November 2022, the hype surrounding these technologies has been considerable, and their supposed impact on the economy has been extensively discussed. Dramatic predictions regarding the consequences for employment are often heard: ChatGPT could put everyone, or almost everyone, out of work; no sector of production is expected to be spared.
However, the anticipated revolution seems to be taking longer than expected. A recent MIT study [1] examined the adoption of this technology across a wide range of companies. The results highlight two things. First, corporate interest is massive: over 80% of companies report having launched at least one internal project using generative AI. The second point, however, undercuts the first: in the overwhelming majority of cases (95% of companies), these projects have remained at the pilot stage and have never seen real adoption in production. More broadly, the study concludes that the arrival of generative AI has produced only limited structural changes in most of the sectors studied.
To explain this phenomenon, the study points to a key technical limitation: the inability of these tools to learn from their mistakes and to use user feedback to improve their relevance and adapt to context. Barring a new technological leap, the impact of AI on production methods is therefore likely to remain confined to a narrower scope than anticipated. This is not good news for the tech industry, which has bet heavily on the economic benefits of generative AI and desperately needs new markets to generate revenue commensurate with its wagers.
Systems that are too resource-intensive
Let’s examine the structure of the economic sector that is striving to sell us this supposed new industrial revolution. Today, only a handful of companies are actually developing activities centred around generative AI. They can be roughly classified into three types of activities.
First, there are the developers of predictive models such as GPT or Claude. These models are software programs capable of completing text (or audiovisual content such as images) in a “realistic” way, meaning similar to the data provided during training. These companies begin by extracting data (usually from the internet, legally or illegally), then using it in a very expensive training phase to refine their models. This phase requires a massive amount of computational work, necessitating huge server farms equipped with high-performance processors. The company then monetises the use of its models.
The second type of activity involves leveraging models provided by the previous companies to offer a service to individuals or businesses. The best-known product in this category is undoubtedly ChatGPT, a chatbot that uses GPT models to interact with its users. Other services exist, for example, for completing or generating computer code. This category of actors is therefore located downstream in the production chain.
The last category is upstream: these are the companies that sell the computer hardware (primarily processors) necessary for training and using generative AI models. In reality, the plural is superfluous here. One company has managed to carve out a monopoly: Nvidia, the largest designer of graphics cards, supplies almost all the processors used for training and using generative AI models.
These different players therefore have different business models. The question is whether these models are viable from a capitalist perspective. And that’s where the problem lies: of all these companies, only Nvidia manages to turn a profit. All the others are pouring astronomical amounts of money into the industry without managing to find a real market for their products.
Let’s first return to the emblematic case of OpenAI. The company falls into the first two categories we mentioned: it produces a model, GPT-5, which can be accessed through various services, placing it in the first category, as well as a chatbot, the famous ChatGPT, which places it in the second.
Anatomy of a bubble
ChatGPT is by far the most popular of all existing AI services, with a reputation comparable to that of major social networks like Facebook or Instagram. It boasts 400 million active users. However, ChatGPT has two disadvantages compared to social networks: first, the advertisements that generate revenue for those platforms are less well integrated; second, the usage costs per user are massively higher. Consequently, even paid ChatGPT subscriptions fall far short of covering usage costs, to the point that each new user deepens OpenAI’s deficit.
The company remains vague about its financial results. However, it can be estimated that it earned $4 billion in revenue in 2024 [2]. But the cost of training and running its models alone would reach $5 billion. Adding other costs such as salaries brings total expenditure to $9 billion, resulting in a net loss of $5 billion. To offset these losses, OpenAI is raising funds at a frenetic pace, undoubtedly unprecedented in capitalist history: it raised $10 billion in June 2025, before raising another $8 billion in August of the same year.
Despite these lacklustre results, OpenAI is arguably the best-performing company in the sector, Nvidia aside. The other models are far less widely used and generate significantly less revenue. Startups attempting to build services on top of these models face mounting difficulties: they struggle to provide real added value to other sectors, and the few services they do offer are limited in variety and often amount to some form of chatbot. The exception to this rule is Cursor, an AI-based code editor that is seeing genuine adoption, though it is not yet profitable. Even there, the productivity gains for the IT industry fall far short of the spectacular claims made by their suppliers [3].
The reliability of the models also remains a problem: AI-generated code continues to contain errors and security flaws, and text generation continues to suffer from “hallucinations”, commonly producing false scientific references, for example. These problems are amplified as tasks become more complex.
Another major problem for these startups is their heavy reliance on access to AI models (GPT, Claude, etc.). Since model production is currently a financial drain, the companies that supply them could be forced to raise their prices drastically, which would in turn make the already fragile business model of the startups that depend on them even more unsustainable.
Degeneration
To overcome these contradictions, the industry is counting on a new technological leap. But this path seems doomed to failure. The quality of the models depends primarily on the quality and quantity of the input data.
However, the industry is starting to run out of new data: it has already used virtually everything available on the internet. AI is beginning to face a paradoxical problem: an increasingly large portion of its training data is itself synthesised by AI, leading to model degeneracy [4]. It is clear that AI progress is plateauing and that improvements are becoming increasingly marginal. The recent release of GPT-5 has only heightened these concerns, as the new model has not lived up to its promises [5].
Faced with this impasse, OpenAI and its ilk will eventually be forced to restrict free access to their models, to degrade the quality of the service offered at the same price, or even to charge more for it – a phenomenon already underway. For now, however, denial is leading them to double down on a massive investment policy, buying ever more equipment, without yet managing to increase their revenue.
The entire sector is in a very precarious position. Tech companies have embarked on a desperate race that has all the hallmarks of a financial bubble. The investments made don’t even constitute capital usable in the long term – intensive use shortens the processors’ lifespan, and at this rate the entire fleet will need to be replaced within a few years. If the bubble were to burst, the sector would be left with an absurd number of servers it wouldn’t know what to do with.
The reasons that led to this vicious cycle are the same underlying reasons that make capitalism a perpetually crisis-ridden system. Since at least the early 2000s, the tech sector has been built on the assumption of continuous hypergrowth. Recent years have seen this hypergrowth falter, prompting a series of desperate attempts to revive it artificially: the “metaverse”, blockchains and NFTs, and then generative AI. It is becoming clear that this model is reaching the limits of its contradictions.
The shockwave that a collapse of generative AI could produce would have consequences for the economy as a whole, with the most vulnerable, as always, being the first victims. Time will tell whether capitalism will be able to rebound from this crisis as it did from the 2008 financial crisis, or whether, on the contrary, these contradictions will lead to deeper upheavals, for better or for worse.
Nicolas (UCL Caen)
[1] “The GenAI Divide: State of AI in Business 2025”, MIT, July 2025.
[2] Edward Zitron, “There Is No AI Revolution”, Wheresyoured.at, February 24, 2025.
[3] Mike Judge, “Where’s the Shovelware? Why AI Coding Claims Don’t Add Up”, Mikelovesrobots.substack.com, September 3, 2025.
[4] “Can artificial intelligence collapse on itself?”, Le Monde, September 10, 2023.
[5] Christophe @Politicoboytx, “ChatGPT-5 threatens to implode the generative AI bubble”, faketech.fr, August 21, 2025.
[6] “Artificial intelligence: AI at the service of the bourgeoisie”, Alternative libertaire no. 358, March 2025.
