When talking about short-term profits, people often make bearish claims about the economics of AI labs: “it’ll get commoditized” or “China will compete your margins down.” However, these concerns are usually not integrated into people’s worldviews about the path to AGI.[1]
I think this is a serious mistake. The economics of AI labs is shaped by both the supply of and the demand for their services. How good the technology gets matters for understanding the demand side, but the brutal part of the economics is on the supply side: AI labs have almost no moat. As I discuss below, this can severely constrain AI labs’ profits even if they are developing incredible technology. In this article, I want to review why running an AI lab is brutal economics and how I think this will change things on the path to AGI.
Review of the Economics of AI Labs
Running an AI lab is brutal economics because you are trying to invent better AI with little to no moat – that is, no effective way to ensure only you can use your discoveries via patents, trade secrets, network effects, etc. Look at how competitive the industry is now: the best model went from OpenAI’s o1 to Claude Sonnet 3.5 to Grok 3 to Claude Sonnet 3.7 to Gemini 2.5 Pro, and most sophisticated players switched between models with one-line changes in their .env file. Why is that such a problem? It means that when an AI lab makes a breakthrough, it can extract monopoly profits to recoup its R&D costs for only a short period, until a fast-follower lab invents similar technology and undercuts its prices. This undercutting continues until prices fall to inference costs.[2] We saw this dynamic with Deepseek R1 matching OpenAI’s o1 after just 1-2 months, at a massively cheaper price.
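To make the switching-cost point concrete, here is a minimal sketch of what “one-line changes in their .env file” looks like in practice, assuming the provider exposes an OpenAI-compatible endpoint (the environment variable names and model strings below are illustrative, not anyone’s actual config):

```python
import os
from openai import OpenAI

# The entire provider choice lives in .env. Switching frontier labs means
# editing these values, e.g. pointing at a different compatible endpoint:
#   MODEL_BASE_URL=https://api.deepseek.com    MODEL_NAME=deepseek-reasoner
client = OpenAI(
    base_url=os.environ["MODEL_BASE_URL"],
    api_key=os.environ["MODEL_API_KEY"],
)

response = client.chat.completions.create(
    model=os.environ["MODEL_NAME"],
    messages=[{"role": "user", "content": "Summarize this design doc."}],
)
print(response.choices[0].message.content)
```

When the application layer looks like this, there is essentially zero lock-in: the moment a rival lab undercuts on price or quality, customers can defect in an afternoon.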

This puts major AI labs in a tough position. Either LLMs mature, or they scale indefinitely. If they mature, then any lead will evaporate and fast followers will compete prices down, putting AI labs in the red given their high R&D costs. If LLMs scale indefinitely, then labs will need to keep re-investing huge amounts in compute and R&D, as they currently do, which keeps them perpetually in the “losing money as a fast-growing startup” mode they find themselves in right now. The problem is that fast-growing startups need to mature at some point and extract profits somehow, and without a moat it’s not clear how to do that.
The main retort to concerns about LLMs having no moat is that online search (Google/Bing/etc.) also has no moat, yet Google makes a ton of money. I think this claim is just wrong on the substance – search has a very clear quality moat, in that most people strongly prefer Google. Most economists think this quality moat comes from network effects: the more people use Google, the better its data is, and thus the better its search is.
I don’t think this is true in AI, at least not yet. We have seen multiple new AI labs (xAI and Deepseek) go from zero to arguably at or beyond OpenAI’s frontier. Network effects from data are fairly minimal when the only real inputs you need are internet-scale data, GPUs, and researchers – all of which anyone can buy.
Let’s take seriously the chance that LLMs will scale to AGI, something that can flexibly substitute for human labor. Does that change the brutal economics of the situation? No. The problem of having no moat is a supply-side problem, but LLMs being incredible just means demand will be high. The price war dynamic we analyzed above can happen with something that has incredibly high demand. In fact, that is sort of the usual case in economics. Think about basic goods like water, food, chairs, pencils, etc. There is incredible demand for these goods but no company makes a killing off them because prices are competed down to roughly the cost to provide the good.
Just because human labor is paid $60-70 trillion in wages a year does not mean that AGI labs will capture even a tiny percentage of that amount from automating labor if supply is unrestricted. In fact, in the simple economic model we considered above with price undercutting, the equilibrium is AI firms capturing exactly 0% of the surplus!
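To spell out where the “exactly 0%” comes from: the undercutting story above is just textbook Bertrand price competition. A minimal sketch, assuming two labs selling interchangeable models at the same marginal (inference) cost c:

```latex
% Each lab i posts a price p_i; buyers go to the cheaper lab.
\pi_i(p_i, p_j) =
\begin{cases}
  (p_i - c)\,D(p_i)              & \text{if } p_i < p_j \\
  \tfrac{1}{2}(p_i - c)\,D(p_i)  & \text{if } p_i = p_j \\
  0                              & \text{if } p_i > p_j
\end{cases}
% Any price above c is profitably undercut by p_i - \epsilon, so the
% unique equilibrium is p_1^* = p_2^* = c, giving \pi_1 = \pi_2 = 0.
% Note the demand curve D never enters the conclusion: however enormous
% demand is, the labs' equilibrium profit is zero.
```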
We can even see this phenomenon now – the SWE-Lancer benchmark evaluates over 1,400 freelance software engineering tasks from Upwork, collectively worth $1 million in freelancer payouts. The top-performing model, Claude 3.5 Sonnet, completes tasks worth approximately $400,000 – yet it costs consumers only a few dollars to have Claude complete these tasks. Even enormous consumer willingness to pay for AGI does not mean that much of the surplus goes back to the companies building AI.
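Rough per-task arithmetic (back-of-the-envelope, using only the figures above) makes the split vivid:

```latex
% ~\$400{,}000 of task value delivered, sold at roughly API cost:
\frac{\text{price paid to the lab}}{\text{value delivered}}
  \approx \frac{\text{a few dollars per task}}{\text{hundreds of dollars per task}}
  \ll 1
% i.e., the overwhelming majority of the surplus accrues to the buyer.
```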
Implications for the Path to AGI
1. The Premium on Novelty and Speed Is Very High
When an AI lab makes a breakthrough, there is an incredibly short window in which it is a monopolist on the more advanced capability. During that window it can charge very high margins and recoup its R&D costs and more. Gaining a lead, and maintaining it for as long as possible, is therefore perhaps the core variable determining how much profit an AI lab can make, and we should expect firms to put a high premium on moving as fast as possible. From an AI safety perspective, this is perhaps quite undesirable.
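One way to formalize this (a stylized model of my own, not from any cited source): a lab’s profit from a breakthrough is roughly

```latex
\Pi \;\approx\; \underbrace{(p_m - c)\,Q}_{\text{monopoly margin } \times \text{ volume}}
      \cdot \underbrace{T_{\text{lead}}}_{\text{months of lead}}
      \;-\; \underbrace{F}_{\text{R\&D cost}}
% where p_m is the monopoly price the lab can charge while alone at the
% frontier, c is inference cost, Q is monthly volume, and T_lead is the
% time until a fast follower catches up and pushes the price back toward
% c. F is sunk either way, so extending T_lead is the main lever a lab
% controls - hence the race.
```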
A related point is that AI labs taking very different paths have the chance to extract monopoly profits for much longer. For example, SSI, the new AI lab run by Ilya Sutskever, is supposedly “climbing a different mountain” than other AI labs. If this strategy works and they are able to keep trade secrets, they would be able to extract monopoly profits for far longer than the other labs, who are all roughly pursuing the same strategy (scaling pre-training plus RL post-training).
2. Higher Economic Incentives to Automate AI R&D
Right now, there is a big debate about whether AGI will first self-improve by automating AI research or first substitute for labor in the general economy. Those arguing for general substitution of labor often appeal to the very large economic benefits of automating labor, in terms of wages saved.
However, taking into account the supply-side features of AI development flips the sign of this argument. Just because demand for automating labor is very high does not mean AI labs will earn a high return from it. As mentioned before, because supply is unrestricted, labs will see prices competed down and almost all the surplus will go to the demand side, strongly dampening the incentive for broad automation.
In fact, the economics of AI labs will, if anything, push towards automating AI R&D. AI R&D is unique because AI labs sit on both the supply side (they sell AI) and the demand side (they buy research labor for building AI). So even if price wars push the surplus to the demand side, AI labs still capture it, because they are the demand side for AI R&D. AI labs therefore have a unique incentive to automate AI R&D before economy-wide automation.[3]
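A stylized way to see the asymmetry (my own sketch, with the same caveats as above): suppose automation cuts the cost of some activity from w to w′.

```latex
% Economy-wide automation: price competition passes the saving through
% to customers, so the lab's equilibrium profit is unchanged:
\Delta\Pi_{\text{economy-wide}} \approx 0
% Automating the lab's own R&D: the lab is the customer, so the saving
% lands directly on its own cost line and is never competed away:
\Delta\Pi_{\text{AI R\&D}} \approx F - F' > 0
% where F and F' are the lab's R&D costs before and after automation.
```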
3. We Might See Major Bankruptcies of AI Labs, Even After AGI
Right now AI firms are bringing in billions of dollars in investment. Because investors treat them as startups in high-growth mode, they can tolerate large losses in the short term. But as time goes on, if AI labs still struggle to turn a profit in 3-4 years, I think we may see major bankruptcies of AI labs such as OpenAI or Anthropic. Again, this could happen even if Anthropic or OpenAI have something that looks like AGI. After that, you could imagine someone like Google or Meta buying these labs and incorporating them, which would obviously be a major shakeup in the race for AGI.
4. There May Be Major Changes in Pricing Structure
Right now, there are a wide variety of models, but all sell with a similar pricing structure – some constant price per token. This pricing structure leads to the brutal economics discussed above, where price wars eventually destroy everyone’s margins. Perhaps, after a while, AI labs will try alternative pricing structures, like building advertising into model responses.[4] I’m not sure this will work – if users dislike ads enough, firms will end up in the same brutal economics, competing not on price but on how few ads they show you. But it might work, and I think those taking AGI seriously should consider whether these ad-based models should be encouraged or discouraged.[5] If ads are undesirable, now might be a particularly high-leverage time to pass laws banning them, since such laws might face little resistance right now.
Conclusion
AGI is strikingly plausible in the next few years, but will AI labs be the primary beneficiaries? I’m not so sure, and I’ve tried to argue why the brutal economics of AI labs won’t change even after AGI.
Footnotes
[1] It seems the people who are bearish about the economics tend to also be bearish about the technology (cf. Gary Marcus). I want to stake out the position that is bearish about the economics but bullish about the technology.
[2] Prices could also partially reflect fixed costs, but this makes the situation even worse for the leading lab, because the fast follower has lower fixed costs – they don’t need to spend as much on R&D.
[3] At a technical level, these results show that the economics of AI labs is not well approximated by a social planner because of the externalities involved in AI R&D. These problems cannot be fixed by including a wedge parameter because the competitive dynamics introduce differential incentives for different sectors of automation, rather than a uniform slowdown. We need more microeconomic models of AI labs to complement the macro models!
[4] Or alternatively, model providers will try to capture other parts of the stack.
[5] For what it’s worth, my intuition is that having AI agents whose minds are modified to like certain products would be pretty dystopian, but I need to think about this more.