The API War

In early March, Cursor shipped Composer 2 — a proprietary coding model (built on an open weight base model) which beats Claude Opus 4.6 on key benchmarks at one-tenth the price. Three years ago, Cursor was a VS Code fork running entirely on OpenAI’s API.

Cursor’s shift from a dependent customer to a genuine competitor is a microcosm of the most important strategic question on the internet: when should a company expose its capabilities through an API, and when should it keep them closed?

We’ve developed a framework for answering this question. It comes down to two questions. First: does opening your API erode your moat? Second: if it does, can you find a moat elsewhere?

Any time a company externalizes its IP through an API, it risks losing at least some of its moat to demand aggregation. Put simply, a competitor can use the IP to bootstrap its own product, and once it has sufficiently aggregated demand, vertically integrate by ripping out the API. This is what Netflix did: it first licensed shows and movies, and then, once it had a customer base large enough to amortize a large fixed cost over, it produced House of Cards.

But the truly dangerous case is when the output of the API can be used as an input that directly compounds the quality of a competing product. This is a double whammy: the competitor can use the API both to bootstrap and aggregate demand and to improve its own production process. This is what happens in AI. While OpenAI and Anthropic contractually forbid companies using their APIs from training competing models on their outputs, they cannot stop a company like Cursor from using frontier models to bootstrap the workflows needed to collect proprietary product data and improve its own model over time.

That appears to be what happened with Composer 2. Cursor used foundation models like Opus and GPT to aggregate enough demand to reach roughly $2 billion in annualized revenue, then used an open-weight base model, Kimi K2.5, plus continued pretraining and RL on data gathered from its IDE, to build a frontier-level coding model.

When this output/input dynamic is present, the API provider has only two options: close the API to stop the bleeding, or keep it open and find a complement that exerts its own moat.

Twitter is a great example of a company that followed the first path. It famously started with a generous, freely accessible API — at its peak, developers could pull 500,000 tweets per month at no cost. But Twitter shut most of that off because it leaked its moat: its proprietary social graph. Today, for practical purposes, the API is closed: access is tightly rate-limited, expensive at meaningful scale, and structured so that building a serious product on top of it requires tightly controlled B2B integrations.

The second path is to keep the API open and complement it with an alternative source of power. No industry has a better understanding of how to pull this off than crypto, where APIs are forcibly open.

The lending protocol Morpho is a representative example. The protocol was born out of tapping into the open APIs of Aave and Compound and building an optimizer product on top. It then used the output of those protocols — the liquidity it aggregated on top of them — as the input for bootstrapping a platform of its own. In this respect, Cursor and Morpho are identical in how they used APIs to build competing products.

The really interesting dynamic, however, is what Morpho did next. Because Morpho is itself an open API, it needed a moat to compensate for the lack of switching costs. So it made the protocol as aggregatable as possible and instead built a moat through other means — like Lindyness, and network effects from deep liquidity drawn from diverse lenders and borrowers.

Running this framework forward, we can make a prediction: foundation model companies will likely choose path one over time and increasingly restrict API access to their most frontier models.

To believe in path two, you’d have to think that models like Opus and GPT are already so powerful and trusted that they can remain open, empower competing models to use their output as input, and that third parties still won’t switch away. This could mean model companies bank on other sources of power: Lindyness (if they believe their users don’t want to deal with trusting new models), developer network effects (if they believe their users will build sticky ecosystems that leverage the openness of their APIs), or scale economies (if they believe maximizing API calls lets them amortize the fixed costs of training a frontier model).

But the evidence so far points the other way. There’s still a strong “flavor of the month” dynamic in which customers willingly move to whichever model is currently best — we saw it again in the recent surge in Claude usage after the release of Opus 4.5. There is not yet much evidence of developer network effects at the model layer — APIs are becoming more legible to one another, not less, and the surrounding tooling ecosystem is actively building against lock-in, making switching providers easy by design. And scale economies in training have so far been an insufficient moat, since distillation has allowed competitors to train models with similar performance much more cheaply. Without an alternative source of power, foundation AI companies will likely leave only limited access for hobbyist use and concentrate on B2B deployments with tightly controlled usage and monitoring. Or, in WarGames speak, the winning move will increasingly be not to play.

This is a concerning outcome, because the current explosion in consumer AI products has been built on top of these model providers. It also opens the door to a counterposition: if leading labs increasingly restrict access, there’s value to be picked up by a competitor that chooses a weaker moat but offers strong guarantees of continued openness.

Thank you to sysls (openforage) and Alexander Long (Pluralis) for their thoughtful feedback on this article.

Disclaimer
All information contained herein is for general information purposes only. It does not constitute investment advice or a recommendation or solicitation to buy or sell any investment and should not be used in the evaluation of the merits of making any investment decision. It should not be relied upon for accounting, legal or tax advice or investment recommendations. You should consult your own advisers as to legal, business, tax, and other related matters concerning any investment. None of the opinions or positions provided herein are intended to be treated as legal advice or to create an attorney-client relationship. Certain information contained in here has been obtained from third-party sources, including from portfolio companies of funds managed by Variant. While taken from sources believed to be reliable, Variant has not independently verified such information. Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by Variant, and there can be no assurance that the investments will be profitable or that other investments made in the future will have similar characteristics or results. A list of investments made by funds managed by Variant (excluding investments for which the issuer has not provided permission for Variant to disclose publicly as well as unannounced investments in publicly traded digital assets) is available at https://variant.fund/portfolio. Variant makes no representations about the enduring accuracy of the information or its appropriateness for a given situation. This post reflects the current opinions of the authors and is not made on behalf of Variant or its Clients and does not necessarily reflect the opinions of Variant, its General Partners, its affiliates, advisors or individuals associated with Variant. The opinions reflected herein are subject to change without being updated. 
All liability with respect to actions taken or not taken based on the contents of the information contained herein are hereby expressly disclaimed. The content of this post is provided "as is;" no representations are made that the content is error-free.