OpenAI: $3.6B revenue - $2.7B of it from personal subscriptions (growing 285% per year) - the rest is API.
This would mean the latest OpenAI valuation of $156B is a P/E of about 43. For a company growing 285% per year ... that actually doesn't sound horrible. In fact, that's pretty good.
Example: a week ago the natural conversation feature sucked. Now it doesn’t. That’s huge for creating positive D0 engagement.
As AI gets more tightly integrated into IDEs and office suites, stand-alone products like ChatGPT become less relevant, and the underlying provider becomes more of a replaceable commodity. Would anyone really notice or care if GitHub Copilot switched overnight from an OpenAI model to a Microsoft/Anthropic/Meta one?
You divide the price by earnings.
OpenAI has negative earnings.
It doesn't have a P/E.
And how do you know that?
> they are expected to lose about $5B this year on about $3.7B of revenue
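Given those figures, the ratio upthread is price-to-sales, not P/E. A quick sanity check (all numbers taken from this thread; the original ~43 figure used $3.6B of revenue rather than the $3.7B quoted here):

```python
# Sanity-check the ratios using figures quoted in this thread (all estimates).
valuation_b = 156.0   # latest reported valuation, $B
revenue_b = 3.7       # ~2024 revenue, $B
earnings_b = -5.0     # expected 2024 earnings, $B (a loss)

price_to_sales = valuation_b / revenue_b
print(f"P/S ≈ {price_to_sales:.1f}")  # ≈ 42

# P/E is undefined with negative earnings: there is no positive E to divide by.
pe = valuation_b / earnings_b if earnings_b > 0 else None
print(f"P/E: {pe}")
```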
Also, isn't Microsoft getting paid back first - 75% of the profits up to their ~$13 billion investment, and then 49% of profits up to another ~$100 billion?
That is roughly true, but it only matters if Microsoft can influence OpenAI policy and chooses violence^1 (taking profit over investment in technology). Otherwise I would expect OpenAI to keep reinvesting all of their income into more AI. Meanwhile, OpenAI is hellbent on reaching effective AGI / pushing towards the singularity; as long as they keep making progress and have cheap access to capital, profitability is not required. So my personal conclusion is to invest in the people selling AI shovels, because the madness will continue.
Note 1: in my humble opinion, Microsoft choosing violence is highly unlikely with Nadella in charge.
That being said, $3.6B is quite a lot of revenue, considering it's mostly from a $20/mo subscription model.
IMHO the bigger risk is the "AI" ending up not doing that much after all, or their R&D not paying off (which is a real risk: their Sora is nowhere to be found, while Chinese AI companies already have alternatives in production. Maybe OpenAI isn't that far ahead, and it's the language barrier that gives that impression? I don't know - I don't speak Chinese - but things are happening outside the Anglosphere).
Emphasis added.
You spend money on R&D.
The expectation: You make money.
The AI company reality: The value of your proprietary models drops to LITERALLY ZERO after they are superseded by a new generation of models, or worse, open inference models.
The risk isn't that the models don't work.
The risk is that it increasingly appears the 'moat' these companies have is only as good as the amount of money they continue to pour into it. And when the moat is gone, so is the company.
I can also run a business where I create a moat by pouring money into it (remember MoviePass?), e.g. literally paying people to use your service; but usually you need some kind of plan that does not involve magical fairies (e.g. AGI), at the very least in your pitch deck, to convince people you have a path from scale to mega profit.
https://x.com/pika_labs/status/1841143349576941863
Where's Sora?
The only way the valuation is put "in perspective" is if you estimate that we're going to have a race of heavy investment now to get to a point where the models are "good enough" and you no longer need to invest in training new ones, at which point you switch from running an immensely unprofitable business to an immensely profitable one.
The issue is that nobody knows when that will be the case (if ever), and it currently looks like whichever player is best equipped with capital to fund that race will have a winner-takes-all future ahead, so you just place all your chips and hope for the best.
Some people prefer to delude themselves in order to not admit this.
Just yesterday [1], I argued ChatGPT was a strong brand just to be downvoted to the bottom. Lol. Imagine being so blatantly wrong at something you are supposedly a specialist at.
Turns out it's only a 3BnUSD / year brand, two years after its inception. Also, who is "Claude"? [2].
In terms of tech, nothing even comes close to "GPT-4o mini" for the same price/performance.
OpenAI will continue to dominate the market for the next decade, at least.
1: https://news.ycombinator.com/item?id=41723208
2: https://trends.google.com/trends/explore?date=now%201-d&geo=...
Microsoft is living with the possibility of OpenAI cutting them off at any time of their choosing, as well as not being in control of a technology which is becoming increasingly important, and they are feeling it. Microsoft is trying to build their own SOTA model internally, and there is every reason to expect they will succeed - they have the GPUs, money, desire, paranoia and talent required to do it, and as we have seen from many players there is no moat.
So, what happens to OpenAI when (not if) Microsoft ends their relationship? How do OpenAI sell their product other than directly? To what extent does it cut them off from enterprise customers? Can they financially handle building their own $100B datacenters if they are forced to?
I don't know, but @sama should know (and most likely he does).
I'm not his fan. But am a fan of reality and that's what it is.
It's also not like "some guy" suddenly has to build $100B datacenters and whoops that's an issue. This is a 150BnUSD company, with millions of users and a brand that's recognized worldwide, they have plenty of options to choose from.
I'm sure they could raise $100B if/when they need to, as long as the growth/scaling story stays intact, but $100B is still a lot. They just raised $6B, which, alongside a similar amount raised by x.ai, is the largest raise by a startup afaik. Buying 100K GPUs at the moment would be a challenge too, due to supply.
In the meantime, per the Dwarkesh interview with Dylan of SemiAnalysis.com, Microsoft is in the process of fiber-connecting multiple datacenters into a massive meta-cluster, and this is what OpenAI would lose access to if the relationship with Microsoft sours. Amazon (partnered with Anthropic) and Meta, even x.ai, already have massive clusters, so OpenAI could find themselves temporarily in the GPU-poor club if they upset Microsoft after Microsoft makes them expendable (or maybe at that point it's "will" rather than "could").
There is a partnership that benefits both parties, and the legal agreement is most certainly more complicated than you're describing.
I'd agree being the current best, having a lot of revenue, and being the popular origin of generic terms like "GPT" are indeed great examples of being dominant though. Having "a strong moat" means having reasons 5 other companies won't be able to do the same thing in the next 5-10 years and overtake them. History has shown, plenty of times, just being the big brand OG player at the start doesn't provide a big moat in and of itself. If that were the case you'd be talking about how some other company like Google is the dominant player in all things AI right now.
That’s $0.5 trillion revenue rate
Come on, man.
I also pay for better AI. My time - and probably yours - saved by using superior tooling is worth far more than the meagre few dollars spent each month on a subscription.
You're stepping over dollars to pick up dimes.
Because, tbf, the free version is really good. I feel like a lot of AI companies are still in the stage where they're trying to gain massive marketshare and convince people of its value. The real test is going to be when they pull the plug to force people to transition to premium.
The math comes down to the fact that they raised enough money to stay in business until they have to raise even more money.
And 180 million users might seem like a lot in two years, but with the free media coverage they have... even if their free-to-premium conversion rate is pretty dismal, the "have you heard of them" to "active free user" rate is also very bad!
1. Be extremely generous to customers. Give away incredible tech at bargain basement prices. Pamper your employees with extraordinary benefits. Everyone from all angles loves your business.
2. People integrate your product into their lives. They become dependent on it because the value is so good. They tell everyone else in their lives about it. (2.5 IPO. This is where you make an exit, where those people who "love" your product that everyone else loves want to get in on the investment. Retail runs to buy up your shares. Also could be an acquisition.)
3. Close the hand. Competition has been ruined and people are addicted. Now it's time to raise prices and lower product quality. Initial investors are already out, so retail will carry the blame for the enshittification of your product.
* Text generation is really bad if said text is more niche than typical SEO stuff like "give me 300 words about soccer balls". Anything academic is usually wrong (i.e. full of hallucinations) or lacks/hallucinates proper sources. If I have to check everything anyway I can just use a search engine.
* Image generation is really bad if you don't just want deviantart-like content. Just yesterday I tried generating ideas for visualisations of a few topics (with quite a few different prompt approaches) and almost all of them were unusable.
It is good at summing up longform content, but then again so am I - and I have to read it anyway to confirm there are no hallucinations...
Obviously many people use it for coding (which appears to be the low hanging fruit because code is so heavily formalized), but I can imagine that market is almost saturated by now.
But this is not their main business model.
Thomson Reuters, Intuit, Palantir, Teladoc, Twilio, ServiceNow, Axon, WPP. All stand to benefit from sophisticated language and vision processing.
I mean, start at step 1: can you name 10 companies related to it?
I disagree, based on my experience
I would have assumed there's more profit in the subscriptions than the API. $20 is roughly 2 million output tokens a month through the API. Given the 50% margin they claim, each user would have to be generating 3-4 million output tokens a month for OpenAI to be at a loss. Is that likely? Seems like a lot of words to me.
Pretty easy when you start having it search websites, documents, code references etc. Before caching, lots of people used up $20 in a day or 2.
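The break-even math a couple of comments up can be sketched explicitly. The prices here are assumptions (roughly GPT-4o-era API list rates); the point is just where a $20/mo subscriber starts costing more than they pay:

```python
# Rough break-even for a $20/mo subscription vs. API-style token costs.
# The per-token price and the 50% margin are assumptions from the thread.
subscription = 20.00          # $/month
api_price_per_m = 10.00       # assumed $ per 1M output tokens (API list price)
margin = 0.50                 # claimed gross margin on API pricing

cost_per_m = api_price_per_m * (1 - margin)      # inferred serving cost per 1M tokens
breakeven_tokens_m = subscription / cost_per_m   # monthly tokens before OpenAI loses money

print(f"Serving cost ≈ ${cost_per_m:.2f} per 1M output tokens")
print(f"Break-even ≈ {breakeven_tokens_m:.0f}M output tokens/month")  # 4M
```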
did anything good come from wework?
More coworking spaces.
Adam and Rebekah Neumann got exposed for being the frauds that they are.
I'm as cynical as it gets on this forum, as evidenced by my comment history.
But you're comparing these AI companies to WeWork? Really?
WeWork was a real estate company operating in a historically favourable environment (0% interest rates) pretending to be a tech company. They literally rented office space. What do they have in common?
I notice there's a new generation of "grey beard" programmers constantly talking about how "useless" AI is for programming, and how they can do everything faster. Meanwhile, there are tons of us out there who are paying $20, $30, $50 per month and upwards for these tools as they are, and wouldn't want to go without. Ever. And we have no idea where it's going to go. Maybe you're missing something?
Comparing to airlines was fine, but you take issue with a comparison to real-estate? WeWork was also beloved by their customers and had leadership who were a bit off the rails.
The fact that switching models is just changing one string in AWS Bedrock means that nobody is going to be able to charge a significant premium.
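To make the "one string" point concrete, here's a minimal sketch. The request shape follows Bedrock's Converse-style API, but the model IDs are illustrative placeholders and no network call is made:

```python
# Sketch: swapping providers in a Bedrock-style request is a one-string change.
# Model IDs below are illustrative; everything else in the request is identical.
def build_request(model_id: str, prompt: str) -> dict:
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": 512},
    }

a = build_request("anthropic.claude-3-5-sonnet-20240620-v1:0", "Summarize this doc.")
b = build_request("meta.llama3-70b-instruct-v1:0", "Summarize this doc.")

# The two requests differ only in the model identifier:
diff = {k for k in a if a[k] != b[k]}
print(diff)  # {'modelId'}
```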
I'd rather argue that every hyped topic is polarizing, and your argument can be adjusted to basically every hugely hyped topic.
I'm paying for Claude, which I find super helpful (although mostly for side-projecty stuff rather than dayjob). I'd definitely pay double what it costs today, particularly if I went back to shorter-term environments where I think it shines.
If you can't hook it up to your codebase/you write in a language where it's not great (it doesn't seem good at Elisp at all, at all) then I could see how people might not find the value.
Nonetheless, despite finding these tools useful, I too am sceptical about whether or not there's a valuable business there.
For context, I said this about Uber, WeWork and a bunch of other startups that never really monetised. Note that I also said that about Facebook, where I was completely wrong.
If you are paying attention, this is a pretty terrifying prospect if you are building a $200k+/yr life on the basis that SWE will pay like this forever. A machine is coming along that genuinely might be 80% of your skill set in a few years, which makes it very difficult to negotiate generous packages with the remaining 20%.
Something similar happened when COBOL came out. Same with website builders. The big difference between most industries versus software is we aren't even close to satisfying the world's desire for software yet, so increases in productivity just gives us bigger leverage.
Is this supposed to be scary? This scenario is absolute stonks if you're a developer worth your salt. A machine makes me 5 times more productive, and I don't even have to commoditize my complement because they're taking care of that themselves?
Personally I think its cyclical as software is such a key component of communication and automation - and so we'll see future growth periods but probably not to the same extent as the last decade where seemingly every undergrad was compelled to study Computer Science.
The rate of change could make the short term bumpy as companies try to play around with less dev work. Eventually though competition will push companies to raise their productivity to the new baseline (programmers + AI).
One thing that is a danger is if devs ignore what's happening in AI. I remember when Google first came out having to learn the art of querying Google to get what I was looking for. AI/LLMs looks to be the same - ignore learning how to leverage them at your own peril.
That does happen. Obviously no programmers employed at those companies to talk about it on HN though.
Hire a team to build a project, when it's finally satisfied most of your requirements, you progressively cut staff until only Jim is left, and Jim spends the next two decades maintaining the system, growing out his hair and beard, piling kludge on top of kludge, and drinking heavily until retirement.
The HN bubble, focused so intently on BigTech and FAANG, is woefully unaware of how things work basically everywhere else.
Pre-musk, links to random tweets seemed to load almost instantly. Now?
Last time I tried, took 48 seconds to show a "please login or create account" message.
My Performa 5200 in the 1990s booted up faster than that.
If he'd kept things as is, without fiddling, that would've been an improvement over what he actually did.
Still, I'm glad he reduced my compulsive use of the service.
I've always found the time-consuming part of this job to be understanding the context of the change rather than making the change itself. Essentially, trying to understand the existing code and business requirements and how they all fit together. I can definitely see the potential for AI to help make this easier but I haven't found the current tooling to be any good at this.
It's the first tool that came to hand: I'm sure there are better historical comparisons if I bothered to look. For professional reasons, I was well acquainted with WeWork when they were a big deal. It was clear to many of us in advance of their collapse that WeWork was heading for a hard crash based on their lease commitments and other public data. In this case, public data, such as for OpenAI and Anthropic, strongly suggests that, like WeWork, the economics of the businesses don't make sense. There are some fundamentals that no amount of innovation can overcome. Committing to leases you can't conceivably cover is one of them. Spending $2.35 for every $1 of revenue is clearly another, absent some breakthrough.
WeWork is not a perfect example. But if OpenAI flames out, it will be mentioned in the same breath as WeWork. The reasons are not the same, but they do sort of rhyme.
Commodities can be massive businesses with competitive moats. Oil is a commodity; BP and Exxon do just fine, financially speaking.
In 2024, they are projected to bring in $3.4 billion in revenue and lose $5 billion dollars.
They just had a massive fundraising round for $6.6 billion which at the current costs and growth is what, 6-8 months of spend?
If they bring in $11 billion in 2025, I expect them to lose at least $18 billion dollars. Good luck!
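The runway implied by those figures can be put in rough numbers (all inputs are estimates from this thread; growth in spend would shorten the gross figure toward the 6-8 months suggested above):

```python
# Back-of-envelope runway from the figures in this thread (all estimates, $B).
revenue_b = 3.4      # projected 2024 revenue
loss_b = 5.0         # projected 2024 loss
raise_b = 6.6        # latest fundraising round

annual_spend_b = revenue_b + loss_b                    # gross spend ≈ 8.4
months_of_spend = raise_b / (annual_spend_b / 12)      # raise vs. gross spend
months_of_burn = raise_b / (loss_b / 12)               # raise vs. net burn

print(f"Gross spend ≈ ${annual_spend_b:.1f}B/yr")
print(f"Raise covers ≈ {months_of_spend:.0f} months of spend, "
      f"or ≈ {months_of_burn:.0f} months of net burn")
```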
Greenlighting training a new foundation model is very expensive, but is also a human decision that can be postponed based on available capital.
Didn't Sam ask TSMC to spend $7 trillion on new fabs? By comparison, $18 billion/yr spend seems very small.
There are three likely outcomes:
1. AI plateaus. OpenAI slashes the R&D budget to become profitable with revenue in double digit billions and profit in single digit billions. Valuation likely similar to today's.
2. AI doesn't plateau. OpenAI makes a killing. (Hopefully metaphorically, not literally)
3. Scenario 1 or 2, but it's a company other than OpenAI that wins.
Couple that with a lack of pricing power thanks to all the other similar products in the market.
I'm running Phi3, Llama 3.2 and Mistral Nemo locally and they're decent enough for many things.
The Information estimates that OpenAI is spending $4 billion just to run ChatGPT and their APIs, along with $3 billion in training and $1.5 billion in salaries.
https://www.axios.com/2024/10/03/openai-investors-profit-mon...
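Those cost estimates line up with the ~$5B loss quoted elsewhere in the thread; a quick reconciliation (all figures are third-party estimates, not audited numbers):

```python
# Reconciling the thread's cost estimates with its loss estimate (all $B).
inference = 4.0    # estimated cost of running ChatGPT + the APIs
training = 3.0     # estimated training spend
salaries = 1.5     # estimated payroll
revenue = 3.7      # estimated 2024 revenue

total_cost = inference + training + salaries   # 8.5
net = round(revenue - total_cost, 1)           # ≈ -4.8, close to the ~$5B loss

print(f"Total cost ≈ ${total_cost}B, net ≈ ${net}B")
```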
I don't see how a chatbot meets any of those criteria
> I don't see how a chatbot meets any of those criteria
Calling these things "a chatbot" is likely limiting your vision: some of the stuff people build by fine-tuning LLMs, such as the ones OpenAI offer, use them to generate database queries matching their customers' database schemas.
"Chatbot" is simply a convenient UI for an LLM in the same way that a web browser is a conventional UI for email. (And in this analogy, anyone calling an LLM "autocomplete on steroids" would be making the same mistake as someone saying "Wikipedia is just TCP on steroids").
I expect LLMs to continue to be extremely generic in the same way that web browsers are (market history shows periods where one browser dominates the market despite open source) or like spreadsheets (where Microsoft Office is, or was last I looked, dominant despite free offerings being good enough for most people).
Society needs intelligence. We started using mechanical aids because it became impractical to perform census work by hand as the population increased, AI is a continuation of this process: we need it, it's a commodity, there may be a market opportunity despite free and/or open source competition, and (like Netscape, like Internet Explorer) there's no guarantee that the winner in one year will still be leading the next year or even existing a decade later.
They invested in exploration and now they control those oilfields. They built refineries and have the systems and experienced people to operate them. Meta can't release a LLamaOilfieldAndRefinery which I can operate by just spending a few thousand on GPUs.
People here like to slag on google for being late to the chatbot party, but they've been using ML the entire time and integrating it into various products for ages. I kind of wonder if the only reason they were "behind" was the lawyers were less brazen about the copyright situation.
I agree with the commodity take, and I personally bet on Google eating everybody else's lunch eventually, because there's a lot of other business behind them and they can afford to undercut competitors. They aren't a one trick pony.
The market demands models that don't fail constantly with HAL-like "Sorry, Dave, I cannot do that for you" moralizing responses.
An API that is used for mission-critical purposes and that randomly fails with "HTTP/1.1 406 You Are A Terrorist And/Or Hitler" is a BUG, and the market will coalesce around models that don't have this bug.
[1] "[...] with $1 billion coming from other businesses using its technology." https://www.nytimes.com/2024/09/27/technology/openai-chatgpt...
[2] https://www.bloomberg.com/news/articles/2024-06-12/openai-do...
Which assumes that Google will stand still, instead of cannibalizing its own business model.
Employers will see too much delegation to flawed models - codebases will swell with ai-slop that eventually seizes the business. Skills will atrophy. Internal comms will be similarly impacted, with flooding of generic memos and strategy docs from people pretending to work dripping with that RLHF sheen.
This is such an interesting new industry: so many comments here about the race to the bottom / commodification, and I tend to think that way too, but in practice I'm very often like "Meh, ChatGPT is bad at this, I'll ask Claude", or vice versa. We may actually be entering a world where very large frontier models have different personalities and strengths. I don't think it's easy to confidently predict where all this goes.
Why is Google still printing money on Search 20 years later? Surely at this point the know-how to build a search engine at scale is out there. It is a 2-sided marketplace, first they captured people's habits, then advertiser dollars. It could be that in the end LLM usage will also be ad-driven, in which case this will be captured by OpenAI most likely, similar to the Google case.
Another case: why is Outlook the market leader for corporate email, even though email protocols are open standards, there is no shortage of open source alternatives, etc.? The reason is bundling, of course, and various other IT considerations, such as training/certs/control/security. Imo we don't really know yet how the LLM space will play out, or what will enable (or not) OpenAI to win beyond the first years.
Of course there _were_ cases where the moat wasn't there or quickly disappeared, e.g. Netscape's business melted away as soon as Microsoft bundled Internet Explorer with Windows.
Personally I think OpenAI still has a good 10x growth ahead (eg. 100M paid users for ChatGPT at the $20/mo-ish pricepoint) if they just maintain the current lead on the rest of the pack, and probably the API income can similarly be scaled up. At the slow-moving Retail company I work at, all the execs have been talking about AI for the last 2 years, but we still don't have a single AI feature in production in any of our apps, so we're not yet contributing to OpenAI revenues. But we will soon, as will 1000s of other slow-moving BigCos.
I’d really like to see a “pay as you go” gateway for popular LLMs. As Bezos said: “your margin is my opportunity.”
However, they really are banking on the idea that people pay a bunch up front and use it fairly minimally. This lets the profit from light subscribers pay for the queries of free users. I have no idea where the pricing model will go in the future, but it wouldn’t surprise me if pricing models, rather than the AI’s actual ability, become the primary method of fighting for market share.
The investors behind OpenAI's historic $6.6B funding round https://news.ycombinator.com/item?id=41726370 - Today (2 comments)
OpenAI wants to build 5-gigawatt data centers, nobody could supply that power https://news.ycombinator.com/item?id=41726970 - Today (3 comments)
Why OpenAI burns through billions https://news.ycombinator.com/item?id=41729038 - Today (0 comments, informative article)
OpenAI's bankruptcy flames linger on as Apple wiggles out of $6.5B funding round https://news.ycombinator.com/item?id=41726224 - Today (0 comments, informative article)
OpenAI is now valued at $157B https://news.ycombinator.com/item?id=41727947 - Today (0 comments, informative article)