In December 2022, when OpenAI pitched to investors, it said it expected $200M in revenue in 2023, shooting up to $1B by 2024. At that time, OpenAI was being valued at $20B (as reported from a secondary share sale).
Today, that valuation is being reported at $29B, and daily visits to the OpenAI and ChatGPT websites have soared to >30M (Source: Gabriel Cortes/CNBC).
In this deep-dive, we analyze the revenue-making machinery of OpenAI in five sections:
For free users of Productify:
Looking at ChatGPT from a product lens
API usage pricing buckets for GPT versions
The rest of the case study sections are for paid users:
Business Model of OpenAI
How does OpenAI make money?
Mind-blowing OpenAI statistics
If you would like to read the full case study, please consider upgrading to a paid subscription:
If you’re a first-time visitor to Productify, you may want to check out the previous AI/GPT case study: ChatGPT use-cases for Product Managers
Looking at ChatGPT from a product lens
When I analyze ChatGPT, it functions more like a product demo and a way to gather human feedback for free (as they say: if you’re not paying for the product, you are the product).
ChatGPT runs on a language model architecture created by OpenAI called GPT (Generative Pre-trained Transformer). When ChatGPT was launched to the public, the model it used came from the GPT-3.5 series. If you have a ChatGPT Plus subscription, you get access to a model from the GPT-4 series.
GPT-4 allows for many more Jobs to be Done than the GPT-3 series, such as creating outputs people actually need: essays, images, art, and music. GPT-4 is also reported to have been trained with ~100 trillion parameters vs. 175 billion in the case of the GPT-3 series, though OpenAI has not confirmed that figure.
When it comes to critical thinking and problem-solving, GPT-3.5 scored only a 1 on the AP Calculus BC exam, whereas GPT-4 scored among the top 10% of test takers.
When it comes to language proficiency, GPT-4 outperforms GPT-3.5 in almost all relevant world languages; in English, for example, it reaches roughly 85% few-shot accuracy compared to ~70% for GPT-3.5.
The original GPT-3 models, released in 2020, set the maximum request size at 2,049 tokens. In GPT-3.5, this limit was increased to 4,096 tokens (roughly 3 pages of single-spaced English text). These models could only take text as input and produce text as output.
GPT-4 comes in two variants: GPT-4-8K has a context length of 8,192 tokens, while GPT-4-32K can process as much as 32,768 tokens, which is about 50 pages of text. GPT-4 can also take images as well as text as input (its output is still text).
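These limits matter in practice: requests that exceed the context window are rejected, and every token counts toward your bill. As a rough sketch (assuming OpenAI’s open-source tiktoken tokenizer; the helper name and sample prompt are illustrative), you can count tokens locally before sending anything to the API:

```python
# Count tokens locally with OpenAI's open-source "tiktoken" tokenizer to see
# whether a piece of text fits the 4,096 / 8,192 / 32,768-token context
# windows discussed above. (pip install tiktoken)
import tiktoken

def count_tokens(text: str, model: str = "gpt-4") -> int:
    """Return how many tokens `text` consumes under the given model's encoding."""
    encoding = tiktoken.encoding_for_model(model)
    return len(encoding.encode(text))

prompt = "Write a short essay on platform business models."
n = count_tokens(prompt)
print(f"{n} tokens")
print("Fits GPT-3.5 (4,096):", n <= 4_096)
print("Fits GPT-4-8K (8,192):", n <= 8_192)
print("Fits GPT-4-32K (32,768):", n <= 32_768)
```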
API usage pricing buckets for GPT versions
For developers and companies, OpenAI offers its APIs on a fee basis so that applications can be built on top of them. For such usage, OpenAI charges based on multiple usage tiers. Before we get into the tiers, here’s an example of the cost per token:
GPT-3 costs $0.0004 to $0.02 per 1K tokens
GPT-3.5-Turbo costs $0.002 per 1K tokens
GPT-4-8K (see previous section for explanation) costs $0.03 per 1K prompt tokens and $0.06 per 1K completion tokens
GPT-4-32K (see previous section for explanation) costs $0.06 per 1K prompt tokens and $0.12 per 1K completion tokens
The prime difference between prompt tokens and completion tokens is that prompt tokens are the words you send to GPT, whereas completion tokens are the words of the content the model generates for you. It’s also important to note that tokens are chunks of raw text, roughly 4 characters (about three quarters of an English word) each.
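If you call the API yourself, you don’t have to count these by hand: every response includes a usage object that breaks the bill down into prompt and completion tokens. Here is a minimal sketch, assuming the openai Python package’s ChatCompletion interface (the API key and prompt below are placeholders):

```python
# Every API response carries a "usage" object that splits the token count into
# prompt tokens (what you sent) and completion tokens (what the model generated),
# the two quantities the pricing buckets above are billed on.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Summarize OpenAI's business model in two sentences."}],
)

usage = response["usage"]
print("Prompt tokens:    ", usage["prompt_tokens"])      # what you sent in
print("Completion tokens:", usage["completion_tokens"])  # what the model generated
print("Total tokens:     ", usage["total_tokens"])
```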
While the costs above seem easy to understand, it is actually hard to know how much you will end up spending, because you cannot predict how many completion tokens the model will generate for you.
But if you still have to guess, here’s an estimate: if you’re processing 100K requests (averaging 1,500 prompt tokens and 500 completion tokens each), you would end up spending
$400 with GPT-3.5-Turbo [calculated as 100K*2000*(0.002/1000)]
$7,500 for GPT-4-8K and $15,000 with GPT-4-32K
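To make that estimate reproducible, here is a small back-of-the-envelope calculator (the rates are the per-1K-token prices quoted above; the function name is just for illustration):

```python
# Back-of-the-envelope spend estimate for the scenario above: 100K requests,
# averaging 1,500 prompt tokens and 500 completion tokens each.
# Rates are USD per 1K tokens as (prompt, completion), as quoted in this article.
PRICES_PER_1K = {
    "gpt-3.5-turbo": (0.002, 0.002),
    "gpt-4-8k": (0.03, 0.06),
    "gpt-4-32k": (0.06, 0.12),
}

def estimate_cost(model: str, requests: int, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimated spend in USD for `requests` calls with the given average token counts."""
    prompt_rate, completion_rate = PRICES_PER_1K[model]
    per_request = (prompt_tokens / 1000) * prompt_rate + (completion_tokens / 1000) * completion_rate
    return requests * per_request

for model in PRICES_PER_1K:
    print(f"{model}: ${estimate_cost(model, 100_000, 1_500, 500):,.0f}")
# gpt-3.5-turbo: $400
# gpt-4-8k: $7,500
# gpt-4-32k: $15,000
```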
Here’s an example of GPT-3 cost buckets:
Business Model of OpenAI
OpenAI has adopted a platform business model, with most of its focus on developers and businesses (more B2B than B2C). As a platform, it facilitates the interaction between its AI models on one side and, on the other, applications built by businesses as well as direct consumers. The better that interaction, the better the value for both sides.
Some of the world’s largest companies also operate under a platform business model. Apple, for instance, has created a platform around its hardware products (iPhone, iPad, Watch). It then layered various services, such as the App Store, Apple Pay, Arcade, iCloud, and many more, on top of that ecosystem. Each of those services then generates income via transaction fees, subscriptions, and so forth.
Most platforms, often in the name of privacy and security, have created closed ecosystems. As a result, they can act as gatekeepers and determine what is and what isn’t acceptable behavior.
Once OpenAI has amassed network effects, it will be extremely hard to disrupt, given that its APIs will be powering the world’s biggest businesses. This is also why regulators across the globe have grown increasingly wary of such platforms’ power and are working on laws to combat their invasive reach.
How does OpenAI make money?
There are three kinds of applications being built on top of OpenAI’s APIs: