By Fozzels Team

ChatGPT pricing (OpenAI)

Fozzels generates its texts with OpenAI's GPT-3 and GPT-4 models, accessed through their API (application programming interface). As a Fozzels customer, you link your own OpenAI account and pay OpenAI separately for the texts that are generated. Fozzels is the platform where you manage your text generations and automate the content writing for your online store.

OpenAI offers multiple language models, each with different capabilities and price points. Ada is the fastest model, while Davinci is the most powerful.

Prices shown in the table are per 1,000 tokens. You can think of tokens as pieces of words, where 1,000 tokens is about 750 words.

Model     Training (per 1,000 tokens)   Usage (per 1,000 tokens)
Ada       $0.0004                       $0.0016
Babbage   $0.0006                       $0.0024
Curie     $0.0030                       $0.0120
Davinci   $0.0300                       $0.1200
OpenAI ChatGPT pricing table as of 18 January 2023
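Because billing is strictly per token, estimating a cost is a simple multiplication. As a minimal sketch (usage prices hard-coded from the table above; always verify against OpenAI's current price list):

```python
# Usage prices per 1,000 tokens, copied from the table above (USD).
USAGE_PRICE_PER_1K = {
    "ada": 0.0016,
    "babbage": 0.0024,
    "curie": 0.0120,
    "davinci": 0.1200,
}

def usage_cost(model: str, tokens: int) -> float:
    """Estimate the usage cost in USD for generating `tokens` tokens."""
    return USAGE_PRICE_PER_1K[model] / 1000 * tokens
```

For example, `usage_cost("davinci", 1000)` gives $0.12, matching the table.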

What costs to expect for using ChatGPT

To get a feel for the costs of automatically generating product description texts for your online store using Fozzels, see the table below.

As an example: 1,000 product description texts of 100 words each, generated by the "Davinci" ChatGPT engine, will cost around $2.67.

Model     Note            Price per 1,000 tokens   ≈ words per 1,000 tokens   Cost per word   Cost per 100-word text   Cost for 1,000 texts of 100 words
Ada       Fastest         $0.000400                750                        $0.000000533    $0.000053333             $0.053333333
Babbage                   $0.000500                750                        $0.000000667    $0.000066667             $0.066666667
Curie                     $0.002000                750                        $0.000002667    $0.000266667             $0.266666667
Davinci   Most powerful   $0.020000                750                        $0.000026667    $0.002666667             $2.666666667
Table with estimated pricing for 1,000 product description texts of 100 words each
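The worked example above can be reproduced with a short calculation. This is a sketch that assumes the base-model prices from the table and the 1,000-tokens-to-750-words rule of thumb:

```python
# Base-model prices per 1,000 tokens, from the table above (USD).
PRICE_PER_1K = {"ada": 0.0004, "babbage": 0.0005, "curie": 0.0020, "davinci": 0.0200}
WORDS_PER_1K_TOKENS = 750  # rule of thumb: 1,000 tokens is about 750 English words

def cost_for_texts(model: str, num_texts: int, words_per_text: int) -> float:
    """Estimate the USD cost of generating `num_texts` texts of `words_per_text` words each."""
    total_tokens = num_texts * words_per_text * 1000 / WORDS_PER_1K_TOKENS
    return total_tokens / 1000 * PRICE_PER_1K[model]
```

Here `cost_for_texts("davinci", 1000, 100)` comes out to about $2.67, the figure quoted above.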

What’s a token?

You can think of tokens as pieces of words used for natural language processing. For English text, 1 token is approximately 4 characters or 0.75 words. As a point of reference, the collected works of Shakespeare are about 900,000 words or 1.2M tokens.
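For a quick estimate without calling the API, you can apply the rules of thumb above directly (a rough heuristic for English text only; OpenAI's Tokenizer tool gives exact counts):

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate for English text: about 4 characters per token."""
    return max(1, round(len(text) / 4))

def estimate_words(tokens: int) -> float:
    """Convert a token count to words: 1 token is about 0.75 words."""
    return tokens * 0.75
```

For instance, `estimate_words(1000)` returns 750.0, the ratio used in the pricing tables above.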

To learn more about how tokens work and to estimate your usage: a) experiment with OpenAI's interactive Tokenizer tool; or b) log in to your OpenAI account and enter text into the Playground. The counter in the footer will display how many tokens are in your text.

Which model should I use?

While Davinci is generally the most capable model, the other models can perform certain tasks extremely well and, in some cases, significantly faster. They also have cost advantages. For example, Curie can perform many of the same tasks as Davinci, but faster and for 1/10th the cost. We encourage our customers to experiment to find the model that's most efficient for their application. Visit OpenAI's documentation for a more detailed model comparison.

OpenAI’s GPT-3 models can understand and generate natural language. OpenAI offers four main models with different levels of power suitable for different tasks: Davinci is the most capable, and Ada is the fastest. We recommend using Davinci while experimenting, since it will yield the best results. Once you’ve got things working, try the other models to see if you can get the same results with lower latency or at lower cost.


Davinci

Davinci is the most capable GPT-3 model. It can do any task the other models can do, often with higher quality, longer output, better instruction-following, and less instruction, and it also supports inserting completions within text. For applications requiring a lot of understanding of the content, like summarization for a specific audience and creative content generation, Davinci will produce the best results. These increased capabilities require more compute resources, so Davinci costs more per API call and is not as fast as the other models. Davinci also shines at understanding the intent of text: it is quite good at solving many kinds of logic problems and explaining the motives of characters, and it has been able to solve some of the most challenging AI problems involving cause and effect. Good at: complex intent, cause and effect, summarization for a specific audience.


Curie

Curie is extremely powerful, yet very fast. While Davinci is stronger when it comes to analyzing complicated text, Curie is quite capable for many nuanced tasks like sentiment classification and summarization. Curie is also quite good at answering questions, performing Q&A, and serving as a general chatbot. Good at: language translation, complex classification, text sentiment, summarization.


Babbage

Babbage can perform straightforward tasks like simple classification. It is also quite capable at semantic search: ranking how well documents match up with search queries. Good at: moderate classification, semantic search classification.


Ada

Ada is usually the fastest model and can perform tasks like parsing text, address correction, and certain kinds of classification that don't require too much nuance. Ada's performance can often be improved by providing more context. Good at: parsing text, simple classification, address correction, keywords. Note: any task performed by a faster model like Ada can also be performed by a more powerful model like Curie or Davinci.

