Prompt caching
What is prompt caching?
Prompt caching allows you to reduce overall request latency and cost for longer prompts that have identical content at the beginning of the prompt.
"Prompt" in this context is referring to the input you send to the model as part of your chat completions request. Rather than reprocess the same input tokens over and over again, the service is able to retain a temporary cache of processed input token computations to improve overall performance. Prompt caching has no impact on the output content returned in the model response beyond a reduction in latency and cost.
For supported models, cached tokens are billed at a discount on input token pricing, typically between 20% and 70%.
Caches are typically cleared within 5-10 minutes of inactivity and are always removed within one hour of the cache's last use.
Prompt caches aren't shared between Infron AI users.
Getting started
Anthropic Claude
Caching price changes:
Cache writes: charged at 1.25x the original input price
Cache reads: charged at 0.1x the original input price
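As a rough illustration of how these multipliers play out, the sketch below assumes a hypothetical base price of $3.00 per million input tokens (not a quoted rate for any specific model) and a 100k-token cached prefix. The cache write costs slightly more than an uncached request, but every subsequent cache read costs a fraction of it:

```python
# Hypothetical illustration of Anthropic cache pricing multipliers.
# The $3.00 per million input tokens figure is an assumed example price,
# not a quoted rate for any specific model.

input_price_per_mtok = 3.00      # assumed base input price (USD per 1M tokens)
cache_write_multiplier = 1.25    # cache writes cost 1.25x the input price
cache_read_multiplier = 0.10     # cache reads cost 0.1x the input price

prefix_tokens = 100_000          # size of the cached prompt prefix

def cost(tokens: int, multiplier: float) -> float:
    return tokens / 1_000_000 * input_price_per_mtok * multiplier

first_request = cost(prefix_tokens, cache_write_multiplier)  # writes the cache
later_request = cost(prefix_tokens, cache_read_multiplier)   # reads the cache
uncached = cost(prefix_tokens, 1.0)                          # no caching at all

print(f"first request (cache write):     ${first_request:.3f}")  # $0.375
print(f"each later request (cache read): ${later_request:.3f}")  # $0.030
print(f"same prefix without caching:     ${uncached:.3f}")       # $0.300
```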
Prompt caching with Anthropic requires the use of cache_control breakpoints. There is a limit of four breakpoints per request, and the cache expires within five minutes, so it is recommended to reserve breakpoints for large bodies of text such as character cards, CSV data, RAG data, or book chapters. There is also a minimum prompt size of 1024 tokens.
Click here to read more about Anthropic prompt caching and its limitations.
The cache_control breakpoint can only be inserted into the text part of a multipart message.
System message caching example:
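A minimal sketch of caching a large system prompt, assuming an OpenAI-compatible chat completions payload where a system message may carry multipart content with a cache_control field. The endpoint URL, API key, and model name are placeholders:

```python
import requests

# Sketch: cache a large, reusable block of system-prompt text.
# The URL, API key, and model name below are placeholders, not real values.
response = requests.post(
    "https://<your-endpoint>/v1/chat/completions",
    headers={"Authorization": "Bearer <API_KEY>"},
    json={
        "model": "anthropic/claude-sonnet-4",  # example model name
        "messages": [
            {
                "role": "system",
                "content": [
                    {"type": "text", "text": "You are a helpful assistant."},
                    {
                        "type": "text",
                        # Large, reusable body of text (book chapter, RAG data, ...).
                        "text": "HUGE TEXT BODY",
                        # Breakpoint: the prompt up to and including this part is cached.
                        "cache_control": {"type": "ephemeral"},
                    },
                ],
            },
            {"role": "user", "content": "What are the key takeaways from the text above?"},
        ],
    },
)
print(response.json())
```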
User message caching example:
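The same idea works inside a user message: place the cache_control field on the text part that holds the large, unchanging context. As above, this is a sketch with placeholder endpoint, key, and model name:

```python
import requests

# Sketch: cache a large block of reference text inside a user message.
# The URL, API key, and model name are placeholders.
response = requests.post(
    "https://<your-endpoint>/v1/chat/completions",
    headers={"Authorization": "Bearer <API_KEY>"},
    json={
        "model": "anthropic/claude-sonnet-4",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Given the book below:"},
                    {
                        "type": "text",
                        "text": "HUGE TEXT BODY",                # large reusable context
                        "cache_control": {"type": "ephemeral"},  # cache breakpoint
                    },
                    {"type": "text", "text": "Name all the characters in the above book."},
                ],
            },
        ],
    },
)
print(response.json())
```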
OpenAI
Caching price changes:
Cache writes: no cost
Cache reads: charged at 0.25x or 0.50x the original input price, depending on the model
Click here to view OpenAI's cache pricing per model.
Prompt caching with OpenAI is automated and does not require any additional configuration. There is a minimum prompt size of 1024 tokens.
Click here to read more about OpenAI prompt caching and its limitations.
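Because caching is automatic, the request itself needs nothing special; the main lever you have is keeping the static part of the prompt at the front so repeated requests share the same prefix. The sketch below uses placeholder endpoint, key, and model values, and assumes the OpenAI-style usage.prompt_tokens_details.cached_tokens field is passed through in the response unchanged:

```python
import requests

# Sketch: OpenAI prompt caching is automatic, so no extra fields are needed.
# Endpoint URL, API key, and model name are placeholders; the
# usage.prompt_tokens_details.cached_tokens field mirrors OpenAI's response
# format and is assumed to be forwarded as-is.
response = requests.post(
    "https://<your-endpoint>/v1/chat/completions",
    headers={"Authorization": "Bearer <API_KEY>"},
    json={
        "model": "openai/gpt-4o",
        "messages": [
            # Keep the long, static part of the prompt first so repeated
            # requests share the same prefix and become eligible for caching.
            {"role": "system", "content": "LONG STATIC INSTRUCTIONS (>= 1024 tokens) ..."},
            {"role": "user", "content": "Today's question goes here."},
        ],
    },
)

usage = response.json().get("usage", {})
cached = usage.get("prompt_tokens_details", {}).get("cached_tokens", 0)
print(f"{cached} of {usage.get('prompt_tokens', 0)} prompt tokens were read from cache")
```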
Grok
Caching price changes:
Cache writes: no cost
Cache reads: charged at 0.25x the original input price
Click here to view Grok's cache pricing per model.
Prompt caching with Grok is automated and does not require any additional configuration.
Google Gemini
Implicit Caching
Gemini 2.5 Pro and 2.5 Flash models now support implicit caching, which provides automatic caching similar to OpenAI's. Implicit caching works seamlessly: no manual setup or additional cache_control breakpoints are required.
Caching price changes:
No cache write or storage costs.
Cached tokens are charged at 0.25x the original input token cost.
Note that the cache TTL averages 3-5 minutes but can vary. For requests to be eligible for caching, there is a minimum of 1028 tokens for Gemini 2.5 Flash and 2048 tokens for Gemini 2.5 Pro.
Official announcement from Google
To maximize implicit cache hits, keep the initial portion of your message arrays consistent between requests, and push variations (such as user questions or dynamic context elements) toward the end of the prompt.
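A minimal sketch of that pattern, assuming an OpenAI-compatible chat completions payload (the model name and content are placeholders): the stable blocks always come first, and only the trailing user message changes between requests.

```python
# Sketch: maximizing implicit cache hits with Gemini 2.5 models by keeping the
# start of the message array identical across requests.
# The model name and message content are placeholders.

STABLE_PREFIX = [
    # Identical on every request, so repeated prompts share the same prefix.
    {"role": "system", "content": "You are a support agent for ExampleCorp."},
    {"role": "user", "content": "REFERENCE DOCUMENTATION (large, unchanging text) ..."},
]

def build_messages(question: str) -> list[dict]:
    # Only the trailing user message varies between requests.
    return STABLE_PREFIX + [{"role": "user", "content": question}]

payload = {
    "model": "google/gemini-2.5-flash",
    "messages": build_messages("How do I reset my password?"),
}
print(payload)
```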