
tokencost

This tool provides client-side token counting and up-to-date USD price estimates for over 400 LLMs, a key input for cost management in LLM applications. It tracks provider pricing, which changes frequently.


Questions & Answers

What is tokencost?
Tokencost is a Python library that provides client-side token counting and USD price estimation for over 400 Large Language Models (LLMs). It helps developers calculate the cost of prompts and completions when using major LLM APIs.
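The core idea can be sketched in a few lines: multiply the prompt and completion token counts by per-token USD prices. This is a minimal illustration, not tokencost's actual API; the model name and prices below are hypothetical (the real library ships a maintained price table).

```python
# A minimal sketch of prompt/completion cost estimation.
# The model name and per-token prices are hypothetical, for illustration only;
# tokencost itself bundles real, regularly updated provider prices.
from decimal import Decimal

PRICES = {
    "example-model": {"input": Decimal("0.000001"), "output": Decimal("0.000002")},
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> Decimal:
    """Cost = prompt tokens x input price + completion tokens x output price."""
    p = PRICES[model]
    return prompt_tokens * p["input"] + completion_tokens * p["output"]

print(estimate_cost("example-model", 1000, 500))  # 0.001 + 0.001 = 0.002000
```

Using `Decimal` rather than floats avoids rounding drift when summing many tiny per-token prices.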
Who should use tokencost?
Tokencost is designed for developers building LLM applications and AI agents who need to accurately track and estimate the monetary cost of API calls. It is useful for anyone concerned with managing expenses related to LLM usage.
How does tokencost ensure accurate pricing?
Tokencost distinguishes itself by actively tracking and updating pricing changes from major LLM providers. For token counts, it relies on official tokenizers: tiktoken for OpenAI models, and the Anthropic beta token counting API for newer Anthropic models (Claude 3 and above).
When is tokencost most useful?
Tokencost is most useful during the development and operation of LLM-powered applications, especially when cost optimization is a concern. It allows developers to estimate expenses pre-deployment or monitor costs in real-time, aiding in budget management and model selection.
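Pre-deployment estimation often amounts to pricing the same expected workload against several candidate models and picking the cheapest that meets quality needs. A sketch of that comparison, with made-up model names and prices (real numbers would come from tokencost's price data):

```python
# Comparing a projected workload across models before choosing one.
# Model names and per-token USD prices are hypothetical.
from decimal import Decimal

PRICES = {
    "model-small": {"input": Decimal("0.0000005"), "output": Decimal("0.0000015")},
    "model-large": {"input": Decimal("0.000010"), "output": Decimal("0.000030")},
}

def workload_cost(model: str, prompt_tokens: int, completion_tokens: int) -> Decimal:
    p = PRICES[model]
    return prompt_tokens * p["input"] + completion_tokens * p["output"]

# Projected monthly usage: 2M prompt tokens, 500k completion tokens.
workload = (2_000_000, 500_000)
costs = {m: workload_cost(m, *workload) for m in PRICES}
cheapest = min(costs, key=costs.get)
print(cheapest, costs[cheapest])  # model-small 1.7500000
```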
How does tokencost count tokens?
For OpenAI models, tokencost utilizes tiktoken, OpenAI's official tokenizer, to split text into tokens and handle message formatting. For Anthropic models from version 3 onward, it uses the Anthropic beta token counting API, while older Claude models are approximated with tiktoken's cl100k_base encoding.