Official public pricing only
We use public pricing pages and documentation. Unclear or changing data is flagged for review.
Weekly checks
We review pricing sources each week and check changes before updating live figures.
Fair comparisons
Subscription and credit-based tools are shown separately when a fair per-token comparison is not possible.
Estimate your prompt cost
Enter your prompt and choose your expected reply length to get your estimate.
What your estimate includes
Actual costs will vary with your prompt length, your reply length, and current pricing.
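As a rough rule of thumb (an assumption for illustration, not any provider's official tokeniser), English text averages around four characters per token. A minimal sketch of that kind of estimate:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the common ~4 characters per token
    heuristic for English text. Real tokeniser counts will differ."""
    return max(1, round(len(text) / 4))

prompt = "Explain what a closure is in JavaScript in two sentences."
print(estimate_tokens(prompt))
```

A heuristic like this is only good enough for budgeting; code, punctuation-heavy text, and non-English input all tokenise at different densities.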
Estimate summary
- Estimated input tokens: 25
- Estimated output tokens: 2,500
- Estimated total tokens: 2,525
Best comparable result
Mistral API — Mistral Small 3.2 currently shows the lowest fair token-based estimate for this prompt.
- Estimated input cost: $0.000003
- Estimated output cost: $0.00075
- Estimated total: $0.000753
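The totals above follow from simple per-token arithmetic. A sketch, assuming illustrative per-million-token rates of $0.12 (input) and $0.30 (output) that are back-derived from the figures above rather than taken from any provider's pricing page:

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_rate_per_m: float, output_rate_per_m: float) -> dict:
    """Cost = tokens x per-token rate; rates are quoted per million tokens."""
    input_cost = input_tokens * input_rate_per_m / 1_000_000
    output_cost = output_tokens * output_rate_per_m / 1_000_000
    return {"input": input_cost, "output": output_cost,
            "total": input_cost + output_cost}

# Rates below are illustrative assumptions, not published pricing.
print(estimate_cost(25, 2_500, input_rate_per_m=0.12, output_rate_per_m=0.30))
```

Note that even with 100x fewer input than output tokens, the output side dominates the total because of both volume and rate.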
Pricing confidence
High confidence
Tools with subscription or credit pricing may be labelled separately instead of ranked.
Real-world prompt examples
Illustrative examples only. Token counts vary by provider and tokeniser. “Cheapest comparable” only appears where a fair per-token comparison exists.
| Length | Estimated tokens | Example prompt |
|---|---|---|
| Short | in 28 / out 80 | Explain what a closure is in JavaScript in two sentences. |
| Short | in 22 / out 90 | Write a polite email declining a meeting next Tuesday. |
| Medium | in 140 / out 650 | Outline a test plan for a Next.js API route that validates JSON with Zod. |
| Medium | in 180 / out 700 | Write a one-page project brief for a small garden redesign with a deck, paving, lighting, and a low-maintenance planting plan. |
| Long | in 900 / out 1,200 | Summarise these meeting notes, group the decisions, risks, and actions, and turn them into a clean client update. |
| Long | in 1,200 / out 1,500 | Review this feature specification, identify gaps, edge cases, and contradictions, and suggest a cleaner version for engineering. |
| Very long / complex | in 3,800 / out 2,600 | Given this repo context, these errors, and these requirements, propose the safest implementation plan, explain risks, and generate the first pass of code changes. |
| Very long / complex | in 4,500 / out 3,200 | Analyse this long contract, compare it against our commercial terms, identify problem clauses, suggest fallback wording, and draft a negotiation summary. |
Why one prompt can cost more on one tool than another
Input and output
You pay for what you send in and what you get back. Longer replies usually mean more output tokens, and output is often priced higher than input on public API tables.
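To see why the reply usually drives the bill, here is a sketch with assumed rates where output is priced 3x higher than input (a common pattern on public API tables, not any specific provider's figures):

```python
# Assumed illustrative rates, USD per million tokens.
INPUT_RATE = 0.10
OUTPUT_RATE = 0.30  # output is often priced higher than input

def cost_usd(tokens: int, rate_per_m: float) -> float:
    """Convert a token count and a per-million-token rate into dollars."""
    return tokens * rate_per_m / 1_000_000

# Same 140-token prompt, two different reply lengths.
short_reply = cost_usd(140, INPUT_RATE) + cost_usd(100, OUTPUT_RATE)
long_reply = cost_usd(140, INPUT_RATE) + cost_usd(650, OUTPUT_RATE)
print(f"short reply total: ${short_reply:.6f}")
print(f"long reply total:  ${long_reply:.6f}")
```

With the prompt held constant, the longer reply multiplies the cost, which is why choosing an expected reply length matters as much as the prompt itself.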
Different models, different rates
Each provider sets its own prices. A smaller or older model is not always worse; it is often cheaper for a first draft or a simpler task.
Subscriptions and credit plans
Some tools wrap models in a monthly plan or credit bundle. That is real spend, but it is not the same as a simple per-token rate for every user, so we treat it separately.
Three comparison types
- Exactly comparable: clear public input and output pricing.
- Estimated comparable: pricing can be normalised with stated limits.
- Not directly comparable: bundled, seat-based, or mixed billing, explained separately rather than ranked as “cheapest”.
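One way to sketch that three-way rule in code (the field names and categories here are assumptions for illustration, not the site's actual data model):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ToolPricing:
    # Hypothetical fields for illustration only.
    input_per_m: Optional[float] = None   # USD per million input tokens
    output_per_m: Optional[float] = None  # USD per million output tokens
    normalised_estimate: bool = False     # rate derived with stated limits
    bundled: bool = False                 # subscription, seat, or credit billing

def comparison_type(p: ToolPricing) -> str:
    """Classify a tool into one of the three comparison buckets."""
    if p.bundled:
        return "not directly comparable"
    if p.input_per_m is not None and p.output_per_m is not None:
        return "estimated comparable" if p.normalised_estimate else "exactly comparable"
    return "not directly comparable"

print(comparison_type(ToolPricing(input_per_m=0.10, output_per_m=0.30)))
```

Only tools in the first two buckets are eligible for a “cheapest comparable” ranking; the third is described rather than ranked.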
How we keep this honest
- Sources: official public pricing pages and documentation only.
- Review: when a pricing page changes or looks unclear, we flag it for review before updating live figures.
- Comparison rule: we only show “cheapest comparable” where a fair per-token comparison exists.
- Limits: prices can change without warning. Treat the result as a planning estimate, not a quote.
Frequently asked questions
What is a token?
A token is a small chunk of text a model reads or writes. It is not always a whole word; punctuation, code, and spacing can split differently from what you expect.
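Real tokenisers vary by provider, but a toy split makes the point that punctuation and spacing break text into more pieces than a word count suggests. This is an illustration only, not how any production tokeniser works:

```python
import re

def toy_tokenise(text: str) -> list[str]:
    # Naive illustration: separate out runs of word characters and
    # individual punctuation marks. Production tokenisers (BPE and
    # similar) split on learned subword units instead.
    return re.findall(r"\w+|[^\w\s]", text)

print(toy_tokenise("Don't worry - it's roughly 4 chars/token."))
```

Even this crude split turns a five-word sentence into a much longer list of pieces, which is why token counts routinely exceed word counts.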
Why can output cost more than input?
Providers typically set different per-token rates for input and output, and output rates are often higher on public API tables. Longer replies also mean more output tokens, so the reply usually drives most of the bill.
Why do you need my email?
We use it to tie each saved estimate to you so your results are not visible to everyone. We only check that it looks like a valid email address. Your browser can remember it on this device, and we also set a secure cookie so you do not have to type it every time.
Is pricing live?
No. We check public pricing sources on a weekly schedule. Your actual bill can still differ if a provider changes rates, tiers, taxes, or add-ons between checks.
Are all tools directly comparable?
No. Seat plans, bundled credits, and mixed billing cannot honestly be reduced to one flat per-token rate for everyone, so we explain those separately instead of forcing a fake ranking.
How accurate are the estimates?
Token counts are rough budgeting figures, not exact tokeniser output. Costs use published pricing where we can apply it fairly. Use the result as guidance, not a quote.
Do you store my prompts?
No. We use prompts to generate the cost estimate and select the tool. Even though prompts aren't stored, do not write or paste secrets or personal data you would not email to yourself.
Why is there no single cheapest tool for everything?
Because pricing models differ. Some tools sell direct API usage, whilst others package models into subscriptions or credits. We only rank prices where the comparison is genuinely fair.
Before you prompt, see the cost
Paste your prompt and get an estimate of the token and dollar cost.