Cerebras

Tokenomics analysis of the Cerebras WSE-3 for inference with leading open source models. Source: SemiAnalysis Tokenomics team.

Calculator inputs: model, parameter count (B), layer count, average input sequence length (default 96.3k tokens), and average output sequence length (k tokens).

Default (96.3k) is the P50 input sequence length from our internal testing across Claude Code, Codex, Cursor, OpenCode, and Pi. Output tokens are derived from the workload mix in the Cost to serve panel below.

Systems needed to run one copy of the model: 42
Memory composition: 1.84 TB total (KV cache 97%)
  Weights:   60 GB
  KV cache:  1.78 TB
Est. capex (@ $1.00M per WSE-3): $42M
Layers per wafer: 0.86
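The system count above follows from the memory footprint. A minimal sketch, assuming each WSE-3 carries roughly 44 GB of on-wafer SRAM and that weights plus KV cache must fit entirely in SRAM (the 44 GB figure is the published WSE-3 spec, not taken from this panel):

```python
import math

SRAM_PER_SYSTEM_GB = 44     # assumed on-wafer SRAM per WSE-3

weights_gb = 60             # model weights (quantized), from the panel
kv_cache_gb = 1780          # KV cache across all concurrent requests
price_per_system_musd = 1.00

total_gb = weights_gb + kv_cache_gb
systems = math.ceil(total_gb / SRAM_PER_SYSTEM_GB)
capex_musd = systems * price_per_system_musd
kv_share = kv_cache_gb / total_gb

print(systems, f"${capex_musd:.0f}M", f"KV share {kv_share:.0%}")
# With the panel's numbers this reproduces 42 systems, $42M capex, ~97% KV share.
```

Note that the KV cache, not the weights, dominates the footprint, so the system count scales with concurrency and context length rather than model size.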
Additional controls: weight quantization, KV cache quantization, and WSE-3 compute.
Feasibility phase diagram
[Phase diagram: average ISL (log scale, 16k to 10M tokens) vs. model size (log scale, 125B to 4T params), with feasibility contours at 32, 64, and 128 systems. DeepSeek V4, DeepSeek V3, Kimi K2.6, gpt-oss 120B, and GLM 4.7 are plotted at 1M context.]
Cost to serve 1M tokens

Hardware cost per Mtok = (systems × $41.96/hr) ÷ cluster throughput. Each concurrent request splits its wall-time between output decode (at the Interactivity rate) and non-output tokens (cache reads, cache writes, input prefill, all at the Cache read throughput).
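The formula above can be sketched end to end. All throughput and concurrency values below are illustrative placeholders, not Cerebras measurements; only the system count and $/hr rate come from the calculator:

```python
# Hardware cost per Mtok = (systems x $/hr) / cluster throughput.
# Each request splits wall-time between output decode and cache-rate tokens.
systems = 42
dollars_per_system_hr = 41.96   # WSE-3 rate from the calculator

concurrent = 1000               # concurrent requests (assumed)
isl = 96_300                    # non-output tokens per request (P50 ISL above)
osl = 213                       # output tokens per request (P50 OSL below)
r_decode = 1_200                # output decode rate, tok/s (illustrative)
r_cache = 12_000                # cache-read throughput, tok/s (10x decode, assumed)

wall_time_s = osl / r_decode + isl / r_cache      # seconds per request turn
tokens_per_request = isl + osl
cluster_tok_per_s = concurrent * tokens_per_request / wall_time_s
cluster_mtok_per_hr = cluster_tok_per_s * 3600 / 1e6

cost_per_mtok = systems * dollars_per_system_hr / cluster_mtok_per_hr
print(f"${cost_per_mtok:.3f} per Mtok")
```

Because the ISL dwarfs the OSL, almost all wall-time goes to cache-rate tokens, which is why the cache-read throughput assumption dominates the result.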

Note on interactivity: Interactivity (output decode rate) and Cache read throughput both feed the hardware cost. In the market, interactivity also drives selling-price tier (faster = premium), but this calculator leaves selling price under your control.

Note on cache read throughput: We assume cache read throughput is 5x to 20x faster than the output decode rate. That ratio is reflected in pricing: cache-read tokens are typically priced at roughly 1/10 to 1/20 of output tokens across the listed providers.

Interactivity (tok / sec)

Inputs from the first card: the Avg ISL and Concurrent requests values you set above feed directly into this cost calculation. As ISL and concurrent requests grow, the output throughput each request can sustain drops (more KV bandwidth contention per decode step), so the interactivity here is the effective rate after that contention.

Cache read throughput (tok / sec), shown as a multiple vs. output decode.

Assumes prefill runs on Cerebras at the WSE-3 $/hr above. In practice prefill can be offloaded to a separate GPU or Trainium fleet at a lower $/hr, which would reduce the hardware cost per Mtok beyond what this calculator shows.
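As a rough illustration of that offload effect, one can blend the hourly rate across the prefill and decode phases. The GPU-fleet rate and the wall-time split below are placeholder assumptions:

```python
# Illustrative prefill offload: charge prefill wall-time at a cheaper fleet
# rate instead of the WSE-3 rate. Both the rate and the split are assumed.
wse3_rate = 41.96        # $/hr/system, from the calculator
offload_rate = 10.00     # $/hr for a GPU/Trainium prefill fleet (assumed)
prefill_fraction = 0.5   # share of wall-time spent on prefill (assumed)

blended_rate = (1 - prefill_fraction) * wse3_rate + prefill_fraction * offload_rate
print(f"effective rate: ${blended_rate:.2f}/hr vs ${wse3_rate:.2f}/hr all-WSE-3")
```

Under these placeholder numbers the effective rate roughly halves, which would flow directly into the hardware cost per Mtok.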

WSE-3 rate: $41.96 / hr / system. Pricing and workload presets set the token mix below, shown either by token volume (%) or by token spend (%).
Token type     $/Mtok    Mix (%)    Token spend ($)    Token spend (%)
Cache read                          $0.166             55.8%
Cache write                         $0.082             27.7%
Input                               $0.007             2.4%
Output                              $0.042             14.1%
Total                    100%       $0.297             100.0%
Selling price per Mtok (blended): $0.30
WSE-3 hardware cost per Mtok:     $0.095
Margin per Mtok:                  $0.20
Margin %:                         68%
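The blended price and margin follow directly from the spend table: summing per-type spend gives the blended selling price, and subtracting the hardware cost gives the margin. A minimal check using the table's numbers:

```python
# Reproduce the blended price and margin from the token-spend table above.
spend_per_mtok = {            # $ contribution of each token type per blended Mtok
    "cache_read": 0.166,
    "cache_write": 0.082,
    "input": 0.007,
    "output": 0.042,
}
hardware_cost_per_mtok = 0.095    # WSE-3 hardware cost from the calculator

blended_price = sum(spend_per_mtok.values())       # ~$0.297 per Mtok
margin = blended_price - hardware_cost_per_mtok    # ~$0.20 per Mtok
margin_pct = margin / blended_price                # ~68%

print(f"blended ${blended_price:.3f}, margin ${margin:.2f} ({margin_pct:.0%})")
```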

Where the workload mix comes from

Token-type breakdown observed across our internal workload volume and spend on the leading coding assistants (Claude Code, Codex, Cursor, OpenCode, Pi). Cache reads dominate by volume, but cache writes and outputs disproportionately drive spend.

[Charts: Token Mix by Volume; Token Mix by Cost (current pricing).]

Where our ISL assumption comes from

Input sequence length distribution across agentic coding harnesses (Claude Code, Codex, Cursor, OpenCode, Pi). P50 lands at ~96.3k tokens.


Where our OSL assumption comes from

Output sequence length across the same harnesses. P50 lands at ~213 tokens: most turns are short replies.


Where our interactivity assumption comes from

Interactivity (output tok/s) on Cerebras: smaller models go faster, larger models go slower.

Source: Artificial Analysis: Cerebras provider page.

Per-model interactivity defaults used in the calculator

Fit to interactivity ≈ 3007 / active_params_B^0.234, using gpt-oss 120B (2059) and GLM 4.7 (1201) as anchors. DeepSeek V3/V4 are bumped slightly above the curve to reflect MLA's smaller per-step KV bandwidth requirement.

Model          Active params    Interactivity (tok / sec)
DeepSeek V4    80 B             1,150
Kimi K2.6      32 B             1,400
gpt-oss 120B   5.1 B            2,059
GLM 4.7        32 B             1,201
DeepSeek V3    37 B             1,350
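The power-law fit above can be evaluated directly. Note the table's defaults deviate from the raw curve where the note says so (the MLA models sit above it, and Kimi K2.6's default is also above the curve):

```python
# Evaluate the interactivity fit: tok/s ~ 3007 / active_params_B ** 0.234
def fit_tok_per_s(active_params_b: float) -> float:
    return 3007 / active_params_b ** 0.234

for name, params in [("gpt-oss 120B", 5.1), ("GLM 4.7", 32), ("DeepSeek V3", 37)]:
    print(f"{name}: {fit_tok_per_s(params):.0f} tok/s")
```

The shallow exponent (0.234) is the key property: a ~6x increase in active parameters (5.1B to 32B) only costs about a 1.5x drop in decode rate on this curve.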

Cerebras WSE-3 vs other chips

Comparing chip and system specs.
