fireworks/kimi-k2p5

Common Name: Kimi K2.5

Fireworks
Released on Jan 27
Supported: Tool Invocation, Reasoning

Kimi K2.5 is Moonshot AI's flagship agentic model and a new SOTA open model. It unifies vision and text, thinking and non-thinking modes, and single-agent and multi-agent execution into one model. Kimi K2.5 is a mixture-of-experts (MoE) language model with 1 trillion total parameters and a 262K context window.
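For orientation, here is a minimal sketch of calling the model through an OpenAI-compatible chat completions API. The base URL, API key placeholder, and exact model identifier are illustrative assumptions, not values confirmed by this listing; the mixed text-and-image message reflects the model's unified vision-and-text input.

    from openai import OpenAI

    # Assumed OpenAI-compatible endpoint; check the provider's docs for the real base URL.
    client = OpenAI(
        base_url="https://api.fireworks.ai/inference/v1",
        api_key="YOUR_API_KEY",
    )

    # Kimi K2.5 accepts text and image content in a single message.
    response = client.chat.completions.create(
        model="fireworks/kimi-k2p5",  # slug as shown in this listing; the exact ID may differ
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe this image in one sentence."},
                    {"type": "image_url", "image_url": {"url": "https://example.com/photo.png"}},
                ],
            }
        ],
    )
    print(response.choices[0].message.content)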

Specifications

Context: 262.1K
Input: text, image
Output: text

Pricing

Input: $0.66 / M tokens
Cached Input: $0.11 / M tokens
Output: $3.30 / M tokens
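
These rates are per million tokens, so a request's cost is the sum over fresh input, cached input, and output tokens. A small sketch with made-up token counts:

    # Rates from the table above, converted to dollars per token.
    INPUT_RATE = 0.66 / 1_000_000
    CACHED_RATE = 0.11 / 1_000_000
    OUTPUT_RATE = 3.30 / 1_000_000

    # Hypothetical request: 30K fresh input, 10K cached input, 2K output tokens.
    fresh_in, cached_in, out = 30_000, 10_000, 2_000
    cost = fresh_in * INPUT_RATE + cached_in * CACHED_RATE + out * OUTPUT_RATE
    print(f"${cost:.4f}")  # $0.0275 = $0.0198 + $0.0011 + $0.0066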

Similar Models

Pricing: $1.10 in / $3.52 out per M tokens · Context: 203K

Z.ai's state-of-the-art mixture-of-experts model with 40B active parameters out of 744B total. Optimized for complex systems engineering and long-horizon agentic tasks, using DeepSeek Sparse Attention for efficient long-context processing.

Pricing: $0.66 in / $2.75 out per M tokens · Context: 256K

Kimi K2 0905 is an updated version of Kimi K2, a state-of-the-art mixture-of-experts (MoE) language model with 32 billion activated parameters and 1 trillion total parameters. It brings improved coding abilities, better agentic tool use, and a longer (262K) context window.

Pricing: $0.62 in / $1.85 out per M tokens · Context: 160K

DeepSeek-V3.1 is post-trained on top of DeepSeek-V3.1-Base, which is built upon the original V3 base checkpoint through a two-phase long-context extension approach, following the methodology outlined in the original DeepSeek-V3 report. The dataset was expanded with additional long documents, and both training phases were substantially extended: the 32K extension phase was increased 10-fold to 630B tokens, and the 128K extension phase was extended 3.3x to 209B tokens. Additionally, DeepSeek-V3.1 is trained using the UE8M0 FP8 scale data format to ensure compatibility with microscaling data formats.

Pricing: $0.99 in / $0.99 out per M tokens · Context: 160K

A strong mixture-of-experts (MoE) language model from DeepSeek, with 671B total parameters and 37B activated for each token. Updated checkpoint.