Mistral Large
European sovereignty and disciplined function calling.
Mistral Large is a high-performance large language model developed by French AI company Mistral AI, released in 2024 as the flagship of its lineup. It focuses on text-based natural language understanding and generation, aimed at developers and product teams that need a versatile model for demanding tasks.
The model's defining feature is its 128,000-token context window, which lets it stay coherent across exceptionally long inputs. Its text-only architecture concentrates on efficiency and precision in language tasks, suiting applications that require detailed context retention and advanced linguistic capability.
Mistral Large occupies the flagship tier of the Mistral family, a marked step up from the company's earlier models, and is aimed at large-scale, context-heavy workloads.
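Before sending a long document, it helps to estimate whether it fits in the 128K-token window. A back-of-envelope sketch, assuming roughly 4 characters per token for English prose (a common heuristic, not Mistral's actual tokenizer; real counts vary by language and content):

```python
# Rough check of whether a document fits in the 128K-token context window.
# Assumes ~4 characters per token for English prose -- a heuristic estimate,
# not Mistral's tokenizer; use the vendor tokenizer for exact counts.

CONTEXT_WINDOW = 128_000
CHARS_PER_TOKEN = 4  # heuristic assumption

def estimated_tokens(text: str) -> int:
    """Approximate token count from character length."""
    return len(text) // CHARS_PER_TOKEN

def fits_in_context(text: str, reserve_for_output: int = 4_000) -> bool:
    """True if the text likely fits, leaving room for the model's reply."""
    return estimated_tokens(text) <= CONTEXT_WINDOW - reserve_for_output

doc = "word " * 100_000  # ~500,000 characters
print(estimated_tokens(doc), fits_in_context(doc))  # estimate only
```

Reserving a few thousand tokens for the response avoids truncated outputs when the prompt nearly fills the window.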
Background
Mistral AI SAS is a French artificial intelligence (AI) company headquartered in Paris. Founded in 2023, it develops large language models (LLMs), offering both open-weight and proprietary models. As of 2025 the company is valued at more than US$14 billion.
— Wikipedia
Specs
- Context window: 128K tokens
- Input ($/1M tokens): $2.00
- Output ($/1M tokens): $6.00
- Modalities: Text
- Weights: Closed
Pricing last synced Apr 27, 2026 via OpenRouter. Confirm against official docs before committing.
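The listed rates make per-request cost easy to estimate. A minimal sketch using the prices above ($2.00 input / $6.00 output per 1M tokens); confirm current rates against the official docs before relying on it:

```python
# Estimate request cost from the listed per-million-token rates.
# Rates are as synced above; confirm against official docs before use.

INPUT_PER_M = 2.00   # USD per 1M input tokens (listed rate)
OUTPUT_PER_M = 6.00  # USD per 1M output tokens (listed rate)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request at the listed rates."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# e.g. a long-document summarization call: 100K tokens in, 2K tokens out
print(round(request_cost(100_000, 2_000), 4))  # -> 0.212
```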
Capabilities
- Tool use: supported (function calling)
- Vision: not supported (text-only)
- Extended thinking: not supported
- Prompt caching: not supported
- Open weights: no (weights are closed)
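Tool use maps to Mistral's function-calling interface, which accepts OpenAI-style tool definitions with JSON Schema parameters. A sketch of one tool entry; the `get_weather` tool and its fields are hypothetical examples, not part of the API:

```python
# Sketch of a tool definition for function calling. The "tools" entry
# follows the OpenAI-style schema that Mistral's chat API accepts;
# get_weather and its parameters are hypothetical examples.

weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool name
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# The tool list rides along with the chat request body:
body = {
    "model": "mistral-large-latest",
    "messages": [{"role": "user", "content": "Weather in Paris?"}],
    "tools": [weather_tool],
    "tool_choice": "auto",  # let the model decide whether to call the tool
}
print(body["tools"][0]["function"]["name"])
```

When the model elects to call the tool, the response carries the call's name and JSON arguments for your code to execute and return.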
What it excels at
Extended context processing
A 128,000-token context window handles lengthy inputs while maintaining coherence end to end.
Efficient text generation
Produces coherent, well-structured output even for long-form content.
Scalability across workloads
Handles large volumes of text across varied text-based pipelines without degradation.
Accuracy in complex tasks
Resolves nuanced linguistic challenges with precision and sensitivity to context.
Fast inference
Optimized for low-latency responses in interactive scenarios that demand high responsiveness.
When to use this model
- Processing long documents — Handles and analyzes large texts with its extensive context capacity.
- Comprehensive content creation — Generates well-structured articles, reports, or narratives over extended lengths.
- Advanced conversational AI — Maintains coherence across prolonged multi-turn dialogues for conversational interfaces.
- Detailed summarization — Condenses lengthy texts into concise summaries while capturing key details.
- Research and technical analysis — Processes and synthesizes vast amounts of textual data for use in research or technical fields.
Analysis synthesized from gpt-4o, llama-4-maverick, phi-4, etc.
API model ID
mistral-large-latest
Vendor docs: docs.mistral.ai
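A minimal request sketch against Mistral's chat completions endpoint using only the standard library. The endpoint and payload shape follow Mistral's public API docs; the prompt content is illustrative, and a valid `MISTRAL_API_KEY` is needed to actually send the request:

```python
# Minimal request sketch for Mistral's chat completions endpoint.
# Builds the request without sending it; sending requires a valid API key.

import json
import os
import urllib.request

URL = "https://api.mistral.ai/v1/chat/completions"

payload = {
    "model": "mistral-large-latest",
    "messages": [
        # illustrative prompt
        {"role": "user", "content": "Summarize the key risks in this clause."},
    ],
}

def build_request(api_key: str) -> urllib.request.Request:
    """Assemble the HTTP request; urlopen(req) would send it."""
    return urllib.request.Request(
        URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request(os.environ.get("MISTRAL_API_KEY", "demo-key"))
print(req.full_url, payload["model"])
```

Passing `mistral-large-latest` pins the request to the current Mistral Large release without hard-coding a dated version string.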
Compare Mistral Large with
Mistral Large vs Claude Opus 4.7
Anthropic's heavyweight for hard reasoning and agentic work.
Mistral Large vs Claude Sonnet 4.6
The pragmatic default — Claude quality without Opus pricing.
Mistral Large vs Claude Haiku 4.5
Fast, cheap, surprisingly capable for high-volume jobs.
Mistral Large vs GPT-5.4
OpenAI's flagship — broadest modality and ecosystem coverage.
Mistral Large vs GPT-5.4 Mini
GPT-5 economics for high-volume routine tasks.
Mistral Large vs Gemini 3.1 Pro
Google's latest frontier model with expanded reasoning.