latentbrief
Launch · 1h ago

DeepSeek V4 Revolutionizes AI Coding with Affordable Pricing

Hacker News · 1 min brief

In brief

  • DeepSeek has launched its V4 model, an open-source AI system available under the MIT license.
  • Priced at $0.30 per million output tokens, it’s significantly cheaper than competitors like Claude Opus 4.7 ($25) and GPT-5.5 ($30).
  • The model scored 80.6% on SWE-bench Verified, just 0.2 points behind Claude Opus 4.6.
  • Its efficient architecture, including a 1.6-trillion-parameter MoE and optimized inference processes, makes this pricing sustainable without being a loss leader.
  • While self-hosting requires substantial resources, the model's performance on coding benchmarks such as LiveCodeBench and Codeforces is competitive with the best closed models, challenging their dominance.
  • However, concerns about benchmark transparency and data governance remain.
  • This release resets the price floor for high-quality coding AI, potentially forcing competitors to adjust their strategies or emphasize unique features.
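To make the pricing gap concrete, here is a minimal sketch comparing output-token costs at the per-million-token prices quoted above. It assumes costs scale linearly with tokens and ignores input-token and caching prices, which the brief does not cover; the workload size is illustrative.

```python
# Quoted per-million-output-token prices from the brief (USD).
PRICE_PER_M_OUTPUT = {
    "DeepSeek V4": 0.30,
    "Claude Opus 4.7": 25.00,
    "GPT-5.5": 30.00,
}

def output_cost(model: str, output_tokens: int) -> float:
    """Dollar cost of generating `output_tokens` output tokens with `model`,
    assuming simple linear pricing (no input-token or caching costs)."""
    return PRICE_PER_M_OUTPUT[model] / 1_000_000 * output_tokens

# Illustrative workload: a coding agent emitting 2M output tokens.
for model in PRICE_PER_M_OUTPUT:
    print(f"{model}: ${output_cost(model, 2_000_000):.2f}")
```

At these list prices the same 2M-token workload differs by roughly two orders of magnitude across providers, which is the substance of the "price floor" claim.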

Terms in this brief

MoE
Mixture of Experts — a technique where a large neural network is divided into smaller, specialized sub-networks (experts) that work together to make decisions. This approach improves efficiency and scalability in AI models.
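As a toy illustration of the routing idea (not DeepSeek's actual architecture): a gate scores each expert for a given input, only the top-k experts run, and their outputs are combined weighted by the renormalized gate scores. All names and sizes below are hypothetical.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_forward(x, gate_w, experts, k=2):
    """Route input x to the top-k experts by gate score and combine
    their outputs, weighted by renormalized gate probabilities."""
    scores = softmax(gate_w @ x)           # one score per expert
    top = np.argsort(scores)[-k:]          # indices of the k best experts
    weights = scores[top] / scores[top].sum()
    # Only the selected experts execute: this sparsity is why an MoE
    # with huge total parameters can still be cheap per token.
    return sum(w * experts[i](x) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 4
x = rng.normal(size=d)
gate_w = rng.normal(size=(n_experts, d))
# Each "expert" here is just a small linear layer.
expert_ws = [rng.normal(size=(d, d)) for _ in range(n_experts)]
experts = [lambda v, W=W: W @ v for W in expert_ws]

y = moe_forward(x, gate_w, experts, k=2)
print(y.shape)  # (8,)
```

In a real MoE model the experts are feed-forward blocks inside each transformer layer and the router is trained jointly with them; the sketch only shows the top-k dispatch-and-combine step.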
SWE-bench
A benchmark designed to evaluate an AI's ability to fix real bugs found in actual open-source software projects on GitHub. It assesses the practical coding skills of AI systems by presenting genuine engineering challenges, rather than theoretical exercises.

Read full story at Hacker News
