Claude Opus 4.7
Coding / Vision / Agentic Three-Axis Upgrade
64.3% on SWE-bench Pro, 3.75 MP vision resolution, new xhigh effort tier, native 1M context — at the same price as Opus 4.6 ($5/$25 per MTok)
Key Highlights
+13% over Opus 4.6 on a 93-task coding benchmark, including 4 tasks neither 4.6 nor Sonnet 4.6 could solve
6.6 points ahead of GPT-5.4 and 10.1 points ahead of Gemini 3.1 Pro on agentic coding
Image long edge up from ~800px to 2576px (3.75 MP), a step-change for computer-use scenarios
$5 input / $25 output per MTok — identical to Opus 4.6, no price increase
Coding Leap: 28 Early Partners Validate
From long-horizon autonomy to complex tool calls, Opus 4.7 moves coding work from 'watch closely' to 'hands-off'
GitHub: +13% on 93 Tasks
13% higher than Opus 4.6 on GitHub's internal 93-task coding benchmark, with 4 tasks neither 4.6 nor Sonnet 4.6 could solve
Cursor: 70% on CursorBench
Opus 4.7 scores 70% on Cursor's internal benchmark, up from Opus 4.6's 58%
Notion: +14% Accuracy, 1/3 Tool Errors
Notion reports +14% accuracy, fewer tokens, tool-call errors reduced to 1/3; first model to pass Notion's implicit-need test
Cognition (Devin): Coherent Hours-Long Work
Opus 4.7 can work coherently for hours without giving up on hard problems
Rakuten: 3x Production Tasks Solved
On Rakuten-SWE-Bench, Opus 4.7 solves 3x the production tasks compared to Opus 4.6
⭐ Imbue: Autonomously Built a Rust TTS
Opus 4.7 built a full Rust TTS engine from scratch — neural net, SIMD kernels, browser demo — and used a speech recognizer to reverse-validate against the Python reference
Vision Capability Breakthrough
Image long edge up from ~800px to 2576px (3.75 MP), 3x over prior Claude models. Send images directly via API, no parameter switch needed
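A quick client-side check can tell you whether a screenshot will fit within these limits before you send it. This is a sketch based only on the headline numbers above (2576px long edge, ~3.75 MP); the API's actual resizing behavior may differ:

```python
MAX_LONG_EDGE = 2576       # stated long-edge cap for Opus 4.7
MAX_PIXELS = 3_750_000     # ~3.75 MP budget

def fits_without_downscale(width: int, height: int) -> bool:
    """True if an image is within the headline Opus 4.7 vision limits."""
    return max(width, height) <= MAX_LONG_EDGE and width * height <= MAX_PIXELS

print(fits_without_downscale(2576, 1440))   # True: ~3.7 MP, long edge at the cap
print(fits_without_downscale(3840, 2160))   # False: a 4K frame exceeds both limits
```

Images that fail the check can be resized locally to keep fine UI detail predictable rather than relying on server-side downscaling.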
Computer-use Agents Reading Dense Screenshots
Higher resolution lets agents read more UI detail in a single view, reducing scroll/re-capture cycles
Complex Chart Data Extraction
Multi-layer nested charts, tables, dashboards — axis labels and fine details become readable
Document OCR & Layout Recognition
PDFs and scans with small text, footnotes, handwritten annotations — extract text and structure in one pass
UI Screenshot Pixel-Level Comparison
Design-to-implementation comparison, UI regression detection, and other pixel-precise scenarios
Newly Launched Capabilities
New Effort Tier: xhigh
A new effort tier between high and max, for a finer-grained trade-off between reasoning depth and latency. Claude Code defaults to xhigh
/ultrareview Deep Code Review
New Claude Code command: an independent review session that runs through changes end-to-end, finding bugs and design issues
Task Budgets (API public beta)
Developers can set token budgets for Claude to self-allocate priorities on long tasks
Auto Mode Extended to Max Users
A classifier evaluates tool calls before execution — safe ones pass through, risky ones get intercepted for Claude to re-plan
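The gating pattern described above can be sketched in a few lines. The rule list and function name here are illustrative, not Claude Code internals:

```python
# Patterns treated as risky; illustrative only, not Claude Code's actual rules.
RISKY_PATTERNS = ("rm -rf", "sudo", "--force", "curl | sh")

def gate_tool_call(command: str) -> str:
    """Return 'allow' for safe shell commands, 'intercept' for risky ones."""
    if any(p in command for p in RISKY_PATTERNS):
        return "intercept"   # hand back to the model to re-plan
    return "allow"           # execute without pausing for confirmation

print(gate_tool_call("ls -la"))         # allow
print(gate_tool_call("sudo rm -rf /"))  # intercept
```

The real classifier is a model, not a pattern list, but the contract is the same: safe calls run unattended, risky ones get routed back for re-planning.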
Migration Guide (⭐ Key)
Upgrading from Opus 4.6 to Opus 4.7 is a drop-in replacement (change the model ID to claude-opus-4-7), but there are four things to plan for
1. Tokenizer Switch
The new tokenizer uses ~1.0-1.35x the tokens for the same input (content-dependent). Re-measure your token budgets instead of reusing Opus 4.6 numbers
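As a starting point, you can provision for the worst case of the stated range, then tighten it by measuring your actual content mix (for example with the API's token-counting endpoint). The helper below is a trivial sketch, not an official tool:

```python
WORST_CASE_FACTOR = 1.35   # upper end of the stated ~1.0-1.35x range

def rescale_budget(old_budget: int, factor: float = WORST_CASE_FACTOR) -> int:
    """Provision an Opus 4.6 token budget for the Opus 4.7 tokenizer."""
    return int(round(old_budget * factor))

print(rescale_budget(10_000))  # 13500: a 10k budget from 4.6, sized for 4.7
```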
2. Stricter Instruction Following
Opus 4.7 executes instructions literally, no more 'charitable interpretation'. Old prompts may produce unexpected results; prompts and harnesses need re-tuning
3. Thinking API Migration
`thinking={"type": "enabled", "budget_tokens": N}` is deprecated; use `thinking={"type": "adaptive"}` with the `effort` parameter
4. Clean Up Old Beta Headers
effort-2025-11-24, fine-grained-tool-streaming-2025-05-14, interleaved-thinking-2025-05-14 are now GA — remove these beta headers
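If you assemble the anthropic-beta header programmatically, a small filter keeps the cleanup mechanical. The extra flag in the usage line is just a placeholder for any beta you still need:

```python
# Flags that are GA in Opus 4.7 and no longer need the beta header.
GA_BETAS = {
    "effort-2025-11-24",
    "fine-grained-tool-streaming-2025-05-14",
    "interleaved-thinking-2025-05-14",
}

def strip_ga_betas(beta_header: str) -> str:
    """Drop now-GA flags from a comma-separated anthropic-beta header value."""
    kept = [b.strip() for b in beta_header.split(",") if b.strip() not in GA_BETAS]
    return ",".join(kept)

print(strip_ga_betas("effort-2025-11-24,files-api-2025-04-14"))
```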
```python
# Before (deprecated): fixed thinking budget, Opus 4.6 style
client.messages.create(
    model="claude-opus-4-7",
    thinking={"type": "enabled", "budget_tokens": 10000},
)

# After: adaptive thinking plus the effort parameter
client.messages.create(
    model="claude-opus-4-7",
    thinking={"type": "adaptive"},
    effort="xhigh",  # new in 4.7
)
```
vs GPT-5.4 / Gemini 3.1 Pro
Same-tier flagship comparison (based on Anthropic's published benchmarks)
| Metric | Opus 4.7 | GPT-5.4 | Gemini 3.1 Pro |
|---|---|---|---|
| SWE-bench Pro | 64.3% | 57.7% | 54.2% |
| Input $ / MTok | $5 | See OpenAI | See Google |
| Output $ / MTok | $25 | See OpenAI | See Google |
| Context window | Native 1M | 272K / 1M beta | 1M |
| Max output | 128K tokens | — | — |
Get Opus 4.7 via QCode.cc
Stable API proxy with official pricing, ready to use
Same Price $5/$25
QCode.cc bills at Anthropic's official rates with no multiplier markup
Full Support for New Parameters
Full pass-through of xhigh effort, adaptive thinking, and other new Opus 4.7 parameters
Drop-in Switch 4.6 to 4.7
Change model ID from claude-opus-4-6 to claude-opus-4-7, no other config changes needed
China-Direct with Failover
Multi-node smart routing with circuit breakers, avoiding the instability of calling the official API directly from within China
Try Opus 4.7 Now
Sign up for QCode.cc to get a stable Claude Opus 4.7 API proxy
Related Articles
GPT-5.4 / GPT-5.4 Codex Complete Guide
OpenAI's March 2026 flagship deep-dive and comparison
Claude Agent Teams Collaboration Guide
Multi-Claude parallel collaboration for complex engineering tasks
2026 Agentic Coding Trends
The fundamental shift from conversational assistance to autonomous execution