Why it matters
- Training models specifically for software engineering (rather than adapting general-purpose LLMs for code) could yield a step change in quality.
- Long-context (5M token) codebase understanding is technically challenging and competitively differentiated.
- $145M raised signals investor conviction in the software engineering AI market.
- Research-to-product path: if Magic achieves truly autonomous software engineering, the market impact would be enormous.
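Why is multi-million-token context technically challenging? In a standard transformer, naive self-attention materializes an L x L score matrix, so memory and compute grow quadratically with context length. The back-of-envelope arithmetic below is purely illustrative (Magic's actual architecture is unpublished); it only shows the quadratic blowup of the naive formulation:

```python
def attention_score_matrix_bytes(context_len: int, bytes_per_elem: int = 2) -> int:
    """Memory for one head's naive L x L attention score matrix (fp16 = 2 bytes)."""
    return context_len ** 2 * bytes_per_elem

if __name__ == "__main__":
    # Compare a typical window, a large one, and the 5M figure from the demos.
    for L in (8_000, 128_000, 5_000_000):
        gib = attention_score_matrix_bytes(L) / 2**30
        print(f"{L:>9,} tokens -> {gib:,.1f} GiB per head (fp16 scores)")
```

At 5M tokens the naive score matrix alone would run to tens of terabytes per head, which is why long-context models rely on alternative attention or memory mechanisms rather than the textbook formulation.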
Key capabilities
- LTM-1 model: Long-term memory model with a multi-million-token context window for codebase-level reasoning.
- Codebase-scale context: Read and reason over entire codebases, not just current files.
- Autonomous coding: AI that can receive a task and implement it across a codebase independently.
- Enterprise deployment: Targeted at large engineering organizations with complex codebases.
- Coding agent research: Active research into multi-step autonomous software engineering.
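To gauge whether codebase-scale context matters for your repository, you can roughly estimate its token count and compare it against different context windows. The sketch below is a generic heuristic, not anything Magic ships: it assumes ~4 characters per token (a common rule of thumb; Magic's tokenizer is not public) and compares against a typical ~128K window and the 5M-token figure from the research demos:

```python
import os

# Assumption: ~4 characters per token is a common rough heuristic for
# source code and English text. Magic's actual tokenizer is not public.
CHARS_PER_TOKEN = 4

# Illustrative extension set; extend for your stack.
SOURCE_EXTS = {".py", ".js", ".ts", ".go", ".java", ".c", ".cpp", ".rs", ".rb"}

def estimate_repo_tokens(root: str) -> int:
    """Estimate the total token count of source files under `root`."""
    total_chars = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1] in SOURCE_EXTS:
                try:
                    with open(os.path.join(dirpath, name),
                              encoding="utf-8", errors="ignore") as f:
                        total_chars += len(f.read())
                except OSError:
                    continue  # skip unreadable files
    return total_chars // CHARS_PER_TOKEN

def fits_in_context(tokens: int, window: int) -> bool:
    return tokens <= window

if __name__ == "__main__":
    tokens = 2_000_000  # e.g., a rough estimate for a large monorepo
    for window in (128_000, 5_000_000):
        verdict = "fits in" if fits_in_context(tokens, window) else "exceeds"
        print(f"{tokens:,} tokens {verdict} a {window:,}-token window")
```

A repository that exceeds a typical window forces current tools into retrieval and chunking, whereas a 5M-token window could hold many mid-size codebases whole.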
Technical notes
- Model: LTM-1 and subsequent versions (Magic proprietary)
- Context window: 5M+ tokens (research demos); production varies
- Access: Limited enterprise access; not a self-service product
- API: Enterprise access only
- Funding: $145M raised (Eric Schmidt, Jane Street, others)
- Company: Magic; San Francisco; founded 2022 by Eric Steinberger
Ideal for
- Enterprise engineering organizations interested in early access to long-context AI for large codebase understanding.
- Teams with very large codebases where current tools' context limits are a bottleneck.
- Organizations evaluating next-generation AI coding tools beyond current-generation completion assistants.
Not ideal for
- Individual developers who need a working product today; Magic is not self-serve.
- Small or mid-size codebases where existing tools (Cursor, Copilot) work well.
- Teams looking for proven, stable AI coding tools in production.