About
Magic is a San Francisco-based AI company on a focused mission: build safe AGI by automating AI research and code generation at a level that surpasses human capability. Rather than building a general-purpose assistant, Magic concentrates exclusively on frontier code models, combining large-scale pre-training, domain-specific reinforcement learning, ultra-long context (up to 100 million tokens), and advanced inference-time compute techniques. The company's core thesis is that the most reliable path to safe AGI runs through automating the AI research loop itself: using AI to write better AI.

To that end, Magic has deployed thousands of NVIDIA GB200 GPUs and raised $515 million from investors including Sequoia, CapitalG, Nat Friedman, Daniel Gross, Elad Gil, Eric Schmidt, and Jane Street. Its flagship capability is a model with a 100M-token context window, which lets developers and AI systems reason over entire codebases, long research documents, and extended conversation histories without truncation. Magic also publishes an AGI Readiness Policy, a framework for evaluating and mitigating existential risks as capabilities scale.

Magic's primary audience is software engineers, enterprise engineering teams, and AI researchers who need deep, context-aware code generation, automated refactoring, and large-scale software understanding. The company is still growing its team and product surface, positioning itself as infrastructure-level AI for the most demanding coding and research workflows.
Key Features
- 100M-Token Context Window: Supports up to 100 million tokens of context, allowing the model to reason over entire codebases, lengthy documents, and extended histories without losing information.
- Domain-Specific Reinforcement Learning: Fine-tuned with RL techniques targeting software engineering tasks, enabling the model to learn from execution feedback and improve code correctness over time.
- Frontier-Scale Pre-Training: Trained at scale using thousands of NVIDIA GB200 GPUs, giving the model broad and deep knowledge of programming languages, libraries, and software patterns.
- Inference-Time Compute: Applies additional computation at inference time to improve output quality, allowing the model to 'think harder' on complex coding and research problems.
- AGI Safety Framework: Publishes and adheres to an AGI Readiness Policy that monitors, evaluates, and mitigates existential risks as the model's capabilities advance.
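Inference-time compute techniques of the kind described above are commonly implemented as best-of-n sampling: generate several candidate solutions and keep the one that scores highest under a verifier. Magic has not published its specific method, so the sketch below is a generic, hypothetical illustration; the toy `run_tests` scorer and integer "candidates" stand in for generated code and an execution harness.

```python
import random

def run_tests(candidate: int) -> int:
    """Toy verifier: scores a candidate by distance to a hidden target.
    Stands in for running generated code against a unit-test harness,
    where higher scores mean more tests passed."""
    target = 42
    return -abs(candidate - target)

def best_of_n(generate, score, n: int):
    """Best-of-n inference-time compute: spend extra computation by
    sampling n candidates, then return the highest-scoring one."""
    candidates = [generate() for _ in range(n)]
    return max(candidates, key=score)

# Toy "model": a seeded RNG emitting integer guesses in place of code.
rng = random.Random(0)
winner = best_of_n(lambda: rng.randint(0, 100), run_tests, n=32)
```

Raising n trades extra compute for output quality, which is the core idea behind letting a model "think harder" on hard problems at inference time.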
Use Cases
- Automated code generation and refactoring across large, multi-file enterprise codebases using a 100M-token context window
- AI-assisted software engineering where the model handles complex bug fixes, feature implementations, and code reviews autonomously
- AI research automation, using Magic's models to help design, run, and iterate on machine learning experiments faster than human researchers alone
- Whole-codebase search and comprehension for onboarding engineers or performing deep impact analysis on large legacy systems
- Building AI-powered developer tools and coding agents on top of Magic's frontier models via API integration
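Magic has not published a public API specification, so any integration sketch is necessarily hypothetical. The snippet below only illustrates the shape of a long-context request a developer tool might assemble: entire source files inlined into one prompt, relying on an ultra-long context window rather than retrieval or truncation. The endpoint, model name, and field names are all invented for illustration.

```python
import json

# All names below are hypothetical: Magic has no published API spec.
MAGIC_API_URL = "https://api.example.com/v1/completions"  # placeholder endpoint

def build_codebase_request(files: dict[str, str], instruction: str,
                           model: str = "ltm-demo") -> str:
    """Assemble a long-context completion request by inlining entire
    files into the prompt instead of retrieving snippets."""
    context = "\n\n".join(f"# file: {path}\n{src}" for path, src in files.items())
    payload = {
        "model": model,  # hypothetical model identifier
        "prompt": f"{context}\n\n# task: {instruction}",
        "max_tokens": 1024,
    }
    return json.dumps(payload)

request_body = build_codebase_request(
    {"app.py": "def main(): ...", "utils.py": "def helper(): ..."},
    "Refactor helper() and update all call sites.",
)
```

With a 100M-token window, a tool built this way could send a whole repository as context rather than deciding which fragments to include.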
Pros
- Industry-Leading Context Length: The 100M-token context window is among the largest available, making it exceptionally suited for whole-codebase understanding and long-horizon software tasks.
- Research-Grade Infrastructure: Backed by $515M in funding and thousands of GB200 GPUs, Magic has the compute and talent to push state-of-the-art code model performance.
- Safety-First Approach: Proactively publishes an AGI Readiness Policy, demonstrating a commitment to responsible scaling that differentiates it from less safety-conscious competitors.
Cons
- Limited Public Product Access: Magic is primarily a research and infrastructure company; its models and tools are not yet broadly available as a self-serve consumer product.
- Enterprise / Research Focus: The scale and pricing of Magic's offerings are likely oriented toward large engineering teams and institutional users, making it less accessible to individual developers or small startups.
Frequently Asked Questions
What makes Magic different from other AI coding companies?
Magic differentiates itself through an ultra-long context window (up to 100M tokens), frontier-scale pre-training on massive GPU clusters, and domain-specific reinforcement learning, enabling it to handle whole-codebase reasoning and complex software engineering tasks that shorter-context models cannot.
Why does a 100M-token context window matter?
It allows the model to ingest and reason over entire large codebases, lengthy documentation, and extended conversation histories in a single pass, dramatically improving coherence and accuracy on long-horizon coding tasks.
How can developers access Magic's models?
Magic is an AI research company still building toward broader product availability. Access is currently limited; interested developers and enterprises can follow the company's blog or reach out via its website for updates.
How does Magic approach AI safety?
Magic publishes an AGI Readiness Policy that defines how it evaluates model capabilities, monitors for existential risks, and implements mitigations as its models grow more powerful. Safety is treated as a core research priority alongside capability.
Who is behind Magic, and how is it funded?
Magic is a San Francisco-based team of engineers and researchers. It has raised $515 million from prominent investors including Sequoia, CapitalG, Nat Friedman, Daniel Gross, Elad Gil, Eric Schmidt, and Jane Street.
