Cursor, the AI coding platform developed by the startup Anysphere, has introduced its first proprietary large language model (LLM), named Composer. The release is part of the broader Cursor 2.0 platform update, which integrates advanced AI capabilities directly into developer workflows. Composer is designed to execute coding tasks with speed and accuracy, and its debut marks a notable moment in the evolution of software development tools.
At its core, Composer is engineered to operate in production-scale environments. Cursor's own engineering team already uses it for daily development work, an internal adoption the company points to as evidence of the model's reliability and its readiness for a range of coding scenarios.
One of the standout features of Composer is its impressive performance metrics. According to Cursor, the model completes most interactions in less than 30 seconds, maintaining a high level of reasoning ability even when navigating large and complex codebases. This rapid response time is particularly crucial in fast-paced development environments where time is of the essence. Moreover, Composer is touted to be four times faster than other similarly intelligent systems, a claim that positions it as a leader in the competitive landscape of AI coding assistants.
The architecture of Composer is built around “agentic” workflows, which involve autonomous coding agents that can plan, write, test, and review code collaboratively. This approach not only streamlines the coding process but also enhances the overall productivity of development teams. Previously, Cursor relied on various leading proprietary LLMs from companies like OpenAI, Anthropic, Google, and xAI to support its “vibe coding” methodology—where AI assists users, including those without formal training in development, in writing or completing code based on natural language instructions. While these options remain available, Composer represents a significant advancement in Cursor’s capabilities.
To evaluate Composer’s performance, Cursor employs an internal benchmarking suite known as “Cursor Bench.” This evaluation tool is derived from real developer agent requests and measures not only the correctness of the code generated but also the model’s adherence to existing abstractions, style conventions, and engineering practices. On this benchmark, Composer achieves frontier-level coding intelligence while generating at an impressive rate of 250 tokens per second. This speed is approximately twice that of leading fast-inference models and four times faster than comparable frontier systems.
Cursor has categorized various models into groups such as “Best Open,” “Fast Frontier,” “Frontier 7/2025,” and “Best Frontier.” In this classification, Composer matches the intelligence of mid-frontier systems while delivering the highest recorded generation speed among all tested classes. This distinction underscores Composer’s unique position in the market, combining both speed and intelligence in a way that few other models can match.
The development of Composer is rooted in advanced machine learning techniques, specifically reinforcement learning (RL) and a mixture-of-experts (MoE) architecture. Research scientist Sasha Rush, who played a pivotal role in the model’s development, explained that the team utilized RL to train a large MoE model specifically tailored for real-world coding tasks. This design allows Composer to operate efficiently at production scale, addressing the complexities inherent in software development.
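The sparse-routing idea behind a mixture-of-experts layer can be sketched in a few lines. This is a toy illustration of top-k expert routing, not Cursor's architecture; the expert functions and router scores are invented.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(token, experts, router_scores, k=2):
    # Keep only the k experts the router scores highest for this token.
    top = sorted(range(len(experts)), key=lambda i: router_scores[i], reverse=True)[:k]
    # Renormalize the router weights over just the selected experts.
    weights = softmax([router_scores[i] for i in top])
    # Mix the chosen experts' outputs; the remaining experts stay inactive,
    # which is what keeps MoE inference cheap at large parameter counts.
    return sum(w * experts[i](token) for w, i in zip(weights, top))
```

With `k=1` this degenerates to picking a single expert; a production MoE model performs the same per-token routing inside each transformer block.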
Unlike many traditional machine learning systems, which often rely on abstracted datasets, Composer was trained on actual software engineering tasks. During its training phase, the model operated within full codebases, utilizing a suite of production tools that included file editing, semantic search, and terminal commands. This hands-on approach enabled Composer to tackle complex engineering problems effectively. Each training iteration involved solving concrete challenges, such as producing code edits, drafting plans, or generating targeted explanations.
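A training episode of this kind reduces to a loop in which the model requests tool calls and receives results back. A minimal dispatch sketch, with hypothetical tool names and signatures standing in for the file-editing, search, and terminal tools mentioned above:

```python
def run_tool(name, args, tools):
    """Execute one model-requested tool call against a registry of
    callables. Names and signatures are illustrative, not Cursor's API."""
    if name not in tools:
        return {"ok": False, "error": f"unknown tool: {name}"}
    try:
        return {"ok": True, "result": tools[name](**args)}
    except Exception as e:  # surface failures to the model instead of crashing
        return {"ok": False, "error": str(e)}

# A toy registry mirroring the tool categories described in the text.
TOOLS = {
    "search": lambda query, corpus: [s for s in corpus if query in s],
    "edit_file": lambda path, content: f"wrote {len(content)} bytes to {path}",
}
```

Returning errors as data rather than raising lets the model observe a failed call and try a different action, which is what an agentic loop needs.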
The reinforcement learning loop employed during Composer’s training optimized both correctness and efficiency. The model learned to make effective tool choices, leverage parallelism, and avoid unnecessary or speculative responses. Over time, Composer developed emergent behaviors, such as running unit tests, fixing linter errors, and performing multi-step code searches autonomously. This capability allows Composer to function seamlessly within the same runtime context as the end-user, making it more aligned with real-world coding conditions, including version control, dependency management, and iterative testing.
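The dual objective described here can be illustrated with a toy reward function: correctness (test pass rate) dominates, and a small penalty on extra tool calls nudges the policy away from speculative actions. The weights are invented for illustration, not Cursor's.

```python
def reward(tests_passed, total_tests, tool_calls, max_calls=20, penalty=0.1):
    """Toy RL reward trading off correctness against efficiency.
    A full solve with few tool calls scores near 1.0; wasted calls
    shave the reward, pushing the policy toward targeted, parallel
    tool use rather than unnecessary exploration."""
    correctness = tests_passed / total_tests
    waste = min(tool_calls, max_calls) / max_calls
    return correctness - penalty * waste
```

Because the penalty is capped and small relative to correctness, the policy is never rewarded for skipping work, only for doing it economically.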
Composer’s journey from prototype to production began with an earlier internal model known as Cheetah, which Cursor used to explore low-latency inference for coding tasks. Cheetah served as a foundational step, primarily focused on testing speed. According to Rush, while Cheetah demonstrated impressive speed metrics, Composer has evolved to be “much, much smarter,” significantly enhancing reasoning and task generalization capabilities.
Developers who had the opportunity to work with Cheetah during its early testing phase noted that its speed fundamentally changed their workflow. One user remarked that it was “so fast that I can stay in the loop when working with it.” Composer retains this responsiveness while extending its capabilities to include multi-step coding, refactoring, and testing tasks, thereby offering a more comprehensive solution for developers.
The integration of Composer into the Cursor 2.0 platform marks a significant milestone in the evolution of the coding environment. The updated platform introduces a multi-agent interface that allows up to eight agents to run in parallel, each operating in isolated workspaces using git worktrees or remote machines. Within this system, Composer can serve as one or more of those agents, performing tasks independently or collaboratively. This flexibility enables developers to compare multiple results from concurrent agent runs and select the best output, fostering a more dynamic and efficient coding process.
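The isolation mechanism here is standard git worktrees: each agent gets its own checkout on its own branch while all checkouts share one object store. A sketch that builds (without running) the commands for a batch of agents; the paths and branch-naming scheme are invented:

```python
def worktree_commands(repo, agent_ids, base="main"):
    """Return one `git worktree add` invocation per agent. Each agent
    works on a private branch in a private directory, so parallel edits
    never collide, yet every checkout shares the repository's objects."""
    cmds = []
    for aid in agent_ids:
        cmds.append([
            "git", "-C", repo, "worktree", "add",
            "-b", f"agent/{aid}",           # private branch per agent
            f"{repo}/.worktrees/{aid}",     # isolated working directory
            base,                           # start every agent from the same commit
        ])
    return cmds
```

Each command list can be executed with `subprocess.run(cmd, check=True)`, and a finished agent's workspace is discarded with `git worktree remove`.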
Cursor 2.0 also includes several supporting features that enhance Composer’s effectiveness. The In-Editor Browser allows agents to run and test their code directly within the integrated development environment (IDE), forwarding Document Object Model (DOM) information to the model. Improved code review tools aggregate diffs across multiple files for faster inspection of model-generated changes, while sandboxed terminals isolate agent-run shell commands for secure local execution. Additionally, a voice mode feature adds speech-to-text controls for initiating or managing agent sessions, further streamlining the user experience.
From an infrastructure perspective, Cursor has made significant advancements to support Composer’s training at scale. The company built a custom reinforcement learning infrastructure that combines PyTorch and Ray for asynchronous training across thousands of NVIDIA GPUs. This setup includes specialized MXFP8 MoE kernels and hybrid sharded data parallelism, enabling large-scale model updates with minimal communication overhead. As a result, Cursor can train models natively at low precision without requiring post-training quantization, which improves both inference speed and efficiency.
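The asynchronous pattern described here, in which many environments produce rollouts in parallel while the trainer consumes them as they complete, can be sketched with a stdlib thread pool standing in for Ray actors. The rollout payload is a placeholder.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def rollout(env_id):
    # Placeholder for one agent episode in a sandboxed environment:
    # in the real system this would run the policy against a codebase
    # and return a trajectory for the RL update.
    return {"env": env_id, "steps": env_id % 3 + 1}

def collect_rollouts(num_envs, workers=8):
    """Launch every environment up front and gather results in
    completion order, so the trainer never idles waiting for the
    slowest environment -- the core of an asynchronous RL pipeline."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(rollout, i) for i in range(num_envs)]
        return [f.result() for f in as_completed(futures)]
```

At Cursor's scale the workers are remote GPU machines rather than threads, but the completion-order consumption is the same idea.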
Composer’s training relied on hundreds of thousands of concurrent sandboxed environments—each representing a self-contained coding workspace—running in the cloud. The company adapted its Background Agents infrastructure to dynamically schedule these virtual machines, accommodating the bursty nature of large reinforcement learning runs. This robust infrastructure ensures that Composer can deliver consistent performance, even under demanding conditions.
For enterprise users, Composer’s performance improvements are complemented by infrastructure-level changes across Cursor’s code intelligence stack. The company has optimized its language servers (which implement the Language Server Protocol, LSP) for faster diagnostics and navigation, particularly in Python and TypeScript projects. These enhancements reduce latency when Composer interacts with large repositories or generates multi-file updates, making it a valuable tool for organizations that rely on extensive codebases.
Enterprise users also benefit from administrative control over Composer and other agents through team rules, audit logs, and sandbox enforcement. Cursor’s Teams and Enterprise tiers support pooled model usage, SAML/OIDC authentication, and analytics for monitoring agent performance across organizations. This level of oversight is crucial for businesses looking to integrate AI-driven coding solutions into their existing workflows while maintaining compliance and security standards.
Pricing for individual users ranges from a free tier (Hobby) to an Ultra tier priced at $200 per month, with expanded usage limits for Pro+ and Ultra subscribers. For business users, pricing starts at $40 per user per month for Teams, with enterprise contracts offering custom usage and compliance options tailored to specific organizational needs.
The introduction of Composer signifies more than just a new model; it represents a shift in how AI can assist in software development. Unlike passive suggestion tools such as GitHub Copilot, which primarily offer recommendations based on user input, Composer is designed for active, collaborative coding. It operates within the same environment as developers, integrating, testing, and improving code in real time. This approach fosters a more interactive relationship between human developers and AI agents, paving the way for a future where both work side by side in the same workspace.
As the landscape of AI coding tools continues to evolve, Composer stands out for its emphasis on speed, reinforcement learning, and integration with live coding workflows. By focusing on real-world applications and the realities of production coding environments, it offers a glimpse of a future in which human developers and AI collaborate more closely than ever before.
The launch of Composer is thus a notable milestone in the ongoing evolution of AI-assisted programming. As organizations increasingly look to AI to boost productivity and streamline workflows, Composer provides a solution built for the demands of modern software development, with implications that extend beyond coding assistance toward a new era of collaboration between humans and machines in how software is created and maintained.
