Thinking Machines Launches Tinker, a Powerful API for Accessible Distributed LLM Fine-Tuning

Thinking Machines, the AI startup founded by former OpenAI CTO Mira Murati, has officially launched its first product, Tinker, a Python-based API for fine-tuning large language models (LLMs). The tool aims to make LLM fine-tuning more accessible, customizable, and scalable for developers and researchers alike. Currently in private beta, Tinker lets users retain control over their training pipelines while offloading the burdens of distributed computing and infrastructure management.

The launch of Tinker marks a significant milestone for Thinking Machines, which has garnered substantial attention and investment since its inception. Earlier this year, the company raised an impressive $2 billion from prominent investors including Andreessen Horowitz (a16z), NVIDIA, and Accel. This funding positions Thinking Machines as one of the most well-capitalized independent AI startups, reflecting the growing demand for tools that facilitate open and customizable AI development.

Mira Murati, in her announcement on the social media platform X, emphasized Tinker’s mission: “Tinker brings frontier tools to researchers, offering clean abstractions for writing experiments and training pipelines while handling distributed training complexity. It enables novel research, custom models, and solid baselines.” This statement encapsulates the essence of Tinker, which is designed not just as another drag-and-drop interface or black-box tuning service, but as a developer-centric API that empowers users to dive deep into the intricacies of model training.

At its core, Tinker provides a low-level yet user-friendly API that grants researchers granular control over various aspects of the training process, including loss functions, training loops, and data workflows—all implemented in standard Python code. This level of control is crucial for researchers who wish to experiment with different algorithms and methodologies without being constrained by the limitations of existing proprietary tools.
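To make that concrete, the sketch below shows the kind of loop this level of control implies: the loss function, the training loop, and the data handling all live in ordinary Python. It is a generic illustration with toy stand-ins, not Tinker code; a service like Tinker handles the distributed execution behind comparable user-written loops.

```python
import torch
import torch.nn as nn

# Generic illustration only: a toy model and random token batches stand in
# for an LLM and a real dataset. The point is that the loss, the loop, and
# the data handling stay in plain Python under the researcher's control.
vocab, dim = 1000, 64
model = nn.Sequential(nn.Embedding(vocab, dim), nn.Linear(dim, vocab))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def token_loss(logits, targets, weights):
    # User-defined loss: per-token weighting makes it easy to mask prompt
    # tokens or implement preference-style objectives.
    per_token = nn.functional.cross_entropy(
        logits.reshape(-1, vocab), targets.reshape(-1), reduction="none")
    return (per_token * weights.reshape(-1)).mean()

for step in range(10):                          # explicit, hand-written loop
    tokens = torch.randint(0, vocab, (8, 32))   # batch of 8 toy sequences
    targets = torch.randint(0, vocab, (8, 32))
    weights = torch.ones(8, 32)                 # e.g. zero out prompt positions
    loss = token_loss(model(tokens), targets, weights)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```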

One of the standout features of Tinker is its support for both small and large open-weight models, including Mixture-of-Experts (MoE) architectures such as Qwen3-235B-A22B. This flexibility allows users to tailor their training processes to their specific needs, whether they are working with smaller datasets or tackling complex tasks that require extensive computational resources.
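For readers unfamiliar with the term, Mixture-of-Experts means a router activates only a few expert sub-networks per token; the "A22B" suffix is generally read as roughly 22B parameters active per token out of 235B total. The toy layer below illustrates that routing idea only; it is not Qwen's or Tinker's implementation.

```python
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    """Toy Mixture-of-Experts layer: a router sends each token to its top-k
    experts, so only a small fraction of parameters is active per token."""
    def __init__(self, dim=64, n_experts=8, k=2):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_experts))
        self.k = k

    def forward(self, x):                        # x: (tokens, dim)
        gates = self.router(x).softmax(dim=-1)   # routing weights per token
        topv, topi = gates.topk(self.k, dim=-1)  # keep only the top-k experts
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = topi[:, slot] == e        # tokens routed to expert e
                if mask.any():
                    out[mask] += topv[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

print(TinyMoE()(torch.randn(4, 64)).shape)       # torch.Size([4, 64])
```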

Additionally, Tinker uses LoRA (Low-Rank Adaptation) for tuning, which lets multiple training jobs share a common compute pool: the large base model stays frozen and shared, while each job trains only a small set of adapter weights. This keeps costs down, making it easier for researchers to run experiments without incurring prohibitive expenses. An open-source companion library, the Tinker Cookbook, further enriches the user experience by providing implementations of various post-training methods, facilitating knowledge sharing within the research community.
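As a rough illustration of why that sharing is cheap, here is a minimal, generic LoRA-style layer (a sketch, not Tinker's implementation): the base weights are frozen, so many jobs can reuse the same base model while each trains only its own small adapter matrices.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA-style layer: frozen base weights plus a small trainable
    low-rank adapter (A, B). Only A and B differ between fine-tuning jobs."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # base stays frozen and shared
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # Base projection plus the low-rank update B @ A, scaled by alpha/rank.
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

layer = LoRALinear(nn.Linear(512, 512))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable:,} of {total:,} parameters")  # adapters are ~3%
```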

Before its public debut, Tinker was already being utilized by several prestigious research labs, showcasing its practical applications across diverse domains. Early adopters include teams from Princeton, Stanford, Berkeley, and Redwood Research, each leveraging Tinker to address unique model training challenges.

For instance, Princeton’s Goedel Team successfully fine-tuned LLMs for formal theorem proving using Tinker and LoRA with only 20% of the data typically required. Their model achieved an impressive 88.1% pass rate on the MiniF2F benchmark, surpassing the performance of larger closed models. This achievement underscores Tinker’s potential to democratize access to advanced AI capabilities, allowing smaller teams to compete with larger institutions.

Similarly, the Rotskoff Lab at Stanford employed Tinker to train chemical reasoning models, resulting in a remarkable increase in accuracy on IUPAC-to-formula conversion tasks—from 15% to 50%. Researchers noted that such improvements would have been unattainable without the robust infrastructure support provided by Tinker.

Berkeley’s SkyRL team utilized Tinker to run custom multi-agent reinforcement learning loops, demonstrating the API’s versatility in handling complex training scenarios. Meanwhile, Redwood Research leveraged Tinker to RL-train the Qwen3-32B model on long-context AI control tasks, with researcher Eric Gan highlighting that without Tinker, he might not have pursued the project due to the challenges of scaling multi-node training.

These real-world use cases illustrate Tinker’s adaptability, supporting both classical supervised fine-tuning and highly experimental reinforcement learning pipelines across vastly different domains. The feedback from early users has been overwhelmingly positive, with many praising Tinker’s clean API design and its ability to handle reinforcement learning-specific scenarios such as parallel inference and checkpoint sampling.
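For the reinforcement learning side, the sketch below shows the generic sample-score-update cycle that such pipelines interleave with inference. It is a toy REINFORCE-style loop built from stand-in components (a random "policy", a placeholder reward function), not Tinker's API or any of these labs' actual setups.

```python
import torch
import torch.nn as nn

# Toy REINFORCE-style loop (an illustrative sketch, not Tinker code): sample a
# completion from the current policy, score it with a reward function, then
# nudge the policy toward higher-reward samples.
vocab, dim = 1000, 64
policy = nn.Sequential(nn.Embedding(vocab, dim), nn.Linear(dim, vocab))
optimizer = torch.optim.AdamW(policy.parameters(), lr=1e-5)

def reward_fn(completion):
    # Placeholder reward; a real pipeline would call a verifier, a judge
    # model, or an environment here.
    return float((completion % 2 == 0).float().mean())

for step in range(5):
    prompt = torch.randint(0, vocab, (1, 16))
    dist = torch.distributions.Categorical(logits=policy(prompt))  # inference
    completion = dist.sample()
    reward = reward_fn(completion)
    # Policy gradient: log-probability of the sampled tokens, weighted by reward.
    loss = -(dist.log_prob(completion).sum() * reward)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```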

The announcement of Tinker has drawn immediate reactions from the AI research community, with notable figures expressing enthusiasm for the product. Andrej Karpathy, an OpenAI co-founder and now head of Eureka Labs, commended Tinker’s design trade-offs. He remarked, “Compared to the more common and existing paradigm of ‘upload your data, we’ll post-train your LLM,’ this is, in my opinion, a more clever place to slice up the complexity of post-training.” His endorsement highlights Tinker’s potential to shift how researchers approach model training.

John Schulman, another OpenAI co-founder and now chief scientist at Thinking Machines, described Tinker as “the infrastructure I’ve always wanted.” He quoted the British philosopher and mathematician Alfred North Whitehead: “Civilization advances by extending the number of important operations which we can perform without thinking of them.” The sentiment resonates with Tinker’s goal of simplifying the complexities of distributed training so researchers can focus on innovation.

Tinker is currently available in private beta, with a waitlist sign-up open to developers and research teams. During the beta phase, usage is free of charge, with a usage-based pricing model to be introduced in the coming weeks. This approach allows organizations to explore Tinker’s capabilities without upfront costs, fostering a collaborative environment for experimentation and development.

For those interested in deeper integration or dedicated support, Thinking Machines encourages inquiries through its website, signaling its commitment to fostering strong relationships with its user base. The company’s emphasis on open science, safety, and multimodal collaboration aligns with its vision of creating a more inclusive and adaptable AI ecosystem.

Thinking Machines distinguishes itself from competitors by focusing on multimodal AI systems that collaborate with users through natural communication rather than striving for fully autonomous agents. This approach reflects a broader trend in the AI landscape, where the emphasis is shifting towards human-AI collaboration and the development of systems that can adapt to individual user needs.

In addition to its product offerings, Thinking Machines has published several research papers on open-source techniques, contributing to the collective knowledge of the machine learning and AI community. This commitment to openness and transparency sets the company apart in an increasingly competitive market, where many organizations are vying for dominance in the AI space.

As competition for developer mindshare intensifies, Thinking Machines is poised to meet the challenge head-on with Tinker. The API’s technical clarity, solid documentation, and developer-centric design position it as a valuable tool for researchers and developers looking to push the boundaries of what is possible with LLMs.

In conclusion, Tinker represents a significant advancement in the field of AI, offering a powerful and accessible solution for distributed LLM fine-tuning. With its developer-centric approach, robust feature set, and commitment to open science, Tinker is set to empower researchers and developers to innovate and explore new frontiers in AI. As the private beta progresses and the platform evolves, the potential for Tinker to reshape the landscape of AI development is immense, paving the way for a future where advanced AI capabilities are within reach for all.