March 10, 2026: A multiyear agreement between Nvidia and Thinking Machines Lab, announced this week, will give the startup access to at least one gigawatt of the chipmaker’s next-generation Vera Rubin systems and includes a “significant” financial investment from the supplier. Deployment of the hardware is targeted for early next year.
The companies describe the arrangement as a strategic, multiyear partnership under which the hardware supplier will provide fleet-scale access to its rack-scale Vera Rubin platform to support the training and operation of the frontier models and platforms the startup plans to develop. The announcement frames the contribution as a guaranteed allocation of compute rather than a narrow product sale; the supplier’s blog post gives the gigawatt figure and a deployment timetable.
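For scale, here is a rough back-of-envelope sketch of what a one-gigawatt commitment could imply in hardware terms. All per-rack power, accelerator-count and overhead figures below are illustrative assumptions for the calculation only; neither company has disclosed rack configurations or deployment specifics.

```python
# Back-of-envelope: translating a power commitment into approximate hardware counts.
# Every planning parameter here is an illustrative assumption, not a disclosed term.

COMMITTED_POWER_W = 1e9          # "at least one gigawatt", per the announcement

ASSUMED_PUE = 1.2                # assumed datacentre overhead (cooling, power delivery losses)
ASSUMED_RACK_POWER_W = 150_000   # assumed all-in draw per rack-scale system
ASSUMED_GPUS_PER_RACK = 144      # assumed accelerators per rack

usable_it_power_w = COMMITTED_POWER_W / ASSUMED_PUE
racks = usable_it_power_w / ASSUMED_RACK_POWER_W
accelerators = racks * ASSUMED_GPUS_PER_RACK

print(f"Usable IT power:  {usable_it_power_w / 1e6:,.0f} MW")
print(f"Approx. racks:    {racks:,.0f}")
print(f"Approx. GPUs:     {accelerators:,.0f}")
```

Under these assumptions the commitment works out to roughly 5,500 racks and 800,000 accelerators; the actual totals depend entirely on undisclosed system configurations and how the rollout is staged.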
Media outlets reporting on the same announcement cite a “significant investment” but say financial and contractual terms were not publicly disclosed by the companies. Industry coverage places the agreement among the largest single-partner compute commitments to a private AI lab in recent memory.
Thinking Machines Lab’s background and why the deal matters
Thinking Machines Lab launched publicly in 2025 under founder and CEO Mira Murati, who previously held senior technical roles at OpenAI. Since launch, the company has rolled out an initial product for model customisation, rapidly expanded its headcount, and attracted strong investor interest. Major outlets reported that the startup has raised $2 billion since its February 2025 founding from investors including Andreessen Horowitz, Accel, Nvidia, and rival chipmaker AMD’s venture arm.
The startup has more recently been seeking a new funding round that could value it at tens of billions of dollars, sources told Reuters.
For a young lab, guaranteed access to fleet-scale, next-generation accelerators is strategic: it shortens the lead time to train large models, reduces some procurement and supply-chain risk, and provides a technical partnership that can influence system software and optimisation around the supplier’s architectures. Those are practical advantages for a team that says it plans to build “frontier” models rather than consumer tools alone.
From the supplier’s perspective, the arrangement deepens commercial ties with a company led by a high-profile founder and a senior engineering team. That offers three concrete returns: (1) long-term revenue and utilisation for expensive hardware pipelines, (2) reference deployments and joint engineering work that can improve hardware-software stack performance, and (3) an expanded ecosystem of customers who build on the supplier’s architecture. Financially, the supplier has increasingly combined equipment sales with minority investments in strategic customers; observers note this amplifies commercial alignment but also raises questions about dependence and competition.
The agreement between Thinking Machines Lab and Nvidia underscores two broader trends in the AI infrastructure market. First, frontier model work continues to require a scale of bespoke compute that favours deep relationships with a small set of accelerator suppliers. Second, leading suppliers are increasingly pairing capital and guaranteed capacity with strategic customers, effectively blurring the lines between upstream manufacturing, customer financing and product partnership. That can accelerate product development for well-funded startups, but it also concentrates systemic risk (demand shocks, supplier outages) and raises questions about how smaller developers will secure capacity.
What to watch next: whether the hardware supplier follows through on the rollout timetable and how it stages allocations across the coming quarters; any subsequent disclosures around commercial terms, equity stakes or governance arrangements that clarify the financial relationship; and technical evidence of co-engineering, such as published performance numbers, joint papers or public benchmarks that show how the startup’s systems run on the vendor’s stack.
The partnership provides Thinking Machines Lab with a materially accelerated path to large-scale model training and gives the supplier a deeper, higher-visibility customer relationship. Those gains are immediately valuable, but the arrangement also crystallises the market’s movement toward concentrated compute relationships and closer capital ties between suppliers and users, dynamics that will reshape competitive choices and policy questions in the months ahead.