Daytona Raises $24M to Build the Computing Layer for AI Agents
As artificial intelligence rapidly evolves from static models into autonomous agents, a fundamental problem is becoming increasingly difficult to ignore. The cloud infrastructure powering today’s digital world was never designed for agents. It was built for predictable, repeatable workloads that run the same way every time. AI agents, by contrast, are exploratory, stateful, and interruptible. They create code, test hypotheses, branch into multiple paths, pause mid-task, resume later, and often fail before succeeding. This growing mismatch between infrastructure and behavior is now driving a new wave of foundational innovation.
At the center of this shift is Daytona, a company that has just raised $24 million in Series A funding to build what it describes as “a computer for every agent.” The round was led by FirstMark, with participation from Pace Capital, Upfront Ventures, e2vc, Darkmode Ventures, and strategic investors including Datadog and Figma Ventures. The funding signals growing conviction that the next phase of AI progress will depend not just on better models, but on entirely new computing primitives. Daytona’s premise is simple but far-reaching: if AI agents are going to execute real work, they need infrastructure built for their unique way of operating.
Why the Cloud Breaks Down for AI Agents
Traditional cloud platforms assume workloads are stateless, immutable, and designed for production environments. Code is deployed, runs in controlled conditions, and is expected to behave consistently. This model works well for web services and enterprise applications, but it begins to fracture when applied to autonomous agents.
Agents behave more like researchers than servers. They spin up environments on demand, modify code continuously, explore multiple approaches in parallel, and require the ability to pause, rewind, or fork their execution mid-task. They also generate large amounts of untrusted code, often written by language models in real time.
Running this kind of behavior inside conventional cloud environments introduces serious challenges. Shared infrastructure increases security risk. Long-lived exploratory processes are difficult to manage. Snapshotting and branching execution are cumbersome. As agent adoption grows, these issues become structural rather than incidental. Daytona was founded on the belief that agents need their own computing abstraction.
Executing AI-Generated Code Safely Is Becoming a Core Problem
One of the most immediate challenges in the agentic era is execution. AI systems can now generate functional code with minimal human input, but running that code safely is a different matter. Enterprises cannot simply execute AI-written programs in shared production environments without risking data leaks, system instability, or security breaches.
Daytona addresses this problem by providing fully isolated, programmatic sandboxes where AI-generated code can run without exposing the broader system. Each environment is created instantly, isolated by default, and governed through APIs that give developers precise control over execution. This approach treats AI-generated code as untrusted by default, a stance that is increasingly necessary as code generation scales across organizations. Rather than relying on static review processes, Daytona enables dynamic execution with built-in safety boundaries.
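The core stance here, treating AI-generated code as untrusted by default, can be illustrated with a deliberately minimal sketch. This toy example merely runs generated code in a separate process with a time limit and a throwaway working directory; it is not Daytona's API, and real agent-native isolation of the kind described above requires dedicated sandboxes with their own filesystem, network, and resource boundaries.

```python
import subprocess
import sys
import tempfile

def run_untrusted(code: str, timeout: float = 5.0) -> subprocess.CompletedProcess:
    """Run generated code in a separate process with basic guardrails.

    Toy illustration only: a subprocess is not a security boundary.
    Platforms like Daytona isolate execution in fully separate
    environments rather than sharing the host with the caller.
    """
    with tempfile.TemporaryDirectory() as workdir:
        return subprocess.run(
            [sys.executable, "-c", code],
            cwd=workdir,           # confine file writes to a throwaway directory
            capture_output=True,   # collect stdout/stderr instead of sharing the terminal
            text=True,
            timeout=timeout,       # kill runaway generated code
        )

result = run_untrusted("print(2 + 2)")
```

Even in this reduced form, the shape matters: the caller gets a structured result back and the host never imports or evaluates the generated code in its own process.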
Sandboxes as Composable Computers
At the heart of Daytona’s platform is a new abstraction: sandboxes as composable computers. Each sandbox is a fully formed computing environment composed on demand.
These environments include CPU, memory, storage, GPU, networking, and an operating system, all provisioned programmatically. They can be paused, resumed, forked, snapshotted, or destroyed at any point during execution. This flexibility mirrors how agents actually work, exploring multiple paths and revisiting earlier states as they reason through complex tasks.
By making these capabilities accessible through APIs, Daytona allows developers to embed agent-native compute directly into their workflows. Instead of forcing agents to adapt to infrastructure constraints, the infrastructure adapts to the agent.
From Tooling to Infrastructure
While Daytona integrates seamlessly with developer workflows through native Git and Language Server Protocol support, the company is positioning itself firmly as infrastructure rather than tooling. Its platform is designed to sit beneath agent frameworks, orchestrators, and applications, providing the execution layer they rely on.
This distinction matters. Tools can be swapped. Infrastructure, once adopted, becomes foundational. As more companies deploy agents into production environments, the need for consistent, secure, and agent-native compute will only intensify.
Daytona’s growing customer base, spanning startups, enterprises, and AI-native teams, suggests that this need is already materializing. For many builders, the platform enables faster experimentation without compromising control or security.
Why Are Enterprises Paying Attention?
Enterprises face a particularly acute version of the agent execution problem. While startups may tolerate risk during early experimentation, large organizations require strict controls around data access, execution environments, and system integrity.
Daytona’s emphasis on isolation, programmatic governance, and reproducibility aligns closely with enterprise requirements. By allowing organizations to define clear boundaries around what agents can execute and where, the platform offers a path toward deploying agents responsibly at scale.
This is especially relevant as AI systems begin to interact with sensitive internal systems, proprietary data, and production infrastructure. Without agent-native compute, many organizations will be forced to limit agent capabilities, slowing adoption.
Daytona’s Series A and What It Enables
The $24 million Series A funding will be used to expand Daytona’s infrastructure platform, deepen enterprise-grade capabilities, and scale support for increasingly complex agent workloads. The company plans to continue investing in its sandbox architecture while building additional primitives tailored to agent behavior.
Investor participation from infrastructure-focused firms and strategic backers reflects a shared belief that the agentic era will require foundational changes to how compute is provisioned and managed. Rather than optimizing existing cloud models, Daytona is attempting to define a new one from first principles.

A New Compute Paradigm Emerging
The rise of AI agents is forcing the technology industry to revisit assumptions that have held for more than a decade. Just as containers and serverless architectures reshaped computing in response to cloud-native applications, agent-native systems are now driving a new wave of infrastructure innovation.
Daytona’s vision suggests a future where compute is no longer designed primarily for humans or static services, but for autonomous systems that explore, reason, and act continuously. In that future, the ability to create, modify, and discard computing environments dynamically becomes a core requirement rather than a niche feature. By focusing on this problem early, Daytona is positioning itself at the foundation of what could become the default execution layer for agent-driven software.
Why Does This Moment Matter?
The agentic era is still in its early stages, but the direction is clear. AI systems are taking on more responsibility, more autonomy, and more real-world impact. Infrastructure that cannot support this shift will quickly become a bottleneck.
Daytona’s bet is that the next generation of computing will be defined by flexibility, isolation, and programmability designed around autonomous behavior. If that bet proves correct, the company may play a role similar to early cloud infrastructure providers, quietly enabling an entire ecosystem to emerge.
Daytona is addressing one of the most fundamental challenges emerging in the age of autonomous AI: execution. As agents become more capable, the systems that run their code must evolve just as quickly. By treating compute as something that can be created, paused, forked, and destroyed on demand, Daytona is helping define what agent-native infrastructure looks like. This shift may prove as important to the future of AI as cloud computing was to the modern internet.

