Spectral Compute Raises $6 Million to Free CUDA Code from GPU Vendor Lock-In
London-based startup Spectral Compute has raised $6 million in seed funding to expand SCALE, its software framework that lets engineers run CUDA applications on non-NVIDIA GPUs, breaking one of the most entrenched barriers in high-performance computing.
The GPU Bottleneck in AI and HPC
Over the last decade, the explosion of artificial intelligence and high-performance computing (HPC) has created an overwhelming dependency on one ecosystem: NVIDIA’s CUDA. While CUDA remains the gold standard for GPU programming, it has also become a constraint, locking research labs, startups and enterprises into one vendor’s hardware and pricing.
As demand for GPUs continues to surge across AI, scientific computing and data analytics, this dependency has become a bottleneck for innovation. Many institutions simply can’t access enough GPUs or diversify their infrastructure due to CUDA’s proprietary nature.
That’s where Spectral Compute steps in.
Meet Spectral Compute and Its SCALE Framework
Founded in London, Spectral Compute is tackling a deeply technical problem: bringing portability to GPU computing. Its flagship framework, SCALE, is a compiler and development toolkit that allows CUDA-based software to run on other hardware platforms (beginning with AMD GPUs) without modification or performance loss.
In essence, SCALE gives developers what CPUs have long enjoyed: “write once, recompile anywhere.” As the company describes it: “Spectral brings the paradigm of CPU programming over to HPC computing platforms: write once, recompile for different hardware.” The SCALE framework includes:
- A compiler capable of translating CUDA into AMD machine code
- CUDA language extensions that maintain compatibility with existing codebases
- A set of open-source libraries that replicate CUDA-X APIs such as cuBLAS and cuSOLVER
This design enables engineers, data scientists and AI researchers to recompile existing CUDA code and run it natively on AMD GPUs, with no intrinsic performance overhead. That means companies can choose the hardware platform that fits their workload and budget, rather than being forced into a single ecosystem.
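To make the “write once, recompile anywhere” idea concrete, here is a minimal sketch: an entirely standard CUDA vector-add program of the kind the article says can be recompiled for AMD GPUs with SCALE’s toolchain. Nothing in the code is SCALE-specific, and the actual compiler invocation and target flags are omitted here, since those belong to SCALE’s own documentation rather than this article.

```cuda
// A plain CUDA vector-add kernel and host program. Per the article's claim,
// unmodified sources like this can be recompiled with SCALE's compiler to
// target AMD GPUs; with NVIDIA's nvcc it builds and runs as usual.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host buffers with known values so the result is easy to check.
    float *h_a = new float[n], *h_b = new float[n], *h_c = new float[n];
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Device buffers and transfers use the standard CUDA runtime API.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes); cudaMalloc(&d_b, bytes); cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Standard triple-chevron launch syntax; no SCALE-specific constructs.
    vector_add<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", h_c[0]);  // expected: 3.0

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    delete[] h_a; delete[] h_b; delete[] h_c;
    return 0;
}
```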

Spectral Compute’s Funding Round and What It Signals
Spectral Compute’s $6 million seed round was led by Costanoa Ventures, with participation from Crucible Ventures and several angel investors. The funds will be used to accelerate SCALE’s development, expand its engineering team and bring the framework to production environments across industries that rely heavily on GPU computing.
In its announcement, the company framed the raise as validation of growing demand for open GPU ecosystems, especially as cloud providers, hyperscalers and research institutions seek more choice amid global hardware shortages. The investment also highlights how developer tools are becoming strategic assets in the AI infrastructure race.
While massive chipmakers dominate hardware, smaller software innovators like Spectral are targeting the layer in between: the compilers and toolchains that determine how and where AI workloads can run.
How SCALE Works: Bridging CUDA and Multi-Vendor Hardware
At its core, SCALE functions as a superset of CUDA, meaning it supports the CUDA programming language developers already use, while adding translation and compatibility layers that map CUDA calls to alternative architectures.
When developers compile code with SCALE, the framework’s compiler generates machine code optimized for AMD GPUs or other supported hardware targets. Because it doesn’t rely on emulation or virtualization, there’s no artificial performance loss, making it a realistic alternative for production workloads.

SCALE’s ecosystem also includes open-source versions of NVIDIA’s proprietary CUDA-X libraries, covering key operations like linear algebra (cuBLAS), solvers (cuSOLVER) and other GPU-accelerated primitives.
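The library side of that claim can be illustrated with another sketch: a routine written against the stock, documented cuBLAS API. The article states that SCALE provides open-source, API-compatible replacements for CUDA-X libraries, so the idea is that code like this would link against those drop-in libraries when built for AMD hardware; the calls below are ordinary cuBLAS and contain nothing SCALE-specific.

```cuda
// Single-precision matrix multiply (SGEMM) via the standard cuBLAS API.
// Under the article's claim, the same source would build and link against
// SCALE's cuBLAS-compatible library on non-NVIDIA targets.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main() {
    const int n = 256;  // square n x n matrices
    std::vector<float> h_a(n * n, 1.0f), h_b(n * n, 2.0f), h_c(n * n, 0.0f);

    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, n * n * sizeof(float));
    cudaMalloc(&d_b, n * n * sizeof(float));
    cudaMalloc(&d_c, n * n * sizeof(float));
    cudaMemcpy(d_a, h_a.data(), n * n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b.data(), n * n * sizeof(float), cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);

    // C = alpha * A * B + beta * C, column-major, no transposes.
    const float alpha = 1.0f, beta = 0.0f;
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                n, n, n, &alpha, d_a, n, d_b, n, &beta, d_c, n);

    cudaMemcpy(h_c.data(), d_c, n * n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("C[0] = %f\n", h_c[0]);  // expected: n * 1.0 * 2.0 = 512.0

    cublasDestroy(handle);
    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    return 0;
}
```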
For developers, this means less rewriting and more portability. For enterprises, it means avoiding vendor lock-in without sacrificing the CUDA ecosystem they’ve already built on.
The company’s long-term goal is to make SCALE the unifying layer across GPU platforms, not only AMD and NVIDIA, but potentially future architectures from Intel, Qualcomm or RISC-V-based accelerators.
Market Implications: From NVIDIA Dependence to Hardware Flexibility
Spectral’s emergence comes at a critical time for the computing world. Global shortages of NVIDIA GPUs have rippled through the AI industry, delaying model training, increasing costs and leaving smaller players at a disadvantage. By making CUDA applications portable, SCALE could unlock massive efficiency gains, allowing cloud providers, research institutions and startups to shift workloads between hardware providers based on availability and cost.
The implications go beyond economics. For governments and universities, it offers sovereign computing flexibility and the ability to run advanced AI workloads on locally available hardware without being bound to a single vendor or export-controlled technology. For the broader ecosystem, it signals the beginning of true software-defined hardware freedom, where the value shifts from silicon to the compilers and frameworks that orchestrate it.
What’s Next for Spectral Compute: Development, Expansion, Ecosystem
With its seed round closed, Spectral Compute is now focused on:
- Optimizing SCALE for broader use cases across AI, HPC and scientific research
- Expanding compatibility beyond AMD to other architectures
- Building a developer community around SCALE-Lang
- Strengthening collaborations with universities and open-source contributors
The company also plans to open-source key parts of SCALE’s library stack, encouraging external developers to contribute new integrations and performance optimizations. While the technology is still early in its journey, Spectral Compute’s momentum positions it at the heart of one of the most urgent conversations in computing: how to scale AI infrastructure sustainably, affordably and independently.
If successful, Spectral could do for GPU programming what Linux once did for operating systems: enable portability, transparency and choice across a previously closed ecosystem.

