Singapore’s Video Rebirth Raises $50 Million to Redefine AI Video for the Studio Era
In a global race to make generative video as believable as film, Singapore-based startup Video Rebirth has entered the spotlight with a bold claim: AI-generated video should no longer look synthetic. The company has raised US $50 million in new funding to build what it calls “studio-grade generative video”, moving beyond consumer-grade text-to-video tools toward a professional production platform built for filmmakers, advertisers and digital storytellers.
The round was backed by a mix of global strategic and financial investors and will fund development of the company’s proprietary Physics Native Attention architecture, a system designed to make generated motion, lighting and texture behave as they would in the real world.
Led by founder Dr Wei Liu, Video Rebirth’s mission is clear: to make AI a creative partner, not a gimmick, in the next era of cinematic storytelling.
From Prompt to Production: Generative Video Grows Up
Over the last two years, generative video has exploded in visibility, from short clips produced by Runway ML or Pika Labs to text-driven previews from OpenAI’s Sora. But as dazzling as these demos are, the industry faces a persistent problem: fidelity.
Today’s models can compose scenes, but they struggle with physics: shadows that slip, hands that glitch, motion that betrays the synthetic. For social-media snippets, that may pass. For advertising, film and broadcast, it’s a dealbreaker.
That’s the opportunity Video Rebirth wants to seize. Its platform promises frame-to-frame consistency, physical realism and camera-grade depth control, features that studios have long demanded but few AI systems can deliver. “Our goal isn’t to make viral clips,” Dr Liu said at the company’s Singapore launch event. “It’s to make production-ready video that meets professional standards of lighting, motion and continuity.”
The Technology: Physics Native Attention and the Pursuit of Realism
At the heart of Video Rebirth’s platform is an architectural breakthrough the team calls Physics Native Attention (PNA). Where typical diffusion or transformer models rely on static spatial attention to estimate pixel relationships, PNA introduces temporal and physical awareness into the rendering process. The model simulates how light interacts with surfaces, how momentum influences movement and how depth changes affect texture and shadow.
In essence, it teaches the network the rules of the real world before it learns to draw it.
This hybrid system merges traditional deep-learning attention layers with physics-based differentiable rendering. The result is video that not only looks real but also obeys the laws of reality: light bounces correctly, gravity behaves predictably and objects don’t morph mid-motion. For creators, this could mean far less post-production cleanup. For industries like virtual production and advertising, it could cut render times and budgets dramatically.
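Video Rebirth has not published the details of Physics Native Attention, so the following is only a toy sketch of the general idea the article describes: biasing attention scores with a physical prior so that the model favors motion consistent with real-world dynamics. The function name, the constant-velocity prior and the `momentum_weight` parameter are all illustrative assumptions, not the company’s actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def physics_biased_attention(q, k, v, positions, dt=1.0, momentum_weight=0.5):
    """Toy attention layer: standard scaled dot-product scores,
    plus a penalty on attending to tokens whose positions are
    inconsistent with simple constant-velocity motion."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)          # ordinary attention scores
    # Estimate per-token velocity and extrapolate each position by dt.
    velocity = np.gradient(positions, axis=0)
    predicted = positions + velocity * dt
    # Distance between where token i "should" be and where token j is.
    dist = np.linalg.norm(predicted[:, None, :] - positions[None, :, :], axis=-1)
    scores = scores - momentum_weight * dist  # physics-informed bias
    return softmax(scores, axis=-1) @ v
```

A production system would learn such priors through differentiable rendering rather than hard-code them, but the sketch shows how physical structure can enter the attention computation itself rather than being applied as a post-processing filter.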
Reimagining the Creative Workflow: From Text Prompts to Full Scenes
Beyond realism, Video Rebirth is chasing control, the missing ingredient in most consumer-grade AI tools. The platform allows users to upload scripts, audio or storyboard elements and map them onto visual sequences. Rather than one-off text prompts, creators can iterate scene by scene, adjusting lighting, shot angle and camera motion through natural-language directives or visual references.
For studios, this means pre-visualization that looks like final footage. For small creators, it’s access to tools once reserved for million-dollar productions. Video Rebirth positions itself not as a replacement for creative teams, but as a co-director: an AI that handles the technical grind while artists focus on vision and emotion.
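The scene-by-scene iteration loop described above can be pictured with a small sketch. Everything here is hypothetical: the `Scene` class, its fields and the `direct` method are invented for illustration and do not reflect Video Rebirth’s actual product API.

```python
from dataclasses import dataclass, field

@dataclass
class Scene:
    """Hypothetical unit of a storyboard that accumulates directives."""
    description: str
    lighting: str = "natural"
    camera: str = "static wide shot"
    notes: list = field(default_factory=list)

    def direct(self, instruction: str) -> "Scene":
        # Record a natural-language directive for the next render pass.
        self.notes.append(instruction)
        return self

# A storyboard is just an ordered list of scenes the creator refines in place.
storyboard = [
    Scene("Rain-soaked street at dusk, neon reflections"),
    Scene("Close-up of protagonist under an awning"),
]
storyboard[0].direct("slow dolly-in").direct("warm key light from the left")
```

The point of the pattern is that direction is cumulative and per-scene, rather than a single throwaway text prompt regenerating the whole clip.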
Why Singapore? And Why Now?
That this innovation is emerging from Singapore is no coincidence. The city-state has become one of Asia’s fastest-rising AI and media-tech hubs, with strong government backing for generative technologies and digital-content innovation.
Video Rebirth benefits from this ecosystem: access to cloud infrastructure, funding incentives, and a growing base of film and animation studios eager to test next-generation workflows. Dr Liu’s team spans research talent from Nanyang Technological University, Tsinghua and major visual-effects studios. Its multicultural foundation reflects the company’s ambition to bridge East and West in creative AI.
Market Potential: The Race for Studio-Grade AI Video
The timing couldn’t be better. The global generative-AI video market is projected to exceed US $2 billion by 2027, driven by demand from advertising, entertainment and e-commerce.
Yet most available tools remain limited in frame length, consistency or IP safety. Production-ready AI video remains a largely unsolved challenge: one where authenticity, control and resolution all collide. Video Rebirth’s focus on studio-grade fidelity and IP compliance positions it squarely in the professional tier, competing less with consumer apps and more with specialized virtual-production pipelines. If successful, it could become the “Adobe of generative video”, integrating AI into the creative process without sacrificing quality or control.
$50 Million Funding and the Roadmap Ahead
The newly raised US $50 million will fuel several initiatives:
- Building Version 1.0 of the Video Rebirth Studio platform (launching December 2025).
- Scaling its proprietary model architecture across multi-GPU clusters for higher-resolution outputs.
- Expanding its Singapore R&D hub and establishing partnerships with regional film and advertising studios.
The company is also exploring API access for developers, allowing creative-software vendors to embed its generation engine into existing production tools.
Challenges: Copyright, Dataset Ethics and the Human Factor
For all its promise, generative video remains fraught with complexity. Questions about data provenance, content ownership and deepfake misuse loom large. Video Rebirth says it is addressing these concerns head-on: the company trains on licensed or synthetic datasets and includes watermarking and traceability layers to verify authenticity.
Still, scaling such safeguards globally across creative industries with differing standards will require transparency and trust. And then there’s the cultural hurdle: convincing filmmakers that an algorithm can serve their art rather than dilute it.
The Future of Generative Video: From Novelty to Necessity
If text transformed how we write and image models redefined how we illustrate, video is the final frontier: the medium that blends narrative, motion and emotion. Video Rebirth’s bet is that within a few years, AI-assisted production will no longer be optional. Studios will expect it, advertisers will demand it and audiences won’t notice the difference.
The company’s north star is a world where generative video isn’t just realistic: it’s reliable, ethical and artist-driven. Whether it can reach that destination depends on execution. But the ambition is unmistakable: to make Singapore a creative-AI powerhouse and give the global content industry a new, physics-aware visual language.

