Lightricks bets on open-source AI video to challenge Big Tech
Lightricks, the Israeli company behind the viral photo-editing app Facetune, is launching an ambitious effort to shake up the generative AI landscape. Today, the company announced the release of LTX Video (LTXV), an open-source AI model capable of generating five seconds of high-quality video in just four seconds. By making its video model freely available, Lightricks is taking direct aim at the growing dominance of proprietary AI systems from tech giants like OpenAI, Adobe and Google.
“We believe foundational models are going to be a commodity, and you can’t build an actual business around foundational models,” said Zeev Farbman, co-founder and CEO of Lightricks, in an exclusive interview with VentureBeat. “If startups want to have a serious chance to compete, the technology needs to be open, and you want to make sure that people in the top universities across the world have access to your model and add capabilities on top of it.”
With real-time processing, scalability for long-form video, and a compact architecture that runs efficiently even on consumer-grade hardware, LTXV is poised to make professional-grade generative video technology accessible to a broader audience—an approach that could disrupt the industry’s status quo.
How Lightricks weaponizes open source to challenge AI giants
Lightricks’ decision to release LTXV as open source is a calculated gamble designed to differentiate the company in an increasingly crowded generative AI market. With its two billion parameters, the model is designed to run efficiently on widely available GPUs, such as the Nvidia RTX 4090, while maintaining high visual fidelity and motion consistency.
This move comes at a time when many leading AI models—from OpenAI’s DALL-E to Google’s Imagen—are locked behind APIs, requiring developers to pay for access. Lightricks, by contrast, is betting that openness will foster innovation and adoption.
Farbman compared LTXV’s launch to Meta’s release of its open-source Llama language models, which quickly gained traction in the AI community and helped Meta establish itself in a space dominated by OpenAI’s ChatGPT. “The business rationale is that if the community adopts it, if people in academia adopt it, we as a company are going to benefit a ton from it,” Farbman said.
Unlike Meta, which controls the infrastructure its models run on, Lightricks is focusing solely on the model itself, working with platforms like Hugging Face to make it accessible. “We’re not going to make any money out of this model at the moment,” Farbman emphasized. “Some people are going to deploy it locally on their hardware, like a gaming PC. It’s all about adoption.”
Lightning-fast AI video: Breaking speed records on consumer hardware
LTXV’s standout feature is its speed. The model can generate five seconds of video (121 frames at 768×512 resolution, or roughly 24 frames per second) in just four seconds on Nvidia’s H100 GPUs. Even on consumer-grade hardware, such as the RTX 4090, LTXV delivers near-real-time performance, making it one of the fastest models of its kind.
This speed is achieved without compromising quality. The model’s Diffusion Transformer architecture ensures smooth motion and structural consistency between frames, addressing a key limitation of earlier video-generation models. For smaller studios, independent creators, and researchers, the ability to iterate quickly and generate high-quality results on affordable hardware is a game-changer.
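For readers curious what a “Diffusion Transformer” actually does, the general recipe can be sketched in a few lines of PyTorch: video is encoded into a grid of latent patches spanning both space and time, and a transformer repeatedly predicts and removes noise from the whole grid at once, so every frame can attend to every other frame. The sketch below illustrates only that generic pattern; it is not Lightricks’ architecture or code, and every dimension, layer count and update rule in it is an invented placeholder.

```python
# Generic illustration of the diffusion-transformer (DiT) idea behind models like LTXV.
# Not Lightricks' code: all shapes, sizes and the update rule are made-up placeholders.
import torch
import torch.nn as nn

class TinyVideoDiT(nn.Module):
    def __init__(self, patch_dim=64, model_dim=256, num_layers=4, num_heads=4):
        super().__init__()
        self.in_proj = nn.Linear(patch_dim, model_dim)
        self.time_embed = nn.Linear(1, model_dim)            # conditions on the noise level
        layer = nn.TransformerEncoderLayer(model_dim, num_heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers)
        self.out_proj = nn.Linear(model_dim, patch_dim)      # predicts the noise per patch

    def forward(self, noisy_patches, t):
        # noisy_patches: (batch, num_patches, patch_dim); patches span space *and* time,
        # so attention links every frame to every other frame -> consistent motion.
        h = self.in_proj(noisy_patches) + self.time_embed(t.view(-1, 1, 1).float())
        h = self.blocks(h)
        return self.out_proj(h)

# Toy denoising loop: start from pure noise and iteratively remove the predicted noise.
model = TinyVideoDiT()
x = torch.randn(1, 121 * 6, 64)    # e.g. 121 frames x 6 spatial patches each (illustrative)
steps = 20
for i in reversed(range(steps)):
    t = torch.tensor([i / steps])
    with torch.no_grad():
        predicted_noise = model(x, t)
    x = x - predicted_noise / steps  # crude Euler-style step, for illustration only
```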
“When you’re waiting a couple of minutes to get a result, it’s a terrible user experience,” Farbman said. “But once you’re getting feedback quickly, you can experiment and iterate faster. You develop a mental model of what the system can do, and that unlocks creativity.”
Lightricks has also designed LTXV to support longer-form video production, offering creators greater flexibility and control. This scalability, combined with its rapid processing times, opens up new possibilities for industries ranging from gaming to e-commerce.
In gaming, for example, LTXV could be used to upscale graphics in older games, transforming them into visually stunning experiences. In e-commerce, the model’s speed and efficiency could enable businesses to create thousands of ad variations for targeted A/B testing.
“Imagine casting an actor—real or virtual—and tweaking the visuals in real time to find the best creative for a specific audience,” Farbman said.
From photo app to AI powerhouse: Lightricks’ bold market play
With LTXV, Lightricks is positioning itself as a disruptor in an industry increasingly dominated by a handful of tech giants. This is a bold move for a company that started as a mobile app maker and is best known for Facetune, a consumer photo-editing app that became a global hit.
Lightricks has since expanded its offerings, acquiring the Chicago-based influencer marketing platform Popular Pays and launching LTX Studio, an AI-driven storytelling platform aimed at professional creators. The integration of LTXV into LTX Studio is expected to enhance the platform’s capabilities, allowing users to generate longer, more dynamic videos with greater speed and precision.
But Lightricks faces significant challenges. Competing against industry heavyweights like Adobe and Autodesk, which have deeper pockets and established user bases, won’t be easy. Adobe, for example, has already integrated generative AI into its Creative Cloud suite, giving it a natural advantage among professional users.
Farbman acknowledges the risks but believes that open-source innovation is the only viable path forward for smaller players. “If you want to have a fighting chance as a startup versus the giants, you need to ensure the technology is open and adopted by academia and the broader community,” he said.
Why open source could win the AI video generation race
The release of LTXV also highlights a growing tension in the AI industry between open-source and proprietary approaches. While closed models offer companies tighter control and monetization opportunities, they risk alienating developers and researchers who lack access to cutting-edge tools.
“Part of what’s going on at the moment is that diffusion models are becoming an alternative paradigm to classical ways of doing things in computer graphics,” Farbman explained. “But if you actually want to build alternatives, APIs are definitely not enough. You need to give people—academia, industry, enthusiasts—models to tinker with and create amazing new ideas.”
Lightricks plans to release LTXV on both GitHub and Hugging Face, with an initial “community preview” phase to allow for testing and feedback. The model will eventually be released under an OpenRAIL license, ensuring that derivatives remain open for academic and commercial use.
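Once the weights are live, pulling them down should look much like working with any other open model on the Hugging Face hub. The snippet below is a minimal sketch that assumes LTXV ships with a standard diffusers text-to-video pipeline; the repository id, parameter values and output settings are illustrative assumptions rather than confirmed details of the release.

```python
# Minimal sketch of loading open LTXV weights via Hugging Face diffusers.
# The repo id "Lightricks/LTX-Video" and all generation settings are assumptions.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "Lightricks/LTX-Video",          # assumed hub repository id
    torch_dtype=torch.bfloat16,      # reduced precision to fit consumer GPUs
)
pipe.to("cuda")                      # e.g. an RTX 4090, per the claims above

# Generate a short clip; frame count and resolution mirror the figures cited earlier.
frames = pipe(
    prompt="A drone shot gliding over a coastline at sunset",
    width=768,
    height=512,
    num_frames=121,
).frames[0]

export_to_video(frames, "ltxv_sample.mp4", fps=24)
```

On weaker GPUs, lowering the frame count or resolution would be the obvious first lever, which is exactly the kind of tinkering an open release makes possible.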
For Lightricks, the stakes are high. The company is betting not only on the success of LTXV but also on the broader adoption of open AI models in a field increasingly dominated by closed ecosystems.
“The future of open models is bright,” Farbman said confidently.
Whether that vision comes to fruition remains to be seen. But by making its most advanced technology freely available, Lightricks is sending a clear message: in the race to define the future of AI video, openness and collaboration may be the ultimate competitive advantage.