Luma, a startup specializing in AI-driven video generation, announced on Thursday the launch of a new autonomous system known as Luma Agents. The platform is engineered to manage the full spectrum of creative production, spanning text, image, video, and audio generation. The agents run on the company's newly developed Unified Intelligence model family, built on an architecture trained as a single, multimodal reasoning system.
The company is positioning Luma Agents not merely as a supplementary tool, but as a fundamental shift in operational workflows for advertising agencies, enterprise marketing departments, and design studios. According to the announcement, these agents possess the capability to plan and execute complex creative tasks while orchestrating outputs from various other artificial intelligence models. The system integrates with external technologies, including Google’s Veo 3 and Nano Banana Pro, ByteDance’s Seedream, and voice synthesis models from ElevenLabs, alongside Luma’s own Ray 3.14.
At the core of this new offering lies Uni-1, the inaugural model within the Unified Intelligence family. Amit Jain, Luma’s co-founder and chief executive officer, stated that this model has been trained across a diverse array of data types, including language, audio, video, images, and spatial reasoning data.
Jain described the Uni-1 model as having the capacity to "think in language" while possessing the ability to "imagine and render in pixels," a concept the company refers to as "intelligence in pixels." While the current iteration focuses heavily on visual and textual reasoning, Jain noted that direct output capabilities for audio and video formats are scheduled for future model updates. He emphasized that customers adopting this technology are effectively restructuring their business processes rather than simply purchasing software.
Early Adoption and Agentic Capabilities
Deployment of the agentic platform is already underway with several high-profile early adopters. The client roster includes major global advertising networks such as Publicis Groupe and Serviceplan, as well as multinational brands like Adidas and Mazda. Additionally, the Saudi artificial intelligence firm Humain has begun utilizing the system.
According to Jain, the primary differentiator for Luma Agents is their ability to maintain persistent context throughout the creative lifecycle. Unlike standard generative tools that treat each prompt in isolation, these agents can track assets, collaborators, and iterations over time. Furthermore, the system is designed to critique and refine its own output.
"You need that ability to evaluate your work, fix it, and do that loop until the solution is good and accurate," Jain explained, drawing a parallel to the self-correction loops that have made coding agents effective in software development.
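The evaluate-fix-repeat loop Jain describes can be sketched in a few lines. This is a conceptual illustration only, not Luma's implementation: the `generate`, `critique`, and `refine` functions below are hypothetical stand-ins for model calls, and the stopping condition is simply "the critique pass finds no remaining issues."

```python
# Conceptual sketch of a generate-critique-refine agent loop.
# All three model calls are hypothetical stand-ins, not Luma's API.

def generate(brief: str) -> str:
    """Stand-in for an initial generative model call."""
    return f"draft for: {brief}"

def critique(draft: str) -> list[str]:
    """Stand-in for a self-evaluation pass; returns flagged issues."""
    issues = []
    if "logo" not in draft:
        issues.append("missing logo placement")
    return issues

def refine(draft: str, issues: list[str]) -> str:
    """Stand-in for a revision pass that addresses flagged issues."""
    return draft + " | fixed: " + ", ".join(issues)

def agent_loop(brief: str, max_iters: int = 5) -> str:
    draft = generate(brief)
    for _ in range(max_iters):
        issues = critique(draft)
        if not issues:  # accept once the critique pass is clean
            break
        draft = refine(draft, issues)
    return draft
```

The key design point is that the termination check lives in the critic, not the generator; the loop runs until the system's own evaluation accepts the output or an iteration budget is exhausted.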
The executive highlighted a significant friction point in current creative AI workflows, where users are often presented with hundreds of disparate models and forced to learn complex prompting techniques to achieve desired results. Luma Agents aim to eliminate this back-and-forth dynamic. Instead of requiring a user to prompt for every specific change, the system generates broad sets of variations, allowing human operators to steer the creative direction through natural conversation.
Jain noted that because the Unified Intelligence models possess genuine understanding alongside their generative capabilities, they can facilitate end-to-end project management.
Simulating the Creative Process
To illustrate the underlying philosophy, Jain compared the system’s operation to a human architect designing a structure. Just as an architect mentally visualizes light, spatial dynamics, and structural integrity while drawing lines, the Unified Intelligence models are built to maintain an internal representation of the subject matter.
The company asserts that this approach dramatically accelerates production timelines. In a demonstration, the system was given a 200-word brief and a single product image of a lipstick tube. From those two inputs it generated a range of concepts for an advertising campaign, including suggested color palettes, model casting, and shooting locations.
In a more quantifiable case study, Luma reported that its agents were used to transform a brand's year-long advertising campaign, originally valued at $15 million, into multiple localized advertisements tailored for different international markets. The system completed this task in approximately 40 hours at a cost of under $20,000. Jain confirmed that the output successfully passed the brand's internal requirements for accuracy and quality control.
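Taking the reported figures at face value, the implied savings are easy to work out; note this is a rough comparison, since the $15 million covered a full year of original production while the agent run only localized existing work, and $20,000 is stated only as an upper bound.

```python
# Back-of-the-envelope comparison of the figures Luma reported.
traditional_cost = 15_000_000  # original year-long campaign budget, USD
agent_cost = 20_000            # reported ceiling for the agent-driven run, USD
agent_hours = 40               # reported wall-clock time for the task

cost_ratio = traditional_cost / agent_cost
print(f"cost reduction: at least {cost_ratio:.0f}x in {agent_hours} hours")
```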
Luma Agents has been made publicly available through an API. However, the company plans to manage the rollout gradually to ensure platform stability and prevent workflow disruptions for its growing user base.