Use Generative AI to Create Video Marketing and Sales Content
Video used to be very expensive and time-consuming to produce. But with the widespread adoption of smartphones capable of capturing video seamlessly, and an accompanying drop in viewers’ expectations for polished output, the cost and time requirements for video production have declined precipitously.
And those requirements have since dropped even further thanks to another big factor: generative artificial intelligence (genAI), which has transformed the entire creation process.
In an environment where video continues to be a powerful way to capture attention and gain interest among target audiences of all kinds, that’s good news for companies, which plan to step up their investments in the technology.
In fact, the Content Marketing Institute (CMI) found recently that 61 percent of marketers think their organizations will increase investments in video, making it the top planned investment, ahead of thought leadership content at 52 percent.
And use is already on the rise as well. Currently, 42 percent of marketing organizations are using AI to create videos, up from just 18 percent last year, according to CMI.
GenAI isn’t being used just to produce video, though. It’s also being used extensively in preproduction processes, says Chris Lavigne, head of production at video marketing platform provider Wistia. In fact, he says, the planning, scripting, storyboarding, and ideation capabilities of genAI are more mature than purely text-to-video tools.
The CMI report confirms this. “Most marketers primarily use AI in either preproduction (scripting and brainstorming) or postproduction editing (voice dubbing and generating visuals),” it said, noting that 80 percent of these users “believe AI helps streamline the video production process, enabling faster turnaround times and higher-quality content.”
How It Works
GenAI for video production involves more than one application (see sidebar); the available tools are a mix of third-party platforms and in-house, brand-specific systems.
As Quint Boa, founder of creative video agency Synima, explains, text-to-video tools use “diffusion models trained on millions of video clips to generate visuals from prompts,” while voice cloning employs “neural networks trained on voice samples” to “re-create speech patterns, intonation, and emotional cues.”
For lip-sync avatars, the technology combines “speech-to-text with facial landmark animation to sync voice with realistic face movement,” adds Boa, a former actor who once received a British Academy of Film and Television Arts nomination.
Lavigne notes that the technology is evolving rapidly, especially for nonhuman subjects: “We’re definitely approaching that uncanny valley where you don’t know what’s real and what’s not with things like landscapes, animals, inanimate objects,” he says.
But human expressions remain more challenging, he says. “Humans have been programmed for so long to be able to judge another human’s face and characteristics that we’re not falling for it quite yet.”
Context is also critical, especially in B2B environments, says Krish Mantripragada, chief product officer of Seismic, a sales enablement platform provider. B2B companies should be creating content that is very specific to their company and situation, not generic, he says.
For B2B companies with extensive product portfolios serving multiple geographies and industries, AI must be grounded in enterprise content, Mantripragada maintains. “It’s less about trying to create content from scratch based on a prompt that you’re giving it and more about assembling and customizing the content based on your approved source of truth for that particular situation.”
This approach ensures that AI-generated content adheres to brand tone, company policies, industry regulations, and approved messaging. The technology can also incorporate business context, such as customer information, industry, deal stage, prior interactions, and products being pitched, to create highly relevant video content.
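The assembly-over-generation approach Mantripragada describes can be illustrated with a small sketch. This is a hypothetical example, not Seismic’s actual implementation: the snippet library, field names, and function are all invented for illustration. The idea is simply that the prompt is built from pre-approved messaging matched to the business context, rather than asking the model to invent copy from scratch.

```python
# Hypothetical sketch of "grounded" prompt assembly for a video script.
# All data and names here are illustrative, not any vendor's real API.

# A pre-approved "source of truth": messaging snippets tagged by context.
APPROVED_SNIPPETS = [
    {"industry": "healthcare", "stage": "discovery",
     "text": "Our platform supports strict patient-data privacy requirements."},
    {"industry": "healthcare", "stage": "negotiation",
     "text": "Flexible enterprise licensing is available for health systems."},
    {"industry": "retail", "stage": "discovery",
     "text": "Unified analytics across online and in-store channels."},
]

def build_grounded_prompt(industry, deal_stage, product):
    """Assemble a video-script prompt using only approved snippets
    that match this customer's industry and deal stage."""
    relevant = [s["text"] for s in APPROVED_SNIPPETS
                if s["industry"] == industry and s["stage"] == deal_stage]
    context = "\n".join(f"- {t}" for t in relevant)
    return (
        f"Write a 60-second video script pitching {product}.\n"
        f"Use ONLY the approved messaging below; do not invent claims:\n"
        f"{context}"
    )

prompt = build_grounded_prompt("healthcare", "discovery", "ExampleSuite")
```

Constraining the model’s input this way is what lets the output adhere to approved messaging and stay specific to the deal at hand, rather than producing generic copy.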
Big Benefits from GenAI
About 10 years ago, when he was editing documentary video, Lavigne says, he would have needed to send footage out for human transcription and then wade through the resulting text document to find the best quotes, a very time-consuming process. Today, he says, time codes are automatically linked to transcripts, making any specific moment easy to find, and AI can analyze the transcripts and suggest edits automatically.
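The transcript workflow described above can be sketched in a few lines. This is an illustrative toy, not any real editing tool’s API: each transcript segment carries a time code, so locating a quotable moment becomes a text search rather than a manual scrub through footage.

```python
# Minimal sketch of time-coded transcript search. The transcript data
# and function are invented for illustration only.

# Each segment: (start time in seconds, spoken text).
transcript = [
    (12.5, "Welcome to the product tour."),
    (48.0, "Our customers cut editing time in half."),
    (95.2, "Editing time used to be our biggest bottleneck."),
]

def find_moments(segments, keyword):
    """Return (timecode, text) pairs whose text mentions the keyword."""
    kw = keyword.lower()
    return [(t, text) for t, text in segments if kw in text.lower()]

hits = find_moments(transcript, "editing time")
# Each returned time code jumps straight to that moment in the video.
```

Because every hit comes back with its time code, an editor can jump directly to candidate quotes instead of wading through the full recording, which is the efficiency gain Lavigne describes.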
“We’re one step away from programmatic video editing as well,” Lavigne says. The tools are not quite there yet, he says, but he expects they will be by the end of the year. “To my surprise and chagrin, it’s happening very, very fast.”
Beyond video editing, genAI excels at creating multimodal learning content, Mantripragada adds, noting that people have different learning needs and preferences; some prefer audio, while others prefer video. “Being able to create multimodal content based on learner preferences is another huge advantage of AI, because historically, these were very resource-intensive, from both a capacity and skills perspective,” he says. “Previously, creating audio and video training materials required highly skilled trainers and significant resources.”
GenAI tools can be used to create audio and video lessons, podcasts, and deep-dive study guides, even on-demand, from source content for very specific situations, Mantripragada says.
Users can, for example, interact with AI to ask probing questions, he adds. “And by grounding it with all of your enterprise content, you can ensure that it doesn’t hallucinate or provide generic information which may not be relevant in that particular situation.”
Some Drawbacks Remain
Despite its many benefits, genAI-created video isn’t infallible. As with genAI-created text, some glitches remain. For instance, Lavigne says, genAI video often defies the basic laws of physics.
“If you were to prompt a genAI tool like Runway to ‘show me a ball bouncing down a staircase,’ in your mind you would know what that should look like. What you’d see in the output, though, is that the ball isn’t really adhering to the laws of gravity.”
Another example, Lavigne says, is genAI’s inability to accurately portray human emotion. “A lot of these generative AI videos have no emotion in the eyes. They’re kind of dead-eyed, and they look soulless. If you try to show someone weeping or crying it’s usually overexaggerated.”
Lavigne points out that “AI doesn’t quite know how to create a human image to be believable from all 360 degrees.” So, for instance, “the chin might change from a dead-on shot to when the camera is [at a 90-degree angle]. We’re seeing a lot of face distortions.”
But, he adds, these limitations could be only temporary. “It’s all changing so rapidly that this probably isn’t going to be that big of a deal in the next few months.”
What is a big deal, though, is ensuring the appropriate and accurate use of these tools.
Guardrails for GenAI in Video Production
Establishing guardrails for the appropriate use of genAI in video creation is essential for maintaining brand integrity and adhering to ethical standards of content creation (e.g., avoiding plagiarism).
One area of transparency that many might not consider, but that Lavigne emphasizes, is notifying the talent. Video producers need to be “crystal clear with the talent” about whether they’re going to use AI. He recommends talent release forms, which he says became second nature to him early in his career. “Talent release forms need to spell out if you have permission, or if you don’t have permission, to synthesize their likeness,” he says. Synthesizing a likeness could include inserting actors into scenes they never actually visited.
Another critical area of oversight is ensuring human review of the content produced. This helps ensure both accuracy and quality. GenAI tools are still widely known to hallucinate, or generate inaccurate information, and sometimes the image output isn’t quite right.
AI can’t replace “cinematic direction, on-location shooting, or human intuition in storytelling,” Boa says. He also points out that AI can’t guarantee copyright safety, “especially when using AI-generated visuals or voice clones,” so it’s important to have clear policies on copyright checking for all AI-generated assets.
Basic security protocols should also be established. “Store sensitive or proprietary prompts/scripts securely; some models retain user input for training,” Boa points out.
Many organizations are already doing this. CMI’s research indicated that 66 percent of marketing organizations have security measures in place specific to the use of genAI.
Impact on Human Video Producers
Whenever genAI is considered for tasks that were once performed solely by humans, the question is raised: “Will humans be replaced?”
Lavigne doesn’t think so. “I was more afraid of this when it first came out,” he says. Today, “I’m seeing it more as a tool, like Photoshop is a tool for photographers.”
Some video production tasks simply need to be done using genAI to achieve the greatest efficiencies possible, Lavigne says. “If you’re a video producer in 2025 and you’re not using a transcription tool to help you edit talking-head video, you’re not going to be competitive with a video editor that is.”
Video producers simply must understand how to use the tools at their disposal right now, Lavigne says. “Otherwise you’re doing your job with both hands and feet tied behind your back. You need to be able to adapt and learn and understand where this can improve your workflow and where it can’t.”
That said, he adds, video production still requires a “basic understanding of how to tell a story and how to elicit an emotion.”
Mantripragada agrees, emphasizing that AI is empowering rather than replacing content creators. “Clearly, what we are most excited about with the technology is that it allows marketing folks, [sales] enablement folks, content creators, and trainers within the organization to very quickly, and in a cost-effective and scalable manner, create lots of different formats, more modes, modalities, and content types, either on demand or on schedule,” he says. He points to tasks like creating comprehensive FAQs that previously required highly skilled professionals and significant time.
But, Mantripragada continues, “it’s less about generating more content and more about how I can be smart about what I create and then measure engagement, effectiveness, and conversion so I use AI effectively to generate more of what works.”
Skills Needed for AI Video Production
Mantripragada emphasizes that users of AI video tools don’t need to be experts but should have a basic level of comfort with the technology. Prompt engineering is important, he says, along with understanding how to iteratively work with genAI to produce the output you want.
“You don’t need to be a super expert, but you need to have an appreciation for how to construct a good prompt, to understand what sources AI is pulling the data from, and how to work with AI to fine-tune and generate the content you’re looking for,” Mantripragada says.
Mantripragada adds that content creators will still need to validate the quality of AI-generated content. “You need to be the solution. The skill you need is how to instruct the AI with the right set of prompts and the right set of content sources to have it generate what I’m looking for and, ultimately, relying on my skills and abilities to validate if the piece of content produced is up to the mark.”
Looking ahead, experts predict rapid evolution in the space. Mantripragada describes this as “a very exciting time for us to be in the [video creation] space.” GenAI, he says, “is one of those transformational, once-in-a-lifetime technologies that is revolutionizing every part of our jobs.”
This empowerment of end users, without requiring specialized technical skills, represents the most exciting frontier in AI-powered video and content creation. As Boa, Lavigne, and Mantripragada suggest, the technology is evolving at a pace that continues to surprise even industry veterans.
Linda Pophal is a freelance business journalist and content marketer who writes for various business and trade publications. Pophal does content marketing for Fortune 500 companies, small businesses, and individuals on a wide range of subjects, from human resource management and employee relations to marketing, technology, healthcare industry trends, and more.
GenAI Video Generation Vendors and Tools
A wide range of genAI tools is available for video production, and more are continually emerging. Boa lists some of the major players in each category:
- Editing and postproduction: Runway, Descript, Wisecut, Blackmagic’s Resolve AI.
- Script and voice: ElevenLabs, Murf, and Play.ht for voice; Jasper and ChatGPT for scripts.
- Avatar and lip-sync video: Synthesia, HeyGen, Colossyan.
- Text-to-video: Pika Labs, Runway Gen-2, Sora (OpenAI; not yet public).
- Localization: Deepdub, Papercup, Dubverse.
- Stock and visuals: Canva AI, Shutterstock’s AI, Adobe Firefly.