DeepMind’s new AI generates soundtracks and dialogue for videos

DeepMind, Google’s AI research lab, says it’s developing AI tech to generate soundtracks for videos.

In a post on its official blog, DeepMind says it sees the tech, V2A (short for “video-to-audio”), as an essential piece of the AI-generated media puzzle. While plenty of orgs, including DeepMind, have developed video-generating AI models, those models can’t create sound effects to sync with the videos they generate.

“Video generation models are advancing at an incredible pace, but many current systems can only generate silent output,” DeepMind writes. “V2A technology [could] become a promising approach for bringing generated movies to life.”

DeepMind’s V2A tech takes a description of a soundtrack (e.g. “jellyfish pulsating under water, marine life, ocean”) paired with a video to create music, sound effects and even dialogue that matches the characters and tone of the video, watermarked by DeepMind’s deepfakes-combating SynthID technology. The AI model powering V2A, a diffusion model, was trained on a combination of sounds and dialogue transcripts as well as video clips, DeepMind says.

“By training on video, audio and the additional annotations, our technology learns to associate specific audio events with various visual scenes, while responding to the information provided in the annotations or transcripts,” DeepMind writes.

Mum’s the word on whether any of the training data was copyrighted, and whether the data’s creators were informed of DeepMind’s work. We’ve reached out to DeepMind for clarification and will update this post if we hear back.

AI-powered sound-generating tools aren’t novel. Startup Stability AI released one just last week, and ElevenLabs launched one in May. Nor are models that create video sound effects. A Microsoft project can generate talking and singing videos from a still image, and platforms like Pika and GenreX have trained models to take a video and make a best guess at what music or effects are appropriate in a given scene.

But DeepMind claims that its V2A tech is unique in that it can understand the raw pixels from a video and sync generated sounds with the video automatically, optionally without a description.

V2A isn’t perfect, and DeepMind acknowledges this. Because the underlying model wasn’t trained on many videos with artifacts or distortions, it doesn’t create particularly high-quality audio for those. And in general, the generated audio isn’t super convincing; my colleague Natasha Lomas described it as “a smorgasbord of stereotypical sounds,” and I can’t say I disagree.

For those reasons, and to prevent misuse, DeepMind says it won’t release the tech to the public anytime soon, if ever.

“To make sure our V2A technology can have a positive impact on the creative community, we’re gathering diverse perspectives and insights from leading creators and filmmakers, and using this valuable feedback to inform our ongoing research and development,” DeepMind writes. “Before we consider opening access to it to the wider public, our V2A technology will undergo rigorous safety assessments and testing.”

DeepMind pitches its V2A technology as an especially useful tool for archivists and people working with historical footage. But, as I wrote in a piece this morning, generative AI along these lines also threatens to upend the film and TV industry. It’ll take some seriously strong labor protections to ensure that generative media tools don’t eliminate jobs, or, as the case may be, entire professions.
