
Breaking the language barrier with powerful AI video dubbing from Sieve

1/10/25

Sieve is a leading AI video infrastructure company focused on turning models into composable tools any developer can use to understand, manipulate, and generate video seamlessly. The SF-based startup is used by some of the world's leading video platforms and is backed by top AI investors including Matrix Partners, Swift Ventures, Y Combinator, and AI Grant.

Recently, we partnered with Sieve to power their best-in-class AI video dubbing pipeline with sync-1.9.0-beta, the world’s most natural lipsyncing model. This partnership will help accelerate us toward a world where language is no longer a barrier.

Professional grade dubbing with flawless lipsync

Imagine millions gaining access to knowledge, entertainment, and connection regardless of their native tongue. This is the promise AI dubbing holds.

Traditionally, dubbing a video means handing content off to dozens of specialists: professional translators, script adaptation writers, voice actors, casting directors, recording engineers, sound editors, dubbing directors, quality assurance specialists, and more.

During this process, translated scripts are adapted to ensure lipsync compatibility and cultural relevance. Done poorly, the result is a “bad dub”, an issue anyone raised outside the Western world consuming popular Western media knows well.

Below are some examples showing how videos translated with lipsync drive higher engagement than their non-lipsynced counterparts:

President of Ukraine conversing with Lex Fridman in fluent English (from Ukrainian)

[video: Ukrainian original | lipsynced + translated]

Popular YouTuber Tanmay Bhat, translated from English into Hindi

[video: English original | lipsynced + translated]

Visionary investor Masayoshi Son from Softbank, translated into his native tongue, Japanese

[video: English original | lipsynced + translated]

Sieve's AI dubbing pipeline gives developers access to the highest-quality, most flexible API to build dubbing experiences around; a short usage sketch follows the feature list below. Key features include:

  • natural voice cloning, preserving the original speaker's voice in the target language

  • precise, culturally relevant translations

  • natural lipsync automatically applied to the active speakers in a given scene

  • translation styling and custom vocabulary for control over translations

  • output modes and custom transcript inputs for building human-in-the-loop experiences
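To make the shape of the API concrete, here is a minimal sketch of submitting a dubbing job through Sieve's Python client. The function slug ("sieve/dubbing") and every parameter name shown are assumptions made for illustration; Sieve's documentation defines the real interface.

    # Minimal sketch of a dubbing request via Sieve's Python client.
    # The function slug and all parameter names are assumptions for illustration;
    # consult Sieve's docs for the actual interface.
    import sieve

    def dub(video_url: str, target_language: str):
        # Look up the hosted dubbing pipeline by its (assumed) slug.
        dubbing = sieve.function.get("sieve/dubbing")

        # run() blocks until the job finishes and returns the dubbed output.
        return dubbing.run(
            source_file=sieve.File(url=video_url),  # input video (assumed parameter)
            target_language=target_language,        # e.g. "hindi" (assumed parameter)
            enable_lipsyncing=True,                 # lipsync the active speakers (assumed parameter)
        )

    if __name__ == "__main__":
        dubbed = dub("https://example.com/interview.mp4", "japanese")
        print("dubbed video ready:", dubbed)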

“sync. has the most natural video-to-video lipsyncing models in the world, and the best part is there’s no training data required to use them. This opens up many possibilities with the types of content our workflows can target, and we’re excited to see what developers create with this new capability.” – Mokshith Voodarla, CEO of Sieve

How we move the world forward, together

AI dubbing is simply the first use case we’ve partnered on. The ability to seamlessly edit the recorded word combined with powerful video editing primitives allows us to compose workflows and supercharge content creation across industries. We’re excited to see our partnership deepen with our models powering key workflows across the Sieve ecosystem!
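As a rough illustration of that composability, the sketch below chains the dubbing step into a second hosted function that produces target-language captions from the dubbed output. Both function slugs and the parameter names are assumptions for illustration only.

    # Hedged sketch of composing hosted functions: dub a video, then transcribe
    # the dubbed output to get captions in the target language. Function slugs
    # and parameter names are assumptions, not Sieve's documented interface.
    import sieve

    dubbing = sieve.function.get("sieve/dubbing")          # assumed slug
    transcriber = sieve.function.get("sieve/transcribe")   # assumed slug

    # Step 1: dub the original English video into Hindi.
    dubbed = dubbing.run(
        source_file=sieve.File(url="https://example.com/episode.mp4"),
        target_language="hindi",
    )

    # Step 2: feed the dubbed output directly into a transcription step so the
    # captions match the new audio track.
    captions = transcriber.run(file=dubbed)
    print(captions)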

lipsync any content w/ one api. our lipsync works on any video content in the wild — across movies, podcasts, games, and even animations.

crafting magic in california.

© 2024 synchronicity labs inc.
