Video Dubbing API Guide

Video dubbing combines translated audio with lip-synced video so dubbed content looks natural in the target language. Sync handles the lipsync step — you provide the video and translated audio, and the API generates matching lip movements.

Prerequisites

  • A Sync API key
  • A source video (URL or uploaded asset)
  • Translated audio in the target language (from a TTS service or human voice actor)

Install the SDK for your language:

# Python
pip install syncsdk

# TypeScript
npm i @sync.so/sdk

Set your API key:

export SYNC_API_KEY="your-api-key"

Basic Dubbing Pipeline

1. Prepare your translated audio

Generate translated audio using a text-to-speech service like ElevenLabs, Google Cloud TTS, or Amazon Polly. You can also use a human voice actor. The audio must be hosted at a publicly accessible URL.

If you already have a translated audio file, upload it to your hosting service and grab the URL.
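If you are generating the audio yourself, a minimal sketch using the ElevenLabs REST text-to-speech endpoint might look like the following. The voice ID, model ID, and output filename here are placeholder assumptions; check the ElevenLabs documentation for current values.

```typescript
// Sketch: synthesize translated speech with the ElevenLabs REST API.
// The voice ID and model_id below are placeholders, not recommendations.
const ELEVENLABS_TTS_URL = "https://api.elevenlabs.io/v1/text-to-speech";

async function synthesizeSpeech(
  text: string,
  voiceId: string,
  apiKey: string
): Promise<Buffer> {
  const res = await fetch(`${ELEVENLABS_TTS_URL}/${voiceId}`, {
    method: "POST",
    headers: {
      "xi-api-key": apiKey,
      "Content-Type": "application/json",
    },
    // eleven_multilingual_v2 supports non-English output such as Spanish
    body: JSON.stringify({ text, model_id: "eleven_multilingual_v2" }),
  });
  if (!res.ok) {
    throw new Error(`TTS request failed: ${res.status}`);
  }
  return Buffer.from(await res.arrayBuffer());
}

// Usage (requires a real ElevenLabs API key):
// const audio = await synthesizeSpeech(
//   "Hola, bienvenidos.",
//   "EXAVITQu4vr4xnSDxMaL",
//   process.env.ELEVENLABS_API_KEY!
// );
// Then write the bytes to disk (e.g. node:fs/promises writeFile) and
// upload the file to your hosting service.
```

Remember that whatever service you use, the resulting file must end up at a publicly accessible URL before you submit it to Sync.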

2. Submit to Sync API

Send the source video and translated audio to the Sync API. The API generates new lip movements matching the translated audio.

import { SyncClient } from "@sync.so/sdk";

const sync = new SyncClient();

// Source video with original language
const videoUrl = "https://your-cdn.com/original-video.mp4";
// Translated audio in target language
const dubbedAudioUrl = "https://your-cdn.com/translated-audio-spanish.wav";

const response = await sync.generations.create({
  input: [
    { type: "video", url: videoUrl },
    { type: "audio", url: dubbedAudioUrl },
  ],
  model: "lipsync-2",
  options: { sync_mode: "cut_off" },
});

const jobId = response.id;
console.log(`Dubbing job submitted: ${jobId}`);
3. Poll for completion

Check the generation status until it completes. For production systems, use webhooks instead of polling.

let generation = await sync.generations.get(jobId);
while (!["COMPLETED", "FAILED", "REJECTED"].includes(generation.status)) {
  console.log(`Status: ${generation.status}`);
  await new Promise((r) => setTimeout(r, 10000));
  generation = await sync.generations.get(jobId);
}

if (generation.status === "COMPLETED") {
  console.log(`Dubbed video ready: ${generation.outputUrl}`);
} else {
  console.log(`Dubbing failed for job ${jobId}`);
}
4. Download the dubbed video

The output_url (Python) or outputUrl (TypeScript) contains a direct link to the dubbed video. Download it or pass it to your delivery pipeline.
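A minimal download sketch, assuming the completed generation object from the polling step above (the destination filename is a placeholder):

```typescript
// Sketch: stream the dubbed video from its output URL to local disk.
import { createWriteStream } from "node:fs";
import { Readable } from "node:stream";
import { pipeline } from "node:stream/promises";

async function downloadVideo(outputUrl: string, destPath: string): Promise<void> {
  const res = await fetch(outputUrl);
  if (!res.ok || !res.body) {
    throw new Error(`Download failed with status ${res.status}`);
  }
  // Stream to disk rather than buffering the whole video in memory
  await pipeline(Readable.fromWeb(res.body as any), createWriteStream(destPath));
}

// Usage:
// await downloadVideo(generation.outputUrl, "dubbed-video-spanish.mp4");
```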

Using the ElevenLabs Integration

Sync has a built-in ElevenLabs integration that handles text-to-speech and lipsync in a single API call. Instead of generating audio separately, you pass the translated text directly.

import { SyncClient } from "@sync.so/sdk";

const sync = new SyncClient();

const response = await sync.generations.create({
  input: [
    {
      type: "video",
      url: "https://your-cdn.com/original-video.mp4",
    },
    {
      type: "text",
      provider: {
        name: "elevenlabs",
        voiceId: "EXAVITQu4vr4xnSDxMaL",
        script: "Hola, bienvenidos a nuestra plataforma. Hoy les mostraremos las nuevas funciones.",
        stability: 0.5,
        similarityBoost: 0.75,
      },
    },
  ],
  model: "lipsync-2",
  options: { sync_mode: "cut_off" },
});

console.log(`Job ID: ${response.id}`);

The script field has a maximum of 5,000 characters per generation. For longer scripts, split them into segments. See the Integrations page for ElevenLabs setup details.
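One way to split a long script is to break it on sentence boundaries so each segment stays under the limit. This helper is a sketch, not part of the SDK; it also assumes no single sentence exceeds the limit on its own.

```typescript
// Sketch: split a translated script into chunks under the 5,000-character
// per-generation limit, breaking on sentence boundaries.
const MAX_SCRIPT_CHARS = 5000;

function splitScript(script: string, maxLen: number = MAX_SCRIPT_CHARS): string[] {
  // Greedy split: each match is a sentence plus its trailing punctuation/space
  const sentences = script.match(/[^.!?]+[.!?]*\s*/g) ?? [script];
  const segments: string[] = [];
  let current = "";
  for (const sentence of sentences) {
    if (current.length + sentence.length > maxLen && current) {
      segments.push(current.trim());
      current = "";
    }
    current += sentence;
  }
  if (current.trim()) {
    segments.push(current.trim());
  }
  return segments;
}

// Usage: submit one generation per returned segment
// const segments = splitScript(longTranslatedScript);
```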

Supported Languages

Sync’s lipsync models are language-agnostic. They work with audio in any language — the models analyze mouth shapes from the audio waveform, not the language itself. If your translated audio is clear and well-produced, the lipsync output will match.

For a complete translation pipeline walkthrough (transcription, translation, TTS, and lipsync), see the Video Translation API Guide.

Multi-Speaker Dubbing

For videos with multiple speakers, use the segments API to assign different audio tracks to different time ranges. Each segment can reference a separate audio input with a distinct voice.

from sync import Sync
from sync.common import Audio, Video

sync = Sync()

response = sync.generations.create(
    input=[
        Video(url="https://your-cdn.com/interview.mp4"),
        Audio(url="https://your-cdn.com/speaker-a-spanish.wav", ref_id="speaker_a"),
        Audio(url="https://your-cdn.com/speaker-b-spanish.wav", ref_id="speaker_b"),
    ],
    segments=[
        {"startTime": 0, "endTime": 15, "audioInput": {"refId": "speaker_a"}},
        {"startTime": 15, "endTime": 30, "audioInput": {"refId": "speaker_b"}},
    ],
    model="lipsync-2",
)

See the Segments Guide for full documentation and more examples.

Performance Tips

Use webhooks for production

Replace polling with webhooks for production pipelines. You receive a POST notification when the job completes, eliminating wasted API calls.
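A minimal receiver sketch using Node's built-in http module. The payload field names (id, status, outputUrl) are assumptions based on the polling example earlier in this guide; check the webhooks documentation for the exact schema and for request verification.

```typescript
// Sketch: receive Sync job-completion webhooks with node:http.
// Payload shape is an assumption, not a documented schema.
import { createServer } from "node:http";

interface SyncWebhookPayload {
  id: string;
  status: string;
  outputUrl?: string;
}

function handleCompletion(payload: SyncWebhookPayload): string {
  if (payload.status === "COMPLETED" && payload.outputUrl) {
    return `Dubbed video ready: ${payload.outputUrl}`;
  }
  return `Job ${payload.id} ended with status ${payload.status}`;
}

const server = createServer((req, res) => {
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    console.log(handleCompletion(JSON.parse(body)));
    res.writeHead(200).end();
  });
});

// server.listen(3000); // uncomment to start receiving POSTs
```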

Use batch processing for bulk dubbing

Dubbing an entire video library? The Batch API lets you submit up to 500 generations in a single operation with a 24-hour turnaround.

Pick the right model

Use lipsync-2 for most dubbing jobs. Switch to lipsync-2-pro for premium content where detail around beards, teeth, and facial features matters. Use lipsync-1.9.0-beta when speed is the priority.

Match audio duration

Set sync_mode to control what happens when audio and video lengths differ. cut_off trims excess audio. bounce loops the video to match audio length. See sync mode options for details.

Next Steps