Overview
The node integrates with the RunwayML platform via the UseAPI service to perform lip syncing on media assets. It accepts four input combinations: an image or a video paired with either an audio file or text to be spoken by an AI voice. This makes the node useful for automating the creation of lip-synced videos or animations, where a static image or video is animated to match spoken audio or synthesized speech.
Common scenarios include:
- Creating talking head videos from a single image and an audio clip.
- Generating lip-synced videos using AI-generated voice text instead of pre-recorded audio.
- Enhancing video content by synchronizing lip movements with new audio tracks or synthetic voices.
Practical example:
- A content creator wants to produce a short promotional video featuring a character speaking a scripted message. They provide a portrait image and either an audio recording or text to be converted into speech. The node processes these inputs to generate a lip-synced video output.
Properties
| Name | Meaning |
|---|---|
| Input Type | Selects the type of input combination for lip sync: • Image + Audio • Image + Voice Text • Video + Audio • Video + Voice Text |
| Image Asset ID | Asset ID of the image used for lip syncing (required if input type includes an image). |
| Video Asset ID | Asset ID of the video used for lip syncing (required if input type includes a video). |
| Audio Asset ID | Asset ID of the audio file used for lip syncing (required if input type includes audio). |
| Voice ID | ID of the AI voice to use for generating audio when using voice text input. Users can retrieve available voices via a separate API call. |
| Voice Text | Text string to be read aloud by the AI voice and used for lip syncing (required when using voice text input). |
| Voice Model | AI voice model selection for speech synthesis: • Eleven Multilingual v1 • Eleven Multilingual v2 (supports 28+ languages) |
| Additional Options | Collection of optional settings: • Explore Mode: Enables a special mode that requires a specific plan and does not consume credits. • Reply URL: Webhook URL to receive generation completion notifications. • Reply Reference: Custom reference included in callbacks. • Max Jobs: Limits the number of parallel jobs (1-10). |
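The required properties vary with the selected input type. As a hedged sketch of how the request body could be assembled locally before calling the API (the field names `image_assetId`, `video_assetId`, `audio_assetId`, `voice_text`, and `voiceId` follow the parameter names used in the Troubleshooting section below; the string identifiers for the four input types are hypothetical, and the authoritative schema is the UseAPI documentation):

```python
def build_lipsync_payload(input_type: str, **params) -> dict:
    """Assemble the JSON body for a lipsync/create request.

    input_type is one of four hypothetical identifiers for the
    combinations listed in the Properties table above.
    """
    required = {
        "image_audio": ["image_assetId", "audio_assetId"],
        "image_text":  ["image_assetId", "voice_text", "voiceId"],
        "video_audio": ["video_assetId", "audio_assetId"],
        "video_text":  ["video_assetId", "voice_text", "voiceId"],
    }
    if input_type not in required:
        raise ValueError(f"unknown input type: {input_type}")
    # Fail fast locally on missing parameters instead of waiting for
    # the API to reject the request.
    missing = [k for k in required[input_type] if not params.get(k)]
    if missing:
        raise ValueError(f"missing required parameters: {missing}")
    return {k: params[k] for k in required[input_type]}

# Example: image plus AI-generated voice text.
payload = build_lipsync_payload(
    "image_text",
    image_assetId="asset-123",
    voice_text="Welcome to our product tour!",
    voiceId="voice-42",
)
```

Validating locally mirrors the "Missing Required Parameters" check described under Troubleshooting, so misconfigured workflows fail with a clear message before any credits are spent.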
Output
The node outputs JSON data representing the response from the RunwayML lipsync API endpoint. This typically includes metadata about the created lip sync job, such as job IDs, status, and possibly URLs or asset IDs for the generated media.
If the operation succeeds, the output JSON contains details necessary to track or retrieve the resulting lip-synced media asset.
The node does not directly output binary data; instead, it provides references to assets managed by RunwayML.
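Downstream workflow steps typically extract the job identifier and status from that JSON to poll for completion or to match webhook callbacks. A minimal sketch, assuming hypothetical field names `taskId` and `status` (the actual keys depend on the UseAPI response schema):

```python
import json

# Hypothetical response body; real field names may differ.
response_text = '{"taskId": "abc-123", "status": "pending"}'
job = json.loads(response_text)

job_id = job.get("taskId")   # used to poll or to match a webhook callback
status = job.get("status")   # e.g. "pending" until generation finishes
```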
Dependencies
- Requires an API key credential for UseAPI with access to RunwayML services.
- Network connectivity to the UseAPI endpoints (https://api.useapi.net/v1/runwayml/lipsync/create).
- Optional webhook URL support for asynchronous notification of job completion.
- For voice text input, access to supported AI voice models provided by RunwayML through UseAPI.
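As a hedged illustration of how a request to that endpoint is shaped (the `Authorization: Bearer` header is a common way API keys are supplied and should be verified against the UseAPI documentation; the request below is only constructed, never sent, and the token and asset IDs are placeholders):

```python
import json
import urllib.request

API_TOKEN = "your-useapi-token"  # placeholder; supply your real credential

# Example body for the image + audio input type.
body = json.dumps({
    "image_assetId": "asset-123",
    "audio_assetId": "asset-456",
}).encode()

req = urllib.request.Request(
    "https://api.useapi.net/v1/runwayml/lipsync/create",
    data=body,
    headers={
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would submit the job; omitted here
# because it requires valid credentials and network access.
```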
Troubleshooting
- Missing Required Parameters: Ensure all required fields for the selected input type are provided (e.g., image_assetId for image inputs, audio_assetId for audio inputs, voice_text and voiceId for voice text inputs).
- Invalid Asset IDs: Using incorrect or non-existent asset IDs will cause API errors. Verify asset IDs exist and are accessible.
- API Authentication Errors: Confirm the API key credential is valid and has permissions for RunwayML operations.
- Explore Mode Access: Enabling Explore Mode requires a specific subscription plan; attempting to use it without proper access may result in authorization errors.
- Webhook Failures: If a reply URL is set but unreachable or misconfigured, notifications may fail silently or cause retries.
- Max Jobs Limit: Setting maxJobs outside the allowed range (1-10) may cause validation errors.
Common error messages usually relate to missing parameters, invalid credentials, or API request failures. Reviewing the error details returned in the node output helps identify the root cause.
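The Max Jobs bound from the list above can be guarded before the request is sent; a small sketch:

```python
def validate_max_jobs(max_jobs: int) -> int:
    # The node accepts 1-10 parallel jobs; out-of-range values trigger
    # API validation errors, so reject them locally with a clear message.
    if not 1 <= max_jobs <= 10:
        raise ValueError(f"maxJobs must be between 1 and 10, got {max_jobs}")
    return max_jobs
```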
Links and References
- RunwayML Official Website
- UseAPI Documentation
- RunwayML Lip Sync API Reference
- Eleven Labs Voice Models (for voice model details)