Lightricks Provider Overview

Lightricks Video Generator

LTX-2.3 is Lightricks' open-source video model family built for sharp detail, fast generation, native audio, and practical production workflows from idea to final edit. The model supports image-to-video, multi-keyframe conditioning, keyframe-based animation, video extension (both forward and backward), video-to-video transformations, and any combination of these features.

Explore Lightricks' Models

Jump straight into the exact model page you want to compare, test, or use for generation.

Lightricks' Feature Offerings

Common strengths surfaced across this provider's most relevant model families.

Text to Video · Image to Video · Audio to Video · Extend Video · Retake Video · Native Audio · Open Source

New VAE Architecture

Sharper Fine Details Than Ever

LTX-2.3 introduces a new VAE that produces noticeably sharper output. Textures, facial features, and small objects retain clarity across the full frame. The improvement is especially visible at higher resolutions where previous versions softened details.

Native Audio

Cleaner Sound, Built In

Generate audio natively alongside video with improved clarity in 2.3. Sound effects, ambient noise, and dialogue are synchronized from generation. A dedicated audio-to-video endpoint lets you provide an audio clip and generate matching visuals.

Flexible Workflows

Every Mode You Need

Text-to-video, image-to-video, audio-to-video, extend, and retake. Fast variants for text-to-video and image-to-video when speed matters. Portrait 9:16 support, 24/48 FPS options, and LoRA fine-tuning across the board.

How to Use Lightricks on skills.video

01

Pick an LTX workflow

Choose text/image generation, audio-to-video, extend-video, or retake-video based on the production stage.

02

Submit prompt and media

Provide prompt plus optional image/audio/video inputs, then set duration, ratio, and quality-related parameters.

03

Iterate to final cut

Use extend and retake passes to refine continuity and edits until the output matches your target scene.
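The three steps above can be sketched as a single request payload. Note that the endpoint, field names, and helper below are hypothetical, chosen only to illustrate the shape of a typical generation call with the parameters this page mentions (duration, aspect ratio, FPS, optional media inputs):

```python
# Hypothetical sketch of an LTX generation request on skills.video.
# Field names and values are illustrative, not the platform's real API.

def build_ltx_request(prompt, mode="text-to-video", duration_s=5,
                      aspect_ratio="16:9", fps=24, media_url=None):
    """Assemble a generation-job payload for the chosen LTX workflow."""
    payload = {
        "model": "ltx-2.3",            # model family described on this page
        "mode": mode,                  # text-to-video, image-to-video, audio-to-video,
                                       # extend-video, or retake-video
        "prompt": prompt,
        "duration": duration_s,        # clip length in seconds
        "aspect_ratio": aspect_ratio,  # landscape "16:9" or portrait "9:16"
        "fps": fps,                    # 24 or 48, per the feature list above
    }
    if media_url is not None:
        payload["media_url"] = media_url  # optional image/audio/video input
    return payload

# Step 2 of the workflow: an image-to-video job with a conditioning frame.
job = build_ltx_request("A lighthouse at dawn, waves crashing",
                        mode="image-to-video",
                        media_url="https://example.com/frame.png")
```

An extend or retake pass (step 3) would reuse the same shape, swapping `mode` and pointing `media_url` at the previously generated clip.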

Video Models

Browse all Lightricks video models in one place, including text-to-video and image-to-video options.

4 models

FAQs

Common questions about Lightricks models and workflows.

What is LTX-Video?
LTX-Video is Lightricks' open-source DiT-based video generation project. The official repository describes broad generation and control workflows in a single model family.
What is the latest major update in the official repo?
On October 23, 2025, Lightricks announced LTX-2 and stated that the repository is the primary home for ongoing LTX development.
What workflows are supported?
The README lists image-to-video, multi-keyframe conditioning, keyframe-based animation, forward/backward video extension, and video-to-video transformations.
Which model variants are listed?
The repository highlights 13B and 2B variants, including dev, distilled, and FP8 checkpoints such as ltxv-13b-0.9.8-dev and ltxv-2b-0.9.8-distilled.
Can I use LTX with ComfyUI and Diffusers?
Yes. The official README links to ComfyUI-LTXVideo workflows and Diffusers integration for LTX-Video pipelines.
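As a minimal sketch of the Diffusers route, the snippet below follows the `LTXPipeline` text-to-video flow from the diffusers documentation. The resolution, frame count, and step count are illustrative defaults, not tuned settings, and running it requires a CUDA GPU plus a multi-gigabyte checkpoint download:

```python
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

# Load the published LTX-Video checkpoint from the Hugging Face Hub.
pipe = LTXPipeline.from_pretrained("Lightricks/LTX-Video",
                                   torch_dtype=torch.bfloat16)
pipe.to("cuda")

# Generate a short clip; parameter values here are illustrative.
video = pipe(
    prompt="A woman walks through a neon-lit city street at night",
    negative_prompt="worst quality, blurry, jittery",
    width=704,
    height=480,
    num_frames=161,
    num_inference_steps=50,
).frames[0]

export_to_video(video, "output.mp4", fps=24)
```

For ComfyUI, the README's linked ComfyUI-LTXVideo workflows cover the same text-to-video and image-to-video modes node-by-node.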
Does LTX support training or LoRA fine-tuning?
Yes. Lightricks provides the LTX-Video-Trainer repository, including full fine-tuning and LoRA workflows for control and effect models.
What should I know about licensing and commercial use?
The LTX-Video code repository is Apache-2.0. Model checkpoint usage can have separate terms; the README history also notes an OpenRail-M commercial-use license update for v0.9.5 checkpoints.