Lightricks Provider Overview

Lightricks Video Generator

LTX-2.3 is Lightricks' open-source video model family built for sharp detail, fast generation, native audio, and practical production workflows from idea to final edit. The model supports image-to-video, multi-keyframe conditioning, keyframe-based animation, video extension (both forward and backward), video-to-video transformations, and any combination of these features.

Explore Lightricks' Models

Jump straight to the model page you want to compare, test, or use for generation.

Lightricks Feature Highlights

A summary of the shared strengths across this provider's main model families.

Text To Video · Image To Video · Audio To Video · Extend Video · Retake Video · Native Audio · Open Source

New VAE Architecture

Sharper Fine Details Than Ever

LTX-2.3 introduces a new VAE that produces noticeably sharper output. Textures, facial features, and small objects retain clarity across the full frame. The improvement is especially visible at higher resolutions where previous versions softened details.

Native Audio

Cleaner Sound, Built In

Generate audio natively alongside video with improved clarity in 2.3. Sound effects, ambient noise, and dialogue are synchronized from generation. A dedicated audio-to-video endpoint lets you provide an audio clip and generate matching visuals.

Flexible Workflows

Every Mode You Need

Text-to-video, image-to-video, audio-to-video, extend, and retake. Fast variants for text-to-video and image-to-video when speed matters. Portrait 9:16 support, 24/48 FPS options, and LoRA fine-tuning across the board.

How to Use Lightricks on skills.video

01

Pick an LTX workflow

Choose text/image generation, audio-to-video, extend-video, or retake-video based on the production stage.

02

Submit prompt and media

Provide a prompt plus optional image/audio/video inputs, then set duration, aspect ratio, and quality-related parameters.

03

Iterate to final cut

Use extend and retake passes to refine continuity and edits until the output matches your target scene.

Video Models

Browse all of Lightricks' video models in one place, including text-to-video and image-to-video capabilities.

4 models

FAQ

Frequently asked questions about Lightricks models and workflows.

What is LTX-Video?
LTX-Video is Lightricks' open-source DiT-based video generation project. The official repository describes broad generation and control workflows in a single model family.
What is the latest major update in the official repo?
On October 23, 2025, Lightricks announced LTX-2 and stated that it is the primary home for ongoing LTX development.
What workflows are supported?
The README lists image-to-video, multi-keyframe conditioning, keyframe-based animation, forward/backward video extension, and video-to-video transformations.
Which model variants are listed?
The repository highlights 13B and 2B variants, including dev, distilled, and FP8 checkpoints such as ltxv-13b-0.9.8-dev and ltxv-2b-0.9.8-distilled.
Can I use LTX with ComfyUI and Diffusers?
Yes. The official README links to ComfyUI-LTXVideo workflows and Diffusers integration for LTX-Video pipelines.
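As a rough sketch of the Diffusers route: the pipeline class and model id below follow the public Diffusers LTX-Video example, but the resolution and frame count are illustrative choices. LTX-Video expects spatial dimensions divisible by 32 and frame counts of the form 8k + 1.

```python
# Hedged sketch of running LTX-Video through Diffusers, per the
# integration linked from the official README. Running generate()
# requires a CUDA GPU, torch, diffusers, and the model weights.

def ltx_dimensions(width: int, height: int, num_frames: int) -> tuple[int, int, int]:
    """Round inputs to LTX-Video's valid shapes: width/height divisible
    by 32, and a frame count of the form 8k + 1 (e.g. 97, 161)."""
    w = round(width / 32) * 32
    h = round(height / 32) * 32
    f = round((num_frames - 1) / 8) * 8 + 1
    return w, h, f

def generate(prompt: str, out_path: str = "ltx_output.mp4") -> None:
    import torch
    from diffusers import LTXPipeline
    from diffusers.utils import export_to_video

    pipe = LTXPipeline.from_pretrained("Lightricks/LTX-Video",
                                       torch_dtype=torch.bfloat16).to("cuda")
    w, h, f = ltx_dimensions(704, 480, 161)
    frames = pipe(prompt=prompt, width=w, height=h, num_frames=f,
                  num_inference_steps=50).frames[0]
    export_to_video(frames, out_path, fps=24)

# Example (GPU required):
# generate("A calico cat stretching on a sunlit windowsill")
```

The ComfyUI route uses the ComfyUI-LTXVideo node pack instead, loading the same checkpoints through graph-based workflows rather than a Python script.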
Does LTX support training or LoRA fine-tuning?
Yes. Lightricks provides the LTX-Video-Trainer repository, including full fine-tuning and LoRA workflows for control and effect models.
What should I know about licensing and commercial use?
The LTX-Video code repository is Apache-2.0. Model checkpoint usage can have separate terms; the README history also notes an OpenRail-M commercial-use license update for v0.9.5 checkpoints.