Diffusers documentation


LTXVideoTransformer3DModel

A Diffusion Transformer model for 3D video data, introduced by Lightricks in LTX-Video.

The model can be loaded with the following code snippet.

import torch
from diffusers import LTXVideoTransformer3DModel

transformer = LTXVideoTransformer3DModel.from_pretrained("Lightricks/LTX-Video", subfolder="transformer", torch_dtype=torch.bfloat16).to("cuda")
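
In practice the transformer is consumed by LTXPipeline rather than called directly. A minimal end-to-end sketch; the prompt, resolution, frame count, and step count below are illustrative choices, not values prescribed by this page:

import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

pipe = LTXPipeline.from_pretrained("Lightricks/LTX-Video", torch_dtype=torch.bfloat16)
pipe.to("cuda")

# Illustrative generation settings; tune for your hardware and use case.
video = pipe(
    prompt="A woman walks along a beach at sunset",
    width=704,
    height=480,
    num_frames=161,
    num_inference_steps=50,
).frames[0]
export_to_video(video, "output.mp4", fps=24)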

LTXVideoTransformer3DModel

class diffusers.LTXVideoTransformer3DModel

( in_channels: int = 128 out_channels: int = 128 patch_size: int = 1 patch_size_t: int = 1 num_attention_heads: int = 32 attention_head_dim: int = 64 cross_attention_dim: int = 2048 num_layers: int = 28 activation_fn: str = 'gelu-approximate' qk_norm: str = 'rms_norm_across_heads' norm_elementwise_affine: bool = False norm_eps: float = 1e-06 caption_channels: int = 4096 attention_bias: bool = True attention_out_bias: bool = True )

Parameters

  • in_channels (int, defaults to 128) — The number of channels in the input.
  • out_channels (int, defaults to 128) — The number of channels in the output.
  • patch_size (int, defaults to 1) — The size of the spatial patches to use in the patch embedding layer.
  • patch_size_t (int, defaults to 1) — The size of the temporal patches to use in the patch embedding layer.
  • num_attention_heads (int, defaults to 32) — The number of heads to use for multi-head attention.
  • attention_head_dim (int, defaults to 64) — The number of channels in each head.
  • cross_attention_dim (int, defaults to 2048) — The number of channels for cross attention heads.
  • num_layers (int, defaults to 28) — The number of layers of Transformer blocks to use.
  • activation_fn (str, defaults to "gelu-approximate") — Activation function to use in feed-forward.
  • qk_norm (str, defaults to "rms_norm_across_heads") — The normalization layer to use for query and key projections.
  • norm_elementwise_affine (bool, defaults to False) — Whether normalization layers use learnable elementwise affine parameters.
  • norm_eps (float, defaults to 1e-06) — The epsilon value used in normalization layers.
  • caption_channels (int, defaults to 4096) — The number of channels in the caption (text) embeddings.
  • attention_bias (bool, defaults to True) — Whether to use a bias in the attention query, key, and value projections.
  • attention_out_bias (bool, defaults to True) — Whether to use a bias in the attention output projection.

A Transformer model for video-like data used in LTX.
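
The model can also be instantiated from scratch with the constructor arguments above. A minimal sketch as a quick sanity check; the small config values below are illustrative only, while the defaults listed above correspond to the released checkpoint:

from diffusers import LTXVideoTransformer3DModel

# Tiny illustrative config for fast instantiation; not the released model's shape.
model = LTXVideoTransformer3DModel(
    in_channels=128,
    out_channels=128,
    num_attention_heads=4,
    attention_head_dim=32,
    cross_attention_dim=128,
    num_layers=2,
    caption_channels=256,
)
print(sum(p.numel() for p in model.parameters()))  # total parameter count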

Transformer2DModelOutput

class diffusers.models.modeling_outputs.Transformer2DModelOutput

( sample: torch.Tensor )

Parameters

  • sample (torch.Tensor of shape (batch_size, num_channels, height, width), or (batch_size, num_vector_embeds - 1, num_latent_pixels) if Transformer2DModel is discrete) — The hidden states output conditioned on the encoder_hidden_states input. If discrete, returns probability distributions for the unnoised latent pixels.

The output of Transformer2DModel.
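
Transformer2DModelOutput is a plain output dataclass: when a transformer's forward is called with return_dict=True (the default), the prediction is read from its sample field. A minimal sketch of the dataclass itself; the tensor shape is illustrative:

import torch
from diffusers.models.modeling_outputs import Transformer2DModelOutput

out = Transformer2DModelOutput(sample=torch.randn(1, 128, 32, 32))
print(out.sample.shape)  # torch.Size([1, 128, 32, 32])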
