FP8 quantized version of AuraFlow v0.3.

All linear weights of the flow transformer were simply cast to `torch.float8_e4m3fn`, except for `t_embedder`, `final_linear`, and `modF`.
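For reference, a minimal sketch of how such a cast could be performed with PyTorch and safetensors. Only the dtype and the excluded module names (`t_embedder`, `final_linear`, `modF`) come from the description above; the checkpoint file names and the exact key-matching logic are illustrative assumptions and may differ from how this repository was actually produced.

```python
import torch
from safetensors.torch import load_file, save_file

# Modules kept in their original precision (names taken from the description above).
EXCLUDE_SUBSTRINGS = ("t_embedder", "final_linear", "modF")


def cast_linear_weights_to_fp8(state_dict: dict[str, torch.Tensor]) -> dict[str, torch.Tensor]:
    """Cast 2-D linear weight tensors to float8_e4m3fn, skipping excluded modules."""
    out = {}
    for name, tensor in state_dict.items():
        excluded = any(sub in name for sub in EXCLUDE_SUBSTRINGS)
        # Heuristic: only 2-D ".weight" tensors (linear layers) are cast;
        # biases, norms, and excluded modules stay in their original dtype.
        if name.endswith(".weight") and tensor.ndim == 2 and not excluded:
            out[name] = tensor.to(torch.float8_e4m3fn)
        else:
            out[name] = tensor
    return out


if __name__ == "__main__":
    # Hypothetical file names; adjust to the actual checkpoint layout.
    state_dict = load_file("aura_flow_0.3.safetensors")
    save_file(cast_linear_weights_to_fp8(state_dict), "aura_flow_0.3_fp8.safetensors")
```

Note that this is a plain dtype cast, not a calibrated quantization scheme, so downstream code is expected to upcast the fp8 weights back to a compute dtype (e.g. bfloat16) at load or inference time.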
Model tree for p1atdev/AuraFlow-v0.3-fp8

Base model: fal/AuraFlow-v0.3