CagliostroLab
Models
Animagine XL 3.0
Animagine XL 3.0 is the latest version of our sophisticated open-source anime text-to-image model, building upon the capabilities of its predecessor, Animagine XL 2.0. Developed on top of Stable Diffusion XL, this iteration delivers superior image generation with notable improvements in hand anatomy, efficient tag ordering, and enhanced knowledge of anime concepts. Unlike the previous iteration, we focused on making the model learn concepts rather than aesthetics.
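For readers who want to try the model, the following is a minimal usage sketch with the Hugging Face diffusers library. The repo id, tag-style prompt, and generation settings shown here are illustrative assumptions, not an official reference.

```python
# Minimal sketch: generating an image with Animagine XL 3.0 via diffusers.
# Repo id, prompt format, and settings below are assumptions for illustration.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-3.0",  # assumed Hugging Face repo id
    torch_dtype=torch.float16,
)
pipe.to("cuda")

# Danbooru-style tags, ordered from subject tags to detail tags.
prompt = "1girl, solo, upper body, looking at viewer, outdoors, cherry blossoms"
negative_prompt = "lowres, bad anatomy, bad hands, worst quality"

image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=832,
    height=1216,
    guidance_scale=7.0,
    num_inference_steps=28,
).images[0]
image.save("sample.png")
```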
Animagine XL 2.0 (Legacy)
Animagine XL 2.0 is an advanced latent text-to-image diffusion model designed to create high-resolution, detailed anime images. It's fine-tuned from Stable Diffusion XL 1.0 using a high-quality anime-style image dataset. This model, an upgrade from Animagine XL 1.0, excels in capturing the diverse and distinct styles of anime art, offering improved image quality and aesthetics.
About CagliostroLab
Linaqruf
(Leader)
Bell-Fi
(Chief, Dev)
Damar Jati
(Dev)
Asahina
(Chief, Dev)
Sugeng.cpp
(Dev)
NekoFi
(Dev)
Scipius2121
(Dataset Curator)
Raelina
(Dev)
NnA_KanA
(Dev)
News
Animagine XL 3.0 Released
2024-01-10
Two months ago, we announced Animagine XL 2.0. Today, we are happy to introduce Animagine XL 3.0, the next iteration of our opinionated open-source anime text-to-image model based on Stable Diffusion XL. Building on the previous iteration, V3 has been developed and refined with the goal of becoming the best open anime image generation model.
Compared to the previous version, it has broader knowledge and better concept and prompt understanding, and it generates significantly better hand anatomy.