Finegrain Blog

Faster diffusion in Refiners with LCM and SDXL Lightning

Refiners 0.4 adds support for Latent Consistency Models and SDXL Lightning, two distillation-based approaches for generating images with Stable Diffusion XL in just a few steps. Both can also be used as LoRAs.
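
As a rough sketch of what "used as LoRAs" means here (not code from the post), the snippet below attaches a distilled LoRA to SDXL through Refiners' LoRA tooling. The module paths, the names `StableDiffusion_XL` and `SDLoraManager`, and the checkpoint filename are assumptions from memory; verify them against the Refiners documentation.

```python
# Hedged sketch: loading an LCM-style distilled LoRA into SDXL with Refiners.
# Class and module names (StableDiffusion_XL, SDLoraManager, load_from_safetensors)
# and the checkpoint filename are assumptions to check against the Refiners docs.
import torch

from refiners.fluxion.utils import load_from_safetensors
from refiners.foundationals.latent_diffusion import StableDiffusion_XL
from refiners.foundationals.latent_diffusion.lora import SDLoraManager

# Instantiate SDXL (loading the converted base weights is elided here).
sdxl = StableDiffusion_XL(device="cuda", dtype=torch.float16)

# Attach the distilled LoRA so the denoising loop can run in a few steps.
manager = SDLoraManager(sdxl)
manager.add_loras("lcm", load_from_safetensors("lcm_sdxl_lora.safetensors"), scale=1.0)
```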

Implementing Style Aligned in Refiners

Style Aligned is a novel “optimization-free” method, published by Google Research in December 2023, designed to enforce style consistency across images generated in the same batch with Stable Diffusion-based models. This post presents the key principles of the method, how it was implemented in the official Google Research repository, and how we implemented it in Refiners.

Simplifying AI Code with Refiners

Welcome to the world of Refiners, where turning complex AI model architectures into clear and concise code is a reality. Refiners empowers machine learning engineers to build models with ease, making the intimidating forests of conditional statements and parameters a thing of the past. This intuitive framework introduces Chains and Context to streamline your code flow, allowing you to focus on unleashing your creativity without getting bogged down in tedious coding.
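
For a taste of what this looks like in practice, here is a minimal, hypothetical sketch of a model declared as a Chain; the layer choices and sizes are purely illustrative, and the exact API should be checked against the Refiners documentation.

```python
# Minimal sketch of the Chain idea in Refiners (illustrative, not from the post).
# The model is declared as a composition of layers rather than a forward()
# full of conditional branches.
import torch
import refiners.fluxion.layers as fl


class EmbeddingProjector(fl.Chain):  # hypothetical example model
    def __init__(self) -> None:
        super().__init__(
            fl.Linear(in_features=32, out_features=64),
            fl.SiLU(),
            fl.Linear(in_features=64, out_features=32),
        )


projector = EmbeddingProjector()
output = projector(torch.randn(1, 32))  # a Chain is callable like any torch module
print(output.shape)  # torch.Size([1, 32])
```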

Level Up Stable Diffusion with IP-Adapter

ControlNet (which received the best paper prize at ICCV 2023 👏) and T2I-Adapters are game changers for Stable Diffusion practitioners, and not without reason:

- They add a super effective level of control which mitigates hairy prompt engineering.
- They have been designed as “adapters”, i.e. lightweight, composable and cheap units (a single copy of the base model is needed).

However, a good dose of prompt engineering is still required to infuse a specific style.