Finegrain Tech Blog
Using the Refiners library, we have recreated the Clarity AI Upscaler, combining advanced AI techniques like ESRGAN, MultiDiffusion, ControlNet Tile, and custom LoRAs. A standalone Python version of the Clarity AI Upscaler has long been requested, and we are excited to share our results with you.
Refiners 0.4 adds support for Latent Consistency Models and SDXL Lightning, two approaches based on distillation to generate images using Stable Diffusion XL in just a few steps. They can also both be used as LoRAs.
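To give a feel for what few-step, distillation-based sampling looks like in practice, here is a minimal sketch using the Hugging Face diffusers library rather than Refiners' own API; the model IDs, LoRA repository, and step/guidance settings are illustrative assumptions, not an excerpt from the post.

```python
import torch
from diffusers import StableDiffusionXLPipeline, LCMScheduler

# Load a standard SDXL base pipeline (illustrative model ID).
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# Apply the distilled consistency model as a LoRA and swap in the LCM scheduler.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")

# Few-step sampling: a handful of steps and low guidance instead of ~30-50 steps.
image = pipe(
    "a cinematic photo of a lighthouse at dusk",
    num_inference_steps=4,
    guidance_scale=1.0,
).images[0]
image.save("lighthouse.png")
```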
Style Aligned is a novel “optimization-free” method, published by Google Research in December 2023, designed to enforce style consistency across images generated in the same batch with Stable Diffusion-based models. This post presents the key principles of this method, how it was implemented in the official Google Research repository, and how we implemented it in Refiners.
Welcome to the world of Refiners, where transforming complex AI model structures into clear and concise code is a reality. Refiners empowers machine learning engineers to build models with ease, making the intimidating forests of conditional statements and parameters a thing of the past. This intuitive framework introduces Chains and Context to streamline your code flow, allowing you to focus on unleashing your creativity without worrying about tedious coding.
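As a taste of the Chain idiom, here is a minimal sketch assuming the fl.Chain, fl.Linear, and fl.ReLU names shown in the Refiners documentation; the exact import path and layer signatures are assumptions, not a verbatim excerpt from the library.

```python
import torch
import refiners.fluxion.layers as fl  # assumed import path from the Refiners docs

# A tiny MLP declared as a Chain: layers are composed in declaration order,
# with no explicit forward() method to write.
class TinyMLP(fl.Chain):
    def __init__(self) -> None:
        super().__init__(
            fl.Linear(in_features=16, out_features=32),  # assumed signature
            fl.ReLU(),
            fl.Linear(in_features=32, out_features=1),
        )

model = TinyMLP()
y = model(torch.randn(4, 16))  # a Chain is callable like any torch module
```

Because the model is just a declarative composition, adapters can later be injected into or removed from the Chain without rewriting its control flow.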
ControlNet (which received a best paper award at ICCV 2023 👏) and T2I-Adapters are game changers for Stable Diffusion practitioners. And not without reason:
- They add a super-effective level of control that mitigates hairy prompt engineering.
- They have been designed as “adapters”, i.e. lightweight, composable and cheap units (a single copy of the base model is needed).

However, a good dose of prompt engineering is still required to infuse some specific style.
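To ground the “adapter” idea above, here is a minimal ControlNet sketch using the Hugging Face diffusers library rather than Refiners' adapter API; the model IDs and the precomputed edge map are illustrative assumptions.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# A lightweight ControlNet is attached to a single copy of the base model.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative base model
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The condition image (a precomputed Canny edge map, hypothetical local file)
# steers the layout, so the prompt no longer has to describe every structural detail.
edges = load_image("canny_edges.png")
image = pipe(
    "a futuristic house, photorealistic",
    image=edges,
    num_inference_steps=30,
).images[0]
image.save("controlnet_result.png")
```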