Style Aligned is a novel “optimization-free” method, published by Google Research in December 2023, that enforces style consistency across images generated in the same batch with Stable Diffusion-based models. This post presents the key principles of the method, how it is implemented in the official Google Research repository, and how we implemented it in Refiners.
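At its core, Style Aligned makes every image in the batch share attention with a reference image, so the reference's style propagates to all generations. The sketch below illustrates that idea in plain numpy: each image's queries attend to its own keys/values concatenated with those of the batch's first image. Names, shapes, and the choice of index 0 as reference are assumptions for illustration, not the actual Style Aligned or Refiners implementation (which also involves AdaIN normalization and operates inside the diffusion model's self-attention layers).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def shared_attention(q, k, v):
    """Toy sketch of shared self-attention: every image in the batch
    attends to its own keys/values concatenated with those of a
    reference image (here, batch index 0).

    q, k, v: (batch, tokens, dim) arrays -- hypothetical shapes."""
    batch, tokens, dim = q.shape
    k_ref, v_ref = k[0], v[0]  # reference image's keys/values
    out = np.empty_like(q)
    for i in range(batch):
        # concatenate own and reference keys/values along the token axis
        k_i = np.concatenate([k[i], k_ref], axis=0)
        v_i = np.concatenate([v[i], v_ref], axis=0)
        scores = q[i] @ k_i.T / np.sqrt(dim)
        out[i] = softmax(scores, axis=-1) @ v_i
    return out
```

Note that for the reference image itself, the keys/values are simply duplicated, so its output is unchanged with respect to plain self-attention; only the other images of the batch are pulled toward the reference's style.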
Welcome to the world of Refiners, where transforming complex AI model structures into clear and concise models is a reality. Refiners empowers machine learning engineers to build models with ease, making intimidating forests of conditional statements and parameters a thing of the past. This intuitive framework introduces Chains and Context to streamline your code flow, letting you focus on unleashing your creativity instead of tedious coding.
ControlNet (which received the best paper award at ICCV 2023 👏) and T2I-Adapters are game changers for Stable Diffusion practitioners, and not without reason:

- They add a highly effective level of control, which mitigates hairy prompt engineering.
- They were designed as “adapters”, i.e. lightweight, composable and cheap units (a single copy of the base model is needed).

However, a good dose of prompt engineering is still required to infuse a specific style.