[CVPR 2025] ArtiFade: Learning to Generate High-quality Subject from Blemished Images

The University of Hong Kong  
ArtiFade teaser.

We introduce ArtiFade, the first model to tackle blemished subject-driven generation by adapting vanilla subject-driven methods (e.g., Textual Inversion and DreamBooth) to effectively extract subject-specific information from blemished training data.

Method


Overview of ArtiFade. The left-hand side shows artifact rectification training, which iteratively computes a reconstruction loss between an unblemished image and the image reconstructed from its blemished embedding. The right-hand side shows the inference stage, where ArtiFade is tested on unseen blemished images. To avoid ambiguity, we (1) simplify the training of Textual Inversion into an input-output form, and (2) use "fine-tuning" and "inference" to refer to the fine-tuning stage of ArtiFade and the use of ArtiFade for subject-driven generation, respectively.
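
The training procedure in the figure can be summarized as the minimal PyTorch sketch below. It is illustrative only: the names ArtiFadeGenerator and invert_to_embedding are hypothetical placeholders rather than the authors' code, a pixel-space MSE stands in for the actual diffusion reconstruction objective, and the blemished images are simulated with random noise instead of the paper's artifact types.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ArtiFadeGenerator(nn.Module):
    # Placeholder for the generator whose parameters are fine-tuned during
    # artifact rectification training (hypothetical, not the authors' code).
    def __init__(self, embed_dim=768, channels=3, size=64):
        super().__init__()
        self.channels, self.size = channels, size
        self.decode = nn.Linear(embed_dim, channels * size * size)

    def forward(self, subject_embedding):
        # Map a subject embedding to an image-shaped reconstruction.
        x = self.decode(subject_embedding)
        return x.view(-1, self.channels, self.size, self.size)

def invert_to_embedding(images, embed_dim=768):
    # Stand-in for Textual Inversion run on blemished images; here it simply
    # returns a random embedding for illustration.
    return torch.randn(images.size(0), embed_dim)

model = ArtiFadeGenerator()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

for step in range(1000):
    # One synthetic pair: an unblemished target and its blemished counterpart.
    clean = torch.rand(1, 3, 64, 64)                   # unblemished image
    blemished = clean + 0.3 * torch.randn_like(clean)  # simulated artifacts

    # Embed the blemished image and reconstruct an image from that embedding.
    embedding = invert_to_embedding(blemished)
    reconstruction = model(embedding)

    # Reconstruction loss against the unblemished image encourages the model
    # to discard artifact information while preserving subject identity.
    loss = F.mse_loss(reconstruction, clean)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

At inference time, the same fine-tuned model would receive embeddings inverted from unseen blemished images, matching the right-hand side of the figure.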

Results

Quantitative Comparison

We conduct both in-distribution and out-of-distribution quantitative evaluations of our method and compare it to Textual Inversion with blemished embeddings.



ArtiFade with Textual Inversion - Visible artifacts



ArtiFade with DreamBooth - Visible artifacts



ArtiFade with DreamBooth - Invisible artifacts



Applications


More Qualitative Comparisons



BibTeX

If you find this project useful for your research, please cite the following:

@inproceedings{yang2025artifade,
  title={ArtiFade: Learning to Generate High-quality Subject from Blemished Images},
  author={Yang, Shuya and Hao, Shaozhe and Cao, Yukang and Wong, Kwan-Yee K},
  booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
  pages={13167--13177},
  year={2025}
}

This page was adapted from this source code.