Greatness in Simplicity: Unified Self-cycle Consistency for Parser-free Virtual Try-on

Thirty-seventh Conference on Neural Information Processing Systems
🔥(NeurIPS 2023)🔥
Wuhan University of Technology, Sanya Science and Education Innovation Park,
Wuhan Textile University, Shanghai AI Laboratory



Abstract

Image-based virtual try-on remains challenging, primarily due to the inherent complexity of modeling non-rigid garment deformation and the strong entanglement of clothing features with the human body.

Recent groundbreaking formulations, such as inpainting, cycle consistency, and knowledge distillation, have enabled self-supervised generation of try-on images. However, these paradigms must disentangle garment features from human-body features through auxiliary tasks, such as leveraging 'teacher knowledge' or dual generators.

The potential presence of irresponsible prior knowledge in the auxiliary task can become a significant bottleneck for the main generator (e.g., the 'student model') in the downstream task. Moreover, existing garment deformation methods cannot perceive the real-world correlation between the garment and the human body, leading to unrealistic alignment effects.

To tackle these limitations, we present a new network for parser-free virtual try-on called unified self-cycle consistency (USC-PFN), which enables robust translation between different garments using just a single generator, faithfully replicating non-rigid geometric deformation of garments in real-life scenarios. Specifically, we first propose a self-cycle consistency architecture with a circular mode. It utilizes real unpaired garment-person images exclusively as input for training, effectively eliminating the impact of irresponsible prior knowledge at the model input end. Additionally, we formulate a Markov Random Field to simulate a more natural and realistic garment deformation. Furthermore, USC-PFN can leverage a general generator for self-supervised cycle training.
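To make the circular mode concrete, the sketch below shows one self-cycle training step with a single generator: the generator first dresses the person in an unpaired garment, then uses the synthesized result to translate back to the original garment, so the reconstruction can be supervised against the real photograph. This is a minimal illustration under our own assumptions, not the authors' implementation; the `Generator` architecture, the bare L1 loss, and the names `cycle_step`, `c_orig`, `c_new` are hypothetical stand-ins.

```python
# Minimal sketch of one self-cycle training step with a single generator.
# Assumptions (not from the paper's code): any image-to-image generator
# works here; the MRF-based garment deformation is omitted and would
# normally warp the garment before it is passed to the generator.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Hypothetical try-on generator: (person image, garment image) -> try-on image."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, person, garment):
        return self.net(torch.cat([person, garment], dim=1))

def cycle_step(G, person, garment_orig, garment_new, opt):
    """One circular-mode update: person wears garment_new, then cycles back."""
    fake = G(person, garment_new)    # p(c) -> p(c'): try on the unpaired garment
    recon = G(fake, garment_orig)    # p(c') -> p(c): close the cycle
    loss = F.l1_loss(recon, person)  # supervise against the real photograph
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

G = Generator()
opt = torch.optim.Adam(G.parameters(), lr=2e-4)
person = torch.rand(1, 3, 256, 192)  # reference person wearing garment c
c_orig = torch.rand(1, 3, 256, 192)  # garment c (the one the person wears)
c_new = torch.rand(1, 3, 256, 192)   # unpaired target garment c'
print(cycle_step(G, person, c_orig, c_new, opt))
```

In the full method, perceptual and adversarial terms, and the MRF-based deformation of the garment before generation, would take the place of the bare L1 reconstruction used in this sketch.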

Experiments demonstrate that our method achieves state-of-the-art performance on a popular virtual try-on benchmark.


Paper and Supplementary Material

[Paper] [Code]

NeurIPS, 2023.
Chenghu Du, Junyin Wang, Shuqing Liu, Shengwu Xiong*.
"USC-PFN: Unified Self-cycle Consistency for Parser-free
Virtual Try-on"


Experiments

The test pairs and results on the VITON dataset are shown below. From left to right: the reference person, the target clothes, and the try-on results of four baseline methods, CP-VITON+ (CVPRW 2020), ACGPN (CVPR 2020), DCTON (CVPR 2021), and RT-VITON (CVPR 2022), followed by our USC-PFN.


Results





BibTeX


    @inproceedings{du2023greatness,
        title={Greatness in Simplicity: Unified Self-Cycle Consistency for Parser-Free Virtual Try-On},
        author={Du, Chenghu and Wang, Junyin and Liu, Shuqing and Xiong, Shengwu},
        booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
        year={2023},
        pages={1--12}
    }