An Optimized Generative Adversarial Network Model for Virtual Try-On: Enhancing Image Realism with Particle Swarm Optimization Algorithm
DOI:
https://doi.org/10.22399/ijcesen.3049

Keywords:
Deep Learning, GAN, Virtual Try-On, Particle Swarm Optimization (PSO)

Abstract
Traditional methods for trying on clothing and other wearable products face challenges such as high cost, long turnaround time, and poor fit matching. Virtual try-on has therefore become increasingly important: it reduces wasted time and offers customers a convenient, accurate way to make well-matched choices when purchasing. The main purpose of virtual try-on (image simulation) is to help customers check the size, fit, and overall appearance of wearable products in a digital environment. This paper presents a deep-learning model for virtual try-on that combines a deep generative network with Particle Swarm Optimization (PSO). In this method, the dataset is first split into training and test sets; a Generative Adversarial Network (GAN) is trained on the training set, and the trained network is then passed to the PSO algorithm, which refines the neuron weights. The results show that the proposed approach improves the GAN by relying on the particle swarm meta-heuristic; meta-heuristic algorithms have low complexity and perform well at locating optimal points. The proposed approach can also significantly reduce cost and time and provide a better match between garments and the user's body shape, so the system can serve as an effective tool for personalized and industrial clothing design.
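The abstract describes the pipeline only at a high level, and the paper's code is not included here. The following is a minimal sketch of the described idea, assuming a PyTorch-style setup: the trained generator's weights are flattened into PSO particles and scored by an adversarial fitness on a fixed noise batch. All names (`TinyGenerator`, `critic`, the `fitness` definition) and the hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Sketch: refining trained GAN generator weights with standard PSO.
# TinyGenerator and `critic` are hypothetical stand-ins for the paper's
# trained try-on generator and frozen discriminator.
import torch
import torch.nn as nn
from torch.nn.utils import parameters_to_vector, vector_to_parameters

torch.manual_seed(0)

class TinyGenerator(nn.Module):
    """Toy generator standing in for the GAN-trained try-on generator."""
    def __init__(self, z_dim=16, out_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 32), nn.ReLU(), nn.Linear(32, out_dim), nn.Tanh()
        )
    def forward(self, z):
        return self.net(z)

def fitness(weights, gen, critic, z):
    """Lower is better: negative mean critic score on generated samples."""
    vector_to_parameters(weights, gen.parameters())
    with torch.no_grad():
        return -critic(gen(z)).mean().item()

def pso_refine(gen, critic, n_particles=20, iters=50,
               w=0.7, c1=1.5, c2=1.5, sigma=0.05):
    """PSO over the flattened generator weight vector, initialized in a
    small neighborhood of the GAN-trained weights (sigma controls spread)."""
    base = parameters_to_vector(gen.parameters()).detach()
    dim = base.numel()
    z = torch.randn(64, 16)                       # fixed evaluation batch
    pos = base + sigma * torch.randn(n_particles, dim)
    vel = torch.zeros(n_particles, dim)
    pbest = pos.clone()                           # per-particle best
    pbest_f = torch.tensor([fitness(p, gen, critic, z) for p in pos])
    g = pbest_f.argmin()
    gbest, gbest_f = pbest[g].clone(), pbest_f[g].item()  # swarm best
    for _ in range(iters):
        r1, r2 = torch.rand(n_particles, 1), torch.rand(n_particles, 1)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        f = torch.tensor([fitness(p, gen, critic, z) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        if f.min() < gbest_f:
            gbest, gbest_f = pos[f.argmin()].clone(), f.min().item()
    vector_to_parameters(gbest, gen.parameters())  # load best weights back
    return gen, gbest_f

gen = TinyGenerator()                       # stands in for the trained GAN
critic = nn.Sequential(nn.Linear(64, 1))    # frozen discriminator score
gen, best = pso_refine(gen, critic)
print(f"best PSO fitness: {best:.4f}")
```

Flattening all weights into one particle vector keeps the PSO update generic; seeding the swarm near the GAN-trained weights (rather than at random) reflects the abstract's "trained network enters the optimization algorithm" ordering, so PSO acts as a local refinement step rather than training from scratch.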