[1] Wu X, Hu Z, Sheng L, et al. StyleFormer: Real-time arbitrary style transfer via parametric style composition[C]. Montreal: Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021: 14618-14627.
[2] Lin T, Ma Z, Li F, et al. Drafting and revision: Laplacian pyramid network for fast high-quality artistic style transfer[C]. Online: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021: 5141-5150.
[3] An J, Huang S, Song Y, et al. ArtFlow: Unbiased image style transfer via reversible neural flows[C]. Online: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021: 862-871.
[4] Gatys L A, Ecker A S, Bethge M. Image style transfer using convolutional neural networks[C]. Las Vegas: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016: 2414-2423.
[5] Johnson J, Alahi A, Fei-Fei L. Perceptual losses for real-time style transfer and super-resolution[C]. Amsterdam: Proceedings of the European Conference on Computer Vision, 2016: 694-711.
[6] Park D Y, Lee K H. Arbitrary style transfer with style attentional networks[C]. Long Beach: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019: 5880-5888.
[7] Chen H, Wang Z, Zhang H, et al. Artistic style transfer with internal-external learning and contrastive learning[J]. Advances in Neural Information Processing Systems, 2021, 34: 26561-26573.
[8] Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets[C]. Montreal: Proceedings of the International Conference on Neural Information Processing Systems, 2014: 2672-2680.
[9] |
佟博, 刘韬, 刘畅. 基于生成对抗网络的轴承失效信号生成研究[J]. 电子科技, 2020, 33(4):28-34.
|
|
Tong Bo, Liu Tao, Liu Chang. Research on bearing failure signal generation based on generative adversarial networks[J]. Electronic Science and Technology, 2020, 33(4):28-34.
|
[10] Zhu J Y, Park T, Isola P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks[C]. Venice: Proceedings of the IEEE International Conference on Computer Vision, 2017: 2223-2232.
[11] Li R, Wu C H, Liu S, et al. SDP-GAN: Saliency detail preservation generative adversarial networks for high perceptual quality style transfer[J]. IEEE Transactions on Image Processing, 2020, 30(7): 374-385.
[12] Zhao Y, Wu R, Dong H. Unpaired image-to-image translation using adversarial consistency loss[C]. Glasgow: Proceedings of the European Conference on Computer Vision, 2020: 800-815.
[13] Xu W, Long C, Wang R, et al. DRB-GAN: A dynamic ResBlock generative adversarial network for artistic style transfer[C]. Montreal: Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021: 6383-6392.
[14] Han K, Wang Y, Tian Q, et al. GhostNet: More features from cheap operations[C]. Seattle: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 1577-1586.
[15] Sandler M, Howard A, Zhu M, et al. MobileNetV2: Inverted residuals and linear bottlenecks[C]. Salt Lake City: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018: 4510-4520.
[16] Isola P, Zhu J Y, Zhou T, et al. Image-to-image translation with conditional adversarial networks[C]. Honolulu: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017: 1125-1134.
[17] Chen J, Liu G, Chen X. AnimeGAN: A novel lightweight GAN for photo animation[C]. Guangzhou: Proceedings of the International Symposium on Intelligence Computation and Applications, 2019: 242-256.
[18] Huang X, Belongie S. Arbitrary style transfer in real-time with adaptive instance normalization[C]. Venice: Proceedings of the IEEE International Conference on Computer Vision, 2017: 1501-1510.
[19] Yi Z L, Zhang H, Tan P, et al. DualGAN: Unsupervised dual learning for image-to-image translation[C]. Venice: Proceedings of the IEEE International Conference on Computer Vision, 2017: 2849-2857.
[20] Deng Y, Tang F, Dong W, et al. Arbitrary style transfer via multi-adaptation network[C]. Seattle: Proceedings of the 28th ACM International Conference on Multimedia, 2020: 2719-2727.
[21] Liu S, Lin T, He D, et al. AdaAttN: Revisit attention mechanism in arbitrary neural style transfer[C]. Montreal: Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021: 6649-6658.