[1] SU S, DELBRACIO M, WANG J, et al. Deep Video Deblurring for Hand-held Cameras[C]//Proceedings of the 2017 30th IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2017: 237-246.
[2] CHO S, WANG J, LEE S. Video Deblurring for Hand-held Cameras Using Patch-based Synthesis[J]. ACM Transactions on Graphics, 2012, 31(4): 64.
[3] DELBRACIO M, SAPIRO G. Hand-held Video Deblurring via Efficient Fourier Aggregation[J]. IEEE Transactions on Computational Imaging, 2015, 1(4): 270-283.
doi: 10.1109/TCI.2015.2501245
[4] TAO X, GAO H, SHEN X, et al. Scale-recurrent Network for Deep Image Deblurring[C]//Proceedings of the 2018 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Washington: IEEE Computer Society, 2018: 8174-8182.
[5] KUPYN O, BUDZAN V, MYKHAILYCH M, et al. DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks[C]//Proceedings of the 2018 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Washington: IEEE Computer Society, 2018: 8183-8192.
[6] WU M, LI W, GONG W. Two-frame Convolutional Neural Network for Blind Motion Image Deblurring[J]. Journal of Computer-Aided Design and Computer Graphics, 2018, 30(12): 2327-2334.
doi: 10.3724/SP.J.1089.2018.17163
[7] ZHANG K, LUO W, ZHONG Y, et al. Adversarial Spatio-temporal Learning for Video Deblurring[J]. IEEE Transactions on Image Processing, 2019, 28(1): 291-301.
doi: 10.1109/TIP.2018.2867733
pmid: 30176588
[8] GOODFELLOW I J, POUGET-ABADIE J, MIRZA M, et al. Generative Adversarial Nets[C]//Advances in Neural Information Processing Systems 27: 3. Montreal: Neural Information Processing Systems Foundation, 2014: 2672-2680.
[9] ISOLA P, ZHU J Y, ZHOU T, et al. Image-to-image Translation with Conditional Adversarial Networks[C]//Proceedings of the 2017 30th IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2017: 5967-5976.
[10] LI C, WAND M. Precomputed Real-time Texture Synthesis with Markovian Generative Adversarial Networks[C]//Lecture Notes in Computer Science: 9907. Heidelberg: Springer Verlag, 2016: 702-716.
[11] ARJOVSKY M, CHINTALA S, BOTTOU L. Wasserstein GAN[J/OL]. [2019-05-16]. https://arxiv.org/abs/1701.07875.
[12] PATHAK D, KRAHENBUHL P, DONAHUE J, et al. Context Encoders: Feature Learning by Inpainting[C]//Proceedings of the 2016 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Washington: IEEE Computer Society, 2016: 2536-2544.
[13] JOHNSON J, ALAHI A, FEI-FEI L. Perceptual Losses for Real-time Style Transfer and Super-resolution[C]//Lecture Notes in Computer Science: 9906. Heidelberg: Springer Verlag, 2016: 694-711.
[14] TIELEMAN T, HINTON G. Lecture 6.5-rmsprop: Divide the Gradient by a Running Average of Its Recent Magnitude[J]. Coursera: Neural Networks for Machine Learning, 2012, 4: 26-30.
[15] ZHANG R, ISOLA P, EFROS A A, et al. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric[C]//Proceedings of the 2018 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Washington: IEEE Computer Society, 2018: 586-595.
[16] MOORTHY A K, BOVIK A C. A Two-step Framework for Constructing Blind Image Quality Indices[J]. IEEE Signal Processing Letters, 2010, 17(5): 513-516.
doi: 10.1109/LSP.2010.2043888