Journal of Xidian University ›› 2021, Vol. 48 ›› Issue (6): 151-160. DOI: 10.19665/j.issn1001-2400.2021.06.019
About the author:
LIU Yunrui (1994—), male, M.S. candidate, Xidian University. E-mail:
Supported by:
Received:
2020-09-09
Online:
2021-12-20
Published:
2022-02-24
Abstract:
The SVM-2K model is a multi-view learning algorithm that adopts the non-smooth hinge loss, which makes its solution process comparatively complicated. The least squares support vector machine (LSSVM), a classical support vector machine algorithm built on the smooth least squares loss, is therefore introduced: because it is simple to compute, fast to solve and highly accurate, it has been widely used in research. To speed up training, the least squares idea is introduced into the SVM-2K model. First, the LSSVM-2K model, which applies the least squares loss throughout, is proposed; it replaces the hinge losses in SVM-2K with the least squares loss, so that the quadratic programming used to solve the classical multi-view learning model can be replaced by the solution of a system of linear equations. Second, to investigate how the least squares loss affects the SVM-2K model, two further models that apply the least squares loss to the other two parts of the model, LSSVM-2KI and LSSVM-2KII, are proposed. To verify their effectiveness, the new models are compared under identical conditions with other multi-view learning models, namely SVM+ (in the variants SVM+A and SVM+B), MVMED, RMvLSTSVM and SVM-2K, on the Animals with Attributes (AWA) dataset, the UCI handwritten digits (Digits) dataset and the forest cover type dataset. Experimental results show that the three new models achieve a good classification performance. In particular, the LSSVM-2KI model has an advantage in classification accuracy; the LSSVM-2K model not only classifies well but also has a considerable advantage in computing speed; and the LSSVM-2KII model lies between the two in both classification performance and training time.
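The computational claim in the abstract rests on a standard property of the least squares loss: with squared residuals and equality constraints, the KKT optimality conditions form a linear system, so no quadratic programming solver is needed. As an illustrative sketch only, the following NumPy code trains the standard single-view LSSVM of reference [21] by one linear solve; it is not the LSSVM-2K model of this paper, and the RBF kernel, the function names and the parameters C and gamma are assumptions made for the example.

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=0.5):
    """Gaussian (RBF) kernel matrix between two sample sets."""
    d2 = (np.sum(X1 ** 2, axis=1)[:, None]
          + np.sum(X2 ** 2, axis=1)[None, :]
          - 2.0 * X1 @ X2.T)
    return np.exp(-gamma * d2)

def lssvm_train(X, y, C=1.0, gamma=0.5):
    """Standard single-view LSSVM (Suykens & Vandewalle, 1999), sketch only.

    Solves the KKT linear system
        [ 0        y^T       ] [ b     ]   [ 0 ]
        [ y   Omega + I / C  ] [ alpha ] = [ 1 ],
    where Omega_ij = y_i * y_j * K(x_i, x_j); labels y must be +1/-1.
    """
    n = X.shape[0]
    Omega = np.outer(y, y) * rbf_kernel(X, X, gamma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / C
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)          # one linear solve replaces the QP
    b, alpha = sol[0], sol[1:]
    return alpha, b

def lssvm_predict(X_train, y_train, alpha, b, X_test, gamma=0.5):
    """Decision rule: sign( sum_i alpha_i y_i K(x_i, x) + b )."""
    K = rbf_kernel(X_test, X_train, gamma)
    return np.sign(K @ (alpha * y_train) + b)
```

A toy call such as `alpha, b = lssvm_train(X, y)` followed by `lssvm_predict(X, y, alpha, b, X_new)` reproduces the sign-based decision rule; the two-view models discussed in the paper couple two such predictors, one per view, with additional consistency terms on top of this building block.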
CLC number:
LIU Yunrui, ZHOU Shuisheng. Application of least squares loss in the multi-view learning algorithm[J]. Journal of Xidian University, 2021, 48(6): 151-160.
Table 1  Comparison of experimental results on the AWA dataset (classification accuracy/%, standard deviation in parentheses)

| AWA task | SVM+A | SVM+B | MVMED | RMvLSTSVM | SVM-2K | LSSVM-2KI | LSSVM-2KII | LSSVM-2K |
|---|---|---|---|---|---|---|---|---|
| Chi vs Leo | 80.62(0.54) | 84.62(0.46) | 81.51(0.52) | 83.53(0.55) | 84.60(0.79) | 89.63(0.37) | 84.52(0.38) | 85.84(0.52) |
| Chi vs Rac | 74.46(0.67) | 84.17(0.44) | 79.43(0.71) | 74.66(0.76) | 64.93(0.79) | 86.91(0.54) | 85.53(0.76) | 84.89(0.44) |
| Chi vs Zeb | 83.62(0.45) | 94.60(0.15) | 93.51(0.26) | 95.03(0.17) | 91.61(0.30) | 95.07(0.24) | 95.71(0.22) | 95.92(0.16) |
| Leo vs Rac | 63.37(0.80) | 76.93(0.40) | 56.38(0.22) | 81.84(0.35) | 56.48(0.22) | 78.58(0.60) | 74.54(0.50) | 77.01(0.40) |
| Leo vs Zeb | 78.07(0.42) | 90.66(0.30) | 86.94(0.59) | 92.11(0.38) | 86.49(0.76) | 91.42(0.37) | 90.94(0.43) | 90.68(0.36) |
| Rac vs Zeb | 75.47(0.73) | 89.68(0.50) | 60.37(0.30) | 86.31(0.52) | 60.37(0.30) | 89.08(0.35) | 90.00(0.49) | 91.17(0.29) |
| Average time/s | 0.0379 | 0.0375 | 11.4908 | 0.0272 | 0.2567 | 0.2086 | 0.0682 | 0.0234 |
Table 2  Comparison of experimental results on the Digits dataset (classification accuracy/%, standard deviation in parentheses)

| Digits task | SVM+A | SVM+B | MVMED | RMvLSTSVM | SVM-2K | LSSVM-2KI | LSSVM-2KII | LSSVM-2K |
|---|---|---|---|---|---|---|---|---|
| fac vs fou | 96.55(0.43) | 90.40(0.85) | 95.11(0.71) | 96.80(0.48) | 96.09(0.49) | 96.96(0.62) | 96.16(0.54) | 96.85(0.69) |
| fac vs kar | 95.26(0.31) | 94.67(1.12) | 96.15(0.95) | 96.94(0.58) | 96.99(0.54) | 97.01(0.64) | 97.01(0.49) | 97.06(0.43) |
| fac vs mor | 91.08(1.04) | 85.40(2.22) | 83.46(2.40) | 96.95(0.43) | 86.44(2.63) | 95.36(3.19) | 96.19(0.84) | 95.40(3.16) |
| fac vs pix | 96.02(0.36) | 96.51(0.72) | 96.36(0.58) | 97.21(0.44) | 95.78(0.54) | 97.76(0.30) | 97.24(0.44) | 97.55(0.52) |
| fac vs zer | 95.26(0.31) | 92.89(1.19) | 92.74(0.69) | 97.04(0.45) | 92.68(0.64) | 97.64(0.36) | 96.88(0.33) | 97.56(0.52) |
| fou vs kar | 91.38(0.84) | 94.76(1.04) | 96.39(1.03) | 96.56(0.62) | 96.34(1.03) | 96.39(0.65) | 96.58(0.95) | 96.59(0.93) |
| fou vs mor | 90.31(0.89) | 83.23(2.57) | 85.00(2.23) | 92.33(0.75) | 85.15(2.26) | 88.60(4.78) | 91.79(0.91) | 92.65(1.79) |
| fou vs pix | 90.68(0.56) | 96.51(0.72) | 91.82(0.91) | 96.40(0.66) | 94.49(0.78) | 96.10(0.67) | 95.16(0.75) | 96.03(0.63) |
| fou vs zer | 91.38(0.84) | 92.70(0.94) | 93.09(0.46) | 94.59(0.61) | 94.58(0.74) | 95.11(0.79) | 95.16(0.52) | 95.45(0.80) |
| kar vs mor | 94.75(1.16) | 86.06(2.72) | 84.96(2.24) | 97.66(0.55) | 85.20(2.20) | 93.86(2.05) | 92.85(1.20) | 93.86(2.05) |
| kar vs pix | 94.95(0.92) | 96.51(0.72) | 95.36(0.89) | 96.79(0.69) | 95.36(0.89) | 96.58(0.49) | 95.38(0.53) | 96.51(0.46) |
| kar vs zer | 94.75(1.16) | 93.18(0.92) | 96.60(0.24) | 95.43(0.62) | 96.19(0.41) | 96.25(0.38) | 96.84(0.57) | 96.89(0.55) |
| mor vs pix | 85.79(2.86) | 96.86(0.70) | 87.04(2.18) | 96.31(0.64) | 94.76(0.69) | 96.03(0.92) | 91.73(2.24) | 91.36(0.71) |
| mor vs zer | 85.95(2.87) | 92.94(1.28) | 85.10(2.22) | 94.53(0.79) | 81.85(5.09) | 94.20(0.91) | 92.35(1.71) | 93.31(1.45) |
| pix vs zer | 96.51(0.72) | 93.21(0.89) | 92.26(0.70) | 96.88(0.63) | 92.49(0.67) | 96.39(0.38) | 95.69(0.65) | 97.20(0.33) |
| Average time/s | 0.0231 | 0.0245 | 12.3341 | 0.0195 | 0.2018 | 0.2188 | 0.0677 | 0.0126 |
[1] KANG H, XIA L, YAN F, et al. Diagnosis of Coronavirus Disease 2019 (COVID-19) with Structured Latent Multi-View Representation Learning[J]. IEEE Transactions on Medical Imaging, 2020, 39(8): 2606-2614.
[2] CHEN D R, CHANG Y W, WU H K, et al. Multiview Contouring for Breast Tumor on Magnetic Resonance Imaging[J]. Journal of Digital Imaging, 2019, 32(5): 713-727. DOI: 10.1007/s10278-019-00190-7.
[3] WU H, ZHENG K, SFARRA S, et al. Multiview Learning for Subsurface Defect Detection in Composite Products: A Challenge on Thermographic Data Analysis[J]. IEEE Transactions on Industrial Informatics, 2020, 16(9): 5996-6003.
[4] LI J, ZHANG B, LU G, et al. Generative Multi-view and Multi-feature Learning for Classification[J]. Information Fusion, 2019, 45: 215-226. DOI: 10.1016/j.inffus.2018.02.005.
[5] WANG H, YANG Y, LIU B. GMC: Graph-based Multi-view Clustering[J]. IEEE Transactions on Knowledge and Data Engineering, 2019, 32(6): 1116-1129.
[6] WANG H, YANG Y, ZHANG X, et al. Parallel Multi-view Concept Clustering in Distributed Computing[J]. Neural Computing and Applications, 2020, 32(10): 5621-5631. DOI: 10.1007/s00521-019-04243-4.
[7] DEEPAK P, ANNA J L. Linking and Mining Heterogeneous and Multi-view Data[M]. Cham: Springer Nature Switzerland AG, 2019: 1-27.
[8] XU C, TAO D, XU C. A Survey on Multi-view Learning[J]. Neural Comput, 2013, 23(7): 2031-2038.
[9] VAPNIK V, IZMAILOV R. Learning Using Privileged Information: Similarity Control and Knowledge Transfer[J]. Journal of Machine Learning Research, 2015, 16(1): 2023-2049.
[10] GAMMERMAN A, VOVK V, PAPADOPOULOS H. Statistical Learning and Data Sciences[M]. Cham: Springer International Publishing AG, 2015: 3-32.
[11] SHARMANSKA V, QUADRIANTO N, LAMPERT C H. Learning to Rank Using Privileged Information[C]// Proceedings of the IEEE International Conference on Computer Vision. Piscataway: IEEE, 2013: 825-832.
[12] HE Y, TIAN Y, LIU D. Multi-view Transfer Learning with Privileged Learning Framework[J]. Neurocomputing, 2019, 335: 131-142. DOI: 10.1016/j.neucom.2019.01.019.
[13] CHAO G, SUN S. Consensus and Complementarity Based Maximum Entropy Discrimination for Multi-view Classification[J]. Information Sciences, 2016, 367: 296-310.
[14] SUN S, CHAO G. Multi-View Maximum Entropy Discrimination[C]// Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence. California: AAAI Press, 2013: 1706-1712.
[15] XIE X. Regularized Multi-view Least Squares Twin Support Vector Machines[J]. Applied Intelligence, 2018, 48(9): 3108-3115. DOI: 10.1007/s10489-017-1129-3.
[16] AKAHO S. A Kernel Method for Canonical Correlation Analysis[EB/OL]. [2020-08-28]. DOI: 10.1007/s10489-013-0464-2.
[17] HARDOON D R, SZEDMAK S, SHAWE-TAYLOR J. Canonical Correlation Analysis: An Overview with Application to Learning Methods[J]. Neural Computation, 2004, 16(12): 2639-2664. DOI: 10.1162/0899766042321814.
[18] FARQUHAR J, HARDOON D, MENG H, et al. Two View Learning: SVM-2K, Theory and Practice[J]. Advances in Neural Information Processing Systems, 2005, 18: 355-362.
[19] HUANG C, CHUNG F, WANG S. Multi-view L2-SVM and Its Multi-view Core Vector Machine[J]. Neural Networks, 2015, 75: 110-125. DOI: 10.1016/j.neunet.2015.12.004.
[20] TANG J, TIAN Y, LIU D, et al. Coupling Privileged Kernel Method for Multi-view Learning[J]. Information Sciences, 2019, 481: 110-127. DOI: 10.1016/j.ins.2018.12.058.
[21] SUYKENS J A K, VANDEWALLE J. Least Squares Support Vector Machine Classifiers[J]. Neural Processing Letters, 1999, 9(3): 293-300. DOI: 10.1023/A:1018628609742.
[22] ZHOU S. Sparse LSSVM in Primal Using Cholesky Factorization for Large-Scale Problems[J]. IEEE Transactions on Neural Networks and Learning Systems, 2016, 27(4): 783-795.
[23] STEINWART I, HUSH D, SCOVEL C. Training SVMs without Offset[J]. Journal of Machine Learning Research, 2011, 12(1): 141-202.
[24] SHALEV-SHWARTZ S, BEN-DAVID S. Understanding Machine Learning: From Theory to Algorithms[M]. Cambridge: Cambridge University Press, 2014: 202-215.