[1] ZHANG M L, ZHOU Z H. A Review on Multi-label Learning Algorithms [J]. IEEE Transactions on Knowledge and Data Engineering, 2014, 26(8): 1819-1837.
[2] TSOUMAKAS G, KATAKIS I. Multi-label Classification: an Overview [J]. International Journal of Data Warehousing and Mining, 2007, 3(3): 1-13.
[3] ZHANG J J, FANG M, LI X. Multi-label Learning with Discriminative Features for Each Label [J]. Neurocomputing, 2015, 154: 305-316.
[4] JOLLIFFE I T. Principal Component Analysis [M]. 2nd Edition. New York: Springer, 2002: 1-9.
[5] HASTIE T, TIBSHIRANI R, FRIEDMAN J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction [M]. 2nd Edition. New York: Springer, 2009: 106-113.
[6] YU K, YU S, TRESP V. Multi-label Informed Latent Semantic Indexing [C]//Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2005: 258-265.
[7] DEERWESTER S C, DUMAIS S T, FURNAS G W, et al. Indexing by Latent Semantic Analysis [J]. Journal of the American Society for Information Science, 1990, 41(6): 391-407.
[8] JI S, TANG L, YU S, et al. A Shared-subspace Learning Framework for Multi-label Classification [J]. ACM Transactions on Knowledge Discovery from Data, 2010, 4(2): 1-29.
[9] PARK C H, LEE M. On Applying Linear Discriminant Analysis for Multi-label Problems [J]. Pattern Recognition Letters, 2008, 29(7): 878-887.
[10] YU Y, PEDRYCZ W, MIAO D. Multi-label Classification by Exploiting Label Correlations [J]. Expert Systems with Applications, 2014, 41(6): 2989-3004.
[11] XU J. Multi-label Core Vector Machine with a Zero Label [J]. Pattern Recognition, 2014, 47(7): 2542-2557.
[12] LI P, LI H, WU M. Multi-label Ensemble Based on Variable Pairwise Constraint Projection [J]. Information Sciences, 2013, 222: 269-281.
[13] ENRIQUE SUCAR L, BIELZA C, MORALES E F, et al. Multi-label Classification with Bayesian Network-based Chain Classifiers [J]. Pattern Recognition Letters, 2014, 41(1): 14-22.
[14] WANG H, DING C, HUANG H. Multi-label Linear Discriminant Analysis [C]//Computer Vision - ECCV 2010, Part VI: Lecture Notes in Computer Science 6316. Berlin: Springer, 2010: 126-139.
[15] GRETTON A, BOUSQUET O, SMOLA A, et al. Measuring Statistical Dependence with Hilbert-Schmidt Norms [C]//Algorithmic Learning Theory: Lecture Notes in Artificial Intelligence 3734. Berlin: Springer, 2005: 63-77.
[16] ZHANG Y, ZHOU Z H. Multi-label Dimensionality Reduction via Dependence Maximization [C]//Proceedings of the 23rd AAAI Conference on Artificial Intelligence: Vol. 3. Menlo Park: AAAI Press, 2008: 1503-1505.
[17] SONG L, SMOLA A, GRETTON A, et al. Feature Selection via Dependence Maximization [J]. Journal of Machine Learning Research, 2012, 13: 1393-1434.
[18] CHANG B, KRUGER U, KUSTRA R, et al. Canonical Correlation Analysis Based on Hilbert-Schmidt Independence Criterion and Centered Kernel Target Alignment [C]//Proceedings of the 30th International Conference on Machine Learning: Vol. 2. Princeton: IMLS, 2013: 975-983.
[19] WANG M, SHA F, JORDAN M I. Unsupervised Kernel Dimension Reduction [C]//Advances in Neural Information Processing Systems 23: 24th Annual Conference on Neural Information Processing Systems 2010. Red Hook: Curran Associates, Inc., 2010: 2379-2387.
[20] WANG H, YAN S, XU D, et al. Trace Ratio vs. Ratio Trace for Dimensionality Reduction [C]//Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2007: 108-115.
[21] CHANG C C, LIN C J. LIBSVM: a Library for Support Vector Machines [J]. ACM Transactions on Intelligent Systems and Technology, 2011, 2(3): 1-27.