Built on a nonconvex and smooth loss, the robust support vector machine (RSVM) is insensitive to outliers in classification problems. However, existing algorithms for RSVM are not suitable for large-scale problems, because they must iteratively solve quadratic programming problems, which entails heavy computation and slow convergence. To overcome this drawback, a method with a faster convergence rate is used to solve the RSVM. Then, using the idea of least squares, a generalized exponentially robust LSSVM (ER-LSSVM) model is proposed and solved by the same fast-convergence algorithm. Moreover, the robustness of the ER-LSSVM is interpreted theoretically. Finally, by utilizing a low-rank approximation of the kernel matrix, the sparse RSVM algorithm (SR-SVM) and the sparse ER-LSSVM algorithm (SER-LSSVM) are proposed for handling large-scale problems. Extensive experimental results illustrate that the proposed algorithms outperform related algorithms in terms of convergence speed, test accuracy, and training time.
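The abstract attributes the scalability of SR-SVM and SER-LSSVM to a low-rank approximation of the kernel matrix. As a minimal sketch of that idea (not the paper's specific algorithm), the Nyström method is one standard way to build such an approximation: sample m landmark points, then approximate the full n-by-n kernel matrix from its interactions with the landmarks. The kernel choice, landmark count, and variable names below are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # Pairwise RBF kernel entries: exp(-gamma * ||a - b||^2).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_approx(X, m, gamma=0.5, seed=0):
    # Nystrom low-rank approximation: K ~= C @ pinv(W) @ C.T,
    # where C = K(X, landmarks) and W = K(landmarks, landmarks).
    # Only n*m + m*m kernel entries are formed instead of n*n.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    L = X[idx]                         # m landmark points
    C = rbf_kernel(X, L, gamma)        # n x m cross-kernel
    W = rbf_kernel(L, L, gamma)        # m x m landmark kernel
    return C @ np.linalg.pinv(W) @ C.T

# Illustrative data: 200 points in 5 dimensions, 50 landmarks.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
K = rbf_kernel(X, X)
K_hat = nystrom_approx(X, m=50)
err = np.linalg.norm(K - K_hat) / np.linalg.norm(K)
```

In a sparse solver, one would work with the n-by-m factor `C` (and the small matrix `W`) directly rather than forming `K_hat`, so training cost scales with m rather than n.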