
Author: Habte Tadesse Likassa
Title: Affine Transformation Assisted Robust Regression and Image Recovery
Advisor: Wen-Hsien Fang
Committee Members: Kuen-Tsair Lay, Jenq-Shiou Leu, Yie-Tarng Chen, Cheng-Fu Chou, Shun-Hsyung Chang
Degree: Doctor
Department: Department of Electronic and Computer Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2019
Graduation Academic Year: 107
Language: English
Number of Pages: 108
Keywords: affine transformation, L2,1 norm, robust regression, outliers, sparse errors, low-rank
Abstract:

In this thesis, two robust affine transformation-assisted methods, covering both methodology development and real-world applications, are developed to deal with outliers and heavy sparse noise. Firstly, we present a new robust regression approach for head pose estimation and face reconstruction via affine transformations. To be robust against miscellaneous adverse effects such as occlusions, outliers, and heavy sparse noise, the new algorithm incorporates affine transformations with robust regression for a more faithful low-rank plus sparse image representation, where the low-rank component lies in a union of disjoint subspaces. Consequently, distorted or misaligned images can be rectified by the affine transformations to render more accurate regression outcomes. The search for the optimal variables and affine transformations is cast as a convex optimization problem. To alleviate the computational complexity, the Alternating Direction Method of Multipliers (ADMM) is employed, and a new set of equations is established to update the optimization variables and affine transformations iteratively in a round-robin manner. Moreover, the convergence of these new updating equations is scrutinized as well.

Secondly, we propose a new robust algorithm for image recovery via affine transformations and the L2,1 norm. To be robust against various adverse effects, the new algorithm integrates affine transformations to yield a more accurate low-rank plus sparse decomposition. In addition, the L2,1 norm is employed to remove the correlated samples across the images, enabling the new approach to be more resilient to outliers and large variations in the images. The problem is first formulated as a convex optimization problem. Afterward, ADMM is utilized again to derive a new set of updating equations to recursively find the optimization variables and affine transformations. Simulations show that the two proposed algorithms are superior to state-of-the-art works in terms of common metrics for head pose estimation, face reconstruction, and image recovery on public databases.
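Both methods build on an ADMM loop for a low-rank plus sparse split of the data matrix. The thesis's full algorithms additionally update the affine transformations inside that loop; as a minimal sketch of just the decomposition core (the function names, the default lam = 1/sqrt(max(m, n)), and the penalty schedule are illustrative choices, not taken from the thesis), one round-robin iteration alternates singular value thresholding for the low-rank part with entrywise soft thresholding for the sparse part:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: prox of tau * (nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt  # scale columns of U

def soft(M, tau):
    """Entrywise soft thresholding: prox of tau * (L1 norm)."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def rpca_admm(D, lam=None, iters=200, tol=1e-7):
    """Split D into A (low rank) + E (sparse) by ADMM with dual variable Y."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = 0.25 * m * n / (np.abs(D).sum() + 1e-12)  # illustrative penalty
    A = np.zeros_like(D); E = np.zeros_like(D); Y = np.zeros_like(D)
    for _ in range(iters):
        A = svt(D - E + Y / mu, 1.0 / mu)   # low-rank update
        E = soft(D - A + Y / mu, lam / mu)  # sparse update
        R = D - A - E                       # primal residual
        Y += mu * R                         # dual ascent
        mu = min(mu * 1.05, 1e7)            # gradually tighten the penalty
        if np.linalg.norm(R) <= tol * max(np.linalg.norm(D), 1.0):
            break
    return A, E
```

In the thesis's setting, each iteration would additionally refine the affine transformation parameters before re-running these two proximal steps, so that misaligned images are rectified while the decomposition proceeds.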
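The L2,1 norm mentioned above sums the Euclidean norms of a matrix's columns, so its proximal operator shrinks or zeroes whole columns at once; this is why an L2,1 penalty can discard entire corrupted samples (columns of the image matrix) rather than isolated pixels. A small sketch, with names of our own choosing rather than the thesis's:

```python
import numpy as np

def l21_norm(X):
    """L2,1 norm: sum of the Euclidean norms of the columns of X."""
    return np.linalg.norm(X, axis=0).sum()

def prox_l21(X, tau):
    """Prox of tau * ||.||_{2,1}: shrink each column's norm by tau.

    Columns whose norm falls below tau are zeroed entirely, which
    removes whole outlier samples instead of individual entries.
    """
    norms = np.linalg.norm(X, axis=0)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return X * scale
```

Swapping this operator in for the entrywise soft threshold turns a pixel-wise sparse error model into a sample-wise one.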



Contents

Abstract  i
Acknowledgements  iii
List of Figures  vii
List of Tables  viii

1 Introduction  1
  1.1 Big Data  5
    1.1.1 Characteristics of Big Data  6
    1.1.2 Nature of Big Data  6
  1.2 Significance  7
  1.3 Basic Terminology  7
    1.3.1 Low Rank Component  8
    1.3.2 Outliers and Heavy Sparse Noise  8
    1.3.3 Affine Image Transformations  10
      1.3.3.1 Strategy  11
    1.3.4 L2,1 Norms  12
  1.4 Motivation of the Thesis  13
    1.4.1 Objective  14
    1.4.2 Scope  14
    1.4.3 Approach  15
  1.5 Contributions  15
  1.6 Thesis Outline  15
2 Overviews and Related Works  17
  2.1 Outliers and Heavy Sparse Noise  17
  2.2 Robust Statistics  18
    2.2.1 Supervised Techniques  18
    2.2.2 Unsupervised Techniques  19
  2.3 Image Processing  20
    2.3.1 Image Alignment and Recovery  20
    2.3.2 Head Pose Estimation  23
    2.3.3 Face Reconstruction  25
  2.4 Convex Optimization Techniques  26
    2.4.1 Global Convex Optimization  27
    2.4.2 Local Convex Optimization  27
  2.5 Summary  28
3 Robust Regression  29
  3.1 Introduction  29
  3.2 Problem Formulation  31
  3.3 Proposed Method  33
  3.4 Convergence Analysis  37
  3.5 Experimental Results and Discussions  39
    3.5.1 Datasets  40
    3.5.2 Evaluation Protocol and Experimental Setup  40
    3.5.3 Synthetic Data Recovery  40
    3.5.4 Head Pose Estimation  41
    3.5.5 Face Reconstruction  44
  3.6 Computational Complexity  45
  3.7 Summary  46
4 Robust Subspace Image Recovery  48
  4.1 Introduction  48
  4.2 Problem Formulation  50
  4.3 Proposed Method  52
  4.4 Convergence Analysis  57
  4.5 Experimental Results and Discussions  59
    4.5.1 Datasets  59
    4.5.2 Evaluation Protocol and Experimental Setup  60
    4.5.3 Experimental Convergence Performance  60
    4.5.4 Handwritten Digits  61
    4.5.5 Natural-Face Images  63
    4.5.6 Video Face Images  64
  4.6 Computational Complexity  64
  4.7 Comparison with the State-of-the-Art Methods  67
  4.8 Summary  69
5 Conclusions and Future Works  71
  5.1 Conclusions  71
  5.2 Future Works  72
Bibliography  79

643–660, 2001.
    Robust Methods92[132] Y. LeCun, “The mnist database of handwritten digits,”http://yann. lecun.com/exdb/mnist/, 1998.[133] T. Bouwmans, S. Javed, H. Zhang, Z. Lin, and R. Otazo, “On the applica-tions of robust pca in image and video processing,”Proceedings of the IEEE,vol. 106, no. 8, pp. 1427–1457, 2018.[134] S. Wang, Y. Wang, Y. Chen, P. Pan, Z. Sun, and G. He, “Robust pca usingmatrix factorization for background/foreground separation,”IEEE Access,vol. 6, pp. 18945–18953, 2018.[135] Q. Zheng, Y. Wang, and P. A. Heng, “Online subspace learning from gradi-ent orientations for robust image alignment,”IEEE Transactions on ImageProcessing, 2019.[136] J. Yang, W. Yin, Y. Zhang, and Y. Wang, “A fast algorithm for edge-preserving variational multichannel image restoration,”SIAM Journal onImaging Sciences, vol. 2, no. 2, pp. 569–592, 2009.[137] M. Tao and X. Yuan, “Recovering low-rank and sparse components of ma-trices from incomplete and noisy observations,”SIAM Journal on Optimiza-tion, vol. 21, no. 1, pp. 57–81, 2011.[138] R. Vidal, Y. Ma, and S. S. Sastry, “Robust principal component analysis,”inGeneralized Principal Component Analysis, pp. 63–122, Springer, 2016.[139] X. Bian and H. Krim, “Bi-sparsity pursuit for robust subspace recovery,” in2015 IEEE International Conference on Image Processing (ICIP), pp. 3535–3539, IEEE, 2015.[140] M. Soltanolkotabi, E. Elhamifar, E. J. Candes,et al., “Robust subspaceclustering,”The Annals of Statistics, vol. 42, no. 2, pp. 669–699, 2014.[141] Q. Zheng, Y. Wang, and P.-A. Heng, “Online robust image alignment viasubspace learning from gradient orientations,” inProceedings of the IEEEInternational Conference on Computer Vision, pp. 1753–1762, 2017.[142] H. Yong, D. Meng, W. Zuo, and L. Zhang, “Robust online matrix factoriza-tion for dynamic background subtraction,”IEEE Transactions on patternanalysis and machine intelligence, vol. 40, no. 7, pp. 1726–1740, 2018.
    Robust Methods93[143] Z. Lai, D. Mo, J. Wen, L. Shen, and W. K. Wong, “Generalized robust re-gression for jointly sparse subspace learning,”IEEE Transactions on Circuitsand Systems for Video Technology, vol. 29, no. 3, pp. 756–772, 2019.[144] G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller, “Labeled facesin the wild: A database for studying face recognition in unconstrained envi-ronments,” tech. rep., Technical Report 07-49, University of Massachusetts,Amherst, 2007.[145] Y. Liu, L. Chen, and C. Zhu, “Improved robust tensor principal componentanalysis via low-rank core matrix,”IEEE Journal of Selected Topics in SignalProcessing, vol. 12, no. 6, pp. 1378–1389, 2018.[146] Y. LeCun, C. Cortes, and C. J. Burges, “Mnist handwritten digit database.at&t labs,” 2010.
