
Graduate student: Wei-Li Fang (方偉力)
Thesis title: An Enhanced Low-Computation and High-Recognition Face Recognition Approach by Characterizing Image Features
Advisor: Ying-Kuei Yang (楊英魁)
Committee members: 黎碧煌, 陳俊良, 孫宗瀛, 蘇仲鵬, 莊西政
Degree: Doctor
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Year of publication: 2013
Graduating academic year: 101
Language: Chinese
Pages: 98
Chinese keywords: face recognition, feature extraction, two-dimensional principal component analysis, feature weight adjustment, image segmentation, least mean square
English keywords: face recognition, feature extraction, two-dimensional principal component analysis, entropy, feature weights adjusting, sub-image, least mean square
  • To improve the performance of face-recognition algorithms, this thesis builds on two-dimensional principal component analysis (2DPCA) and proposes an algorithm with lower computation cost and a higher recognition rate. The method consists of three parts. The first part is an entropy-enhanced two-dimensional principal component analysis (EE2DPCA): by tracking how the information content changes between adjacent projection vectors, it effectively selects a suitable number of feature vectors, reducing the computation originally needed for eigenvector selection. The second part is sub-image two-dimensional principal component analysis (SI2DPCA), which partitions the two-dimensional face data into blocks. With a suitable number of partitions, splitting the image into several regions both reduces the computation cost and raises the recognition rate, because the features within each region become easier to extract, yielding better feature-extraction results. The third part integrates the least mean square algorithm with SI2DPCA (LMS-SI2DPCA): weight values are attached to the projection vectors of SI2DPCA and, guided by the resulting recognition rate, the least mean square algorithm finds the most suitable weights, further improving the system's recognition rate. Experiments on the ORL database confirm that the proposed method effectively reduces computation while reaching a recognition rate of 99%.


    2DPCA is an effective approach for two-dimensional face image recognition, and several enhanced variants have been proposed to improve the face recognition rate, though mostly at the expense of computation cost. In this thesis, a novel approach is proposed to greatly improve the face recognition rate and reduce the computation cost simultaneously. The approach has three stages. The first stage applies an entropy-enhanced two-dimensional principal component analysis (EE2DPCA). Since entropy indicates the amount of uncertainty in information, this mechanism calculates the slope difference of the information between two consecutive projected feature vectors. Based on the slope difference, projected vectors are selected until an additional image feature no longer contributes much to face recognition. The second stage, sub-image two-dimensional principal component analysis (SI2DPCA), divides a whole face image into smaller sub-images to increase the weight of features for better feature extraction. Meanwhile, the computation cost, which mainly comes from heavy and complicated matrix operations, is reduced because the sub-images are smaller. The third stage is the integration of least mean square with sub-image two-dimensional principal component analysis (LMS-SI2DPCA). In this stage, SI2DPCA is applied to a face image to extract important image features, a weight is assigned to each selected feature according to its importance to face recognition, and the least mean square (LMS) algorithm optimizes the feature weights based on the recognition error rate during the learning process. The complete approach combines the mechanisms of the three stages. Experiments were conducted on the ORL face image database to compare the proposed approach with several well-known existing approaches. The experimental results demonstrate that the proposed approach achieves both a low computation burden and an excellent face recognition rate of 99%.
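The second and third stages above can be sketched in a few lines. This is a hypothetical illustration rather than the thesis implementation: the equal-grid block layout, the scalar recognition score used as the LMS target, and the step size `mu` are all assumptions.

```python
import numpy as np

def split_into_subimages(img, rows, cols):
    """Divide a face image into rows*cols equal sub-images, as in the
    SI2DPCA stage described above (sketch; assumes the image dimensions
    are divisible by the grid)."""
    h, w = img.shape
    bh, bw = h // rows, w // cols
    return [img[r*bh:(r+1)*bh, c*bw:(c+1)*bw]
            for r in range(rows) for c in range(cols)]

def lms_update(weights, features, target, mu=0.05):
    """One standard LMS step nudging the feature weights toward lower
    error (modeling the target as a scalar score is an assumption)."""
    error = target - float(np.dot(weights, features))
    return weights + mu * error * features

# A 2x2 split of an 8x8 image yields four 4x4 blocks; 2DPCA is then
# run on each smaller block instead of the whole image.
img = np.arange(64, dtype=float).reshape(8, 8)
blocks = split_into_subimages(img, 2, 2)
```

Since eigen-decomposing an n-by-n image covariance costs roughly O(n^3), running 2DPCA on several half-width sub-images is cheaper than on the full image, which is the computational argument the abstract makes.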

    Table of Contents

    摘要 (Chinese Abstract)
    Abstract
    Acknowledgements
    Table of Contents
    List of Symbols
    List of Figures and Tables
    Chapter 1  Introduction
        1.1  Research Background and Motivation
        1.2  Research Objectives
        1.3  Research Framework
        1.4  Thesis Organization
    Chapter 2  Literature Review
        2.1  Principal Component Analysis (PCA)
        2.2  Topological PCA
        2.3  PCA Combined with Linear Discriminant Analysis
        2.4  PCA with Symmetric Image Correction and Bit-Plane Feature Fusion
        2.5  Two-Dimensional PCA (2DPCA)
        2.6  (2D)2PCA
        2.7  2DPCA Combined with Two-Dimensional Linear Discriminant Analysis
        2.8  Volume Measure Combined with 2DPCA
        2.9  Mixture-Model Probabilistic 2DPCA
        2.10 Fusion Bidirectional 2DPCA
    Chapter 3  Methodology
        3.1  Entropy-Enhanced 2DPCA (EE2DPCA)
        3.2  Sub-Image 2DPCA (SI2DPCA)
        3.3  Integration of Least Mean Square with Sub-Image 2DPCA (LMS-SI2DPCA)
    Chapter 4  Experimental Results
        4.1  The Test Database
        4.2  Selecting the Most Suitable Number of Feature Vectors with Entropy
        4.3  Experimental Analysis of Entropy-Based SI2DPCA
        4.4  Entropy-Based SI2DPCA Combined with LMS on the ORL Database
    Chapter 5  Conclusions and Future Work
        5.1  Conclusions
        5.2  Future Research Directions
    References
    Appendix

    List of Figures
    Fig. 2.1  Approximation of two-dimensional data
    Fig. 2.2  Illustration of PCA
    Fig. 2.3  Transforming two-dimensional data into one-dimensional data
    Fig. 2.4  Illustration of eigen-decomposition
    Fig. 2.5  Reconstruction comparison between TPCA and PCA
    Fig. 2.6  Face images of one subject in the AR database
    Fig. 2.7  Face images of one subject in the Yale database
    Fig. 2.8  Recognition rates of 2DPCA+VM and 2DPCA+DM for different numbers of feature vectors
    Fig. 3.1  Magnitude trend of the projection vectors
    Fig. 3.2  Entropy trend
    Fig. 3.3  Entropy slope
    Fig. 3.4  Entropy slope (computed per subject)
    Fig. 3.5  Entropy slope difference
    Fig. 3.6  Images of each subject in the ORL database
    Fig. 3.7  Mean image of the ORL database
    Fig. 3.8  Illustration of eigen-decomposition
    Fig. 3.9  Original image size
    Fig. 3.10 Sub-images after equal partitioning
    Fig. 3.11 Dimensionality reduction after partitioning
    Fig. 3.12 Dimensionality reduction of the whole image
    Fig. 3.13 Original images of the ORL database
    Fig. 3.14 Images returned to the face domain after 2DPCA
    Fig. 3.15 Images returned to the face domain after SI2DPCA (4 partitions)
    Fig. 3.16 Images returned to the face domain after SI2DPCA (16 partitions)
    Fig. 3.17 Feature-domain images after SI2DPCA
    Fig. 3.18 Feature-domain images after 2DPCA
    Fig. 3.19 Block diagram of an adaptive filter
    Fig. 3.20 Weight assignment of the projection vectors
    Fig. 3.21 Overall framework of this study
    Fig. 3.22 LMS error computation
    Fig. 3.23 System architecture of the iteration mechanism
    Fig. 3.24 Flowchart of SI2DPCA combined with LMS
    Fig. 3.25 Weight illustration
    Fig. 4.1  Recognition rates of 2DPCA on the ORL database for different numbers of feature vectors
    Fig. 4.2  Magnitude trend of the projection vectors for subject 1-2
    Fig. 4.3  Entropy trend for subject 1-2
    Fig. 4.4  Entropy slope for subject 1-2
    Fig. 4.5  Entropy slope difference for subject 1-2
    Fig. 4.6  Magnitude trend of the projection vectors for subject 2-1
    Fig. 4.7  Entropy trend for subject 2-1
    Fig. 4.8  Entropy slope for subject 2-1
    Fig. 4.9  Entropy slope difference for subject 2-1
    Fig. 4.10 Entropy trend using mean values
    Fig. 4.11 Entropy slope using mean values
    Fig. 4.12 Entropy slope difference using mean values
    Fig. 4.13 Entropy slope difference of the upper-left region after partitioning
    Fig. 4.14 Entropy slope difference of the lower-left region after partitioning
    Fig. 4.15 Entropy slope difference of the upper-right region after partitioning
    Fig. 4.16 Entropy slope difference of the lower-right region after partitioning
    Fig. 4.17 Flowchart of the nearest-neighbor rule
    Fig. 4.18 Weight assignment of the projection vectors (ORL database)
    Fig. 4.19 Recognition rate versus number of iterations

    List of Tables
    Table 2.1 Recognition rates of PCA+F-LDA (%)
    Table 2.2 Computation times of PCA+F-LDA (sec)
    Table 2.3 Recognition rates of various algorithms on the AR database
    Table 2.4 Recognition rates of various algorithms on the Yale database
    Table 2.5 Recognition rates of various algorithms on the ORL database
    Table 2.6 Comparison of recognition rates and computation times of various algorithms on the ORL database
    Table 2.7 Recognition rates of MP2DPCA and 2DPCA for different numbers of feature vectors
    Table 2.8 Recognition rates and computation times of fusion bidirectional 2DPCA and 2DPCA
    Table 3.1 Computation comparison of eigen-decomposition
    Table 3.2 Computation analysis
    Table 3.3 Big-O computational complexity analysis
    Table 3.4 Big-O computational complexity analysis (compared in terms of m)
    Table 4.1 Computation analysis on the ORL database
    Table 4.2 Recognition-rate comparison
    Table 4.3 Comparison of recognition rates and computation
    Table 4.4 Weight changes at iteration 1
    Table 4.5 Weight changes at iteration 2
    Table 4.6 Weight changes at iteration 80
    Table 4.7 Weight changes at iteration 81

