
Graduate Student: Chong-Sheng Wu (吳重昇)
Thesis Title: Effectively Adaptive Background Subtraction and Hybrid Finger-vein Identification System (高效能自適應背景濾除與混合式指靜脈身份辨識系統)
Advisor: Jing-Ming Guo (郭景明)
Committee Members: Sheng-Yuan Lin (林昇源), Gee-Sern Hsu (徐繼聖), Jian-Jiun Ding (丁建均), Jar-Ferr Yang (楊家輝)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Electrical Engineering
Publication Year: 2016
Graduation Academic Year: 104
Language: Chinese
Pages: 235
Chinese Keywords: Background Subtraction, Moving Object Detection, Ordered Dithering, ViBe, Halftoning, Finger-vein Biometrics, Local Invariant Features, Feature Descriptor, Image Quality Assessment
Foreign Keywords: ViBe, Finger-Vein Recognition, Vascular Biometrics, Binary Robust Invariant Elementary Feature
    This thesis makes two contributions: an effectively adaptive background subtraction technique and a hybrid finger-vein identification system. Following a detailed literature survey and analysis, we improve the image-processing algorithms of prior work in both research areas, and comparisons with recent literature show better accuracy and performance in both.
    For effectively adaptive background subtraction, this thesis proposes an adaptive texture feature together with a multi-layer background subtraction framework for fast detection of moving foreground objects. The background model combines the RGB color feature of the ViBe random model with the proposed regional binary ordered-dithering Hamming-ratio texture feature, whose computation can be accelerated with a lookup table. The combined color-and-texture background model effectively improves on the segmentation performance of a color-only background model. The adaptive texture feature is a single floating-point texture-variation measure, a feature dimensionality distinct from other texture algorithms. Experiments are evaluated on the most representative public benchmark, ChangeDetection; comparisons with recent literature show that the proposed method achieves higher background subtraction accuracy at real-time speed.
    In addition, we propose a novel two-stage hybrid algorithm for finger-vein identification. In the first stage, feature-point matching applies fast binary robust invariant features computed on adaptive FAST corners to finger-vein images, where an adaptive threshold extracts an appropriate number of corner features; the robust feature points are then described with binary robust invariant descriptors. In the second stage, multi-image quality similarity assessment uses image-quality-assessment techniques to evaluate the perceptual similarity between the regions of interest of the input image and the database image; it combines several effective image-quality metrics and makes the final decision by majority voting. Unlike conventional finger-vein techniques, the region of interest is extracted directly from the feature-point area matched in the first stage, without any morphological operations, which reduces pre-processing complexity. Experiments cover both a self-collected finger-vein database and a public database. The proposed hybrid finger-vein algorithm has low computational complexity (average matching time < 0.5 s per comparison) and achieves an equal error rate of about 0.69% on the public database.
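    As a rough illustration of the lookup-table acceleration mentioned above (not the thesis implementation), the Hamming ratio between two packed binary bitmaps can be computed with a precomputed popcount table over 8-bit chunks; the bitmap values and the `hamming_ratio` helper are illustrative:

```python
# Precompute the number of set bits for every possible byte value, so
# each XOR of two bitmap bytes is resolved with one table lookup.
POPCOUNT = [bin(i).count("1") for i in range(256)]

def hamming_ratio(bitmap_a, bitmap_b, bits_per_byte=8):
    """Fraction of differing bits between two equally sized packed bitmaps."""
    assert len(bitmap_a) == len(bitmap_b)
    diff_bits = sum(POPCOUNT[a ^ b] for a, b in zip(bitmap_a, bitmap_b))
    return diff_bits / (len(bitmap_a) * bits_per_byte)

# Example: two 4-byte bitmaps differing in 3 bits out of 32.
a = [0b10110010, 0b00001111, 0xFF, 0x00]
b = [0b10110011, 0b00001101, 0xFF, 0x80]
print(hamming_ratio(a, b))  # 3 / 32 = 0.09375
```

    The table trades 256 precomputed entries for a branch-free per-byte comparison, which is the usual way such bitwise texture distances reach real-time speed.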
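    The second-stage majority vote described above can be sketched as follows; the metric names, score values, and thresholds are hypothetical placeholders, not the thesis's tuned parameters:

```python
def miqa_vote(scores, thresholds):
    """Accept the match if a strict majority of quality metrics pass
    their per-metric acceptance thresholds."""
    votes = sum(1 for name, s in scores.items() if s >= thresholds[name])
    return votes * 2 > len(scores)

# Hypothetical similarity scores between the input ROI and a database ROI.
scores = {"SSIM": 0.91, "MS-SSIM": 0.88, "PSNR": 27.5}
thresholds = {"SSIM": 0.85, "MS-SSIM": 0.90, "PSNR": 25.0}
print(miqa_vote(scores, thresholds))  # True: 2 of 3 metrics pass
```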


    This thesis presents two contributions: an effectively adaptive background subtraction method and a hybrid finger-vein identification system. Compared with state-of-the-art methods, both achieve better performance in accuracy and complexity.
    For feature modeling in the adaptive background subtraction, we adopt both the RGB color feature and a local texture feature, the Regional Hamming Ratio of binary Ordered Dithering (RHROD). The RGB color feature is one of the most fundamental features in moving-object detection; however, it struggles to segment foregrounds whose colors resemble the background. We therefore combine the RHROD regional texture feature with pixel-based foreground segmentation. The binary texture bitmap generated by a pre-defined, trained Ordered Dithering (OD) array is an effective estimator of the texture within each block, so the OD feature becomes an important complement to the RGB color feature. Experimental results on the ChangeDetection.net datasets show that this approach yields superior performance over existing methods.
    The hybrid finger-vein identification system combines feature-point matching with human-visual verification. The Binary Robust Independent Elementary Features (BRIEF) descriptor computed on FAST feature points (FBRIEF) is presented together with the proposed adaptive thresholding strategy. Subsequently, Multi-Image Quality Assessment (MIQA) forms a second verification stage after FBRIEF matching. Unlike previous approaches, the Region of Interest (ROI) is extracted directly from the normalized feature-point area without any morphological operations, which significantly reduces pre-processing complexity. This structure provides efficient feature-point matching through FBRIEF and rigorous verification through MIQA. Quantitative evaluation on both the public FVUSM database and a self-collected dataset confirms high identification rates and processing efficiency, and comparisons with other works demonstrate the superiority of the proposed method, which achieves an equal error rate (EER) of 0.69% on the FVUSM dataset.
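    A minimal sketch of the ViBe-style pixel model that the color feature builds on (the update rule follows the original ViBe formulation by Barnich and Van Droogenbroeck; parameter values are the paper's defaults, not necessarily the settings used in this system):

```python
import random

# A pixel is background if it lies within radius R of at least
# MIN_MATCHES of its N stored samples; a matched background pixel
# randomly refreshes one of its samples at a subsampled rate.
N, R, MIN_MATCHES = 20, 20, 2

def classify_and_update(samples, value, rng=random):
    matches = sum(1 for s in samples if abs(s - value) < R)
    is_background = matches >= MIN_MATCHES
    if is_background and rng.random() < 1 / 16:  # conservative update
        samples[rng.randrange(N)] = value
    return is_background

samples = [120 + i for i in range(N)]     # toy grayscale sample set
print(classify_and_update(samples, 125))  # True: value fits the model
print(classify_and_update(samples, 240))  # False: foreground candidate
```

    The random, memoryless sample replacement is what lets ViBe adapt to gradual background change without storing per-pixel statistics such as Gaussian parameters.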
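    A toy BRIEF-style descriptor illustrates the binary intensity-test idea and the Hamming-distance matching used in the first stage (after Calonder et al.); the patch size, pair count, and sample patch are illustrative assumptions, not the thesis's configuration:

```python
import random

# Fix a set of random test-point pairs inside a PATCH x PATCH window;
# the same pairs must be reused for every keypoint so that descriptors
# are comparable.
random.seed(0)
PATCH, N_PAIRS = 9, 32
PAIRS = [(random.randrange(PATCH), random.randrange(PATCH),
          random.randrange(PATCH), random.randrange(PATCH))
         for _ in range(N_PAIRS)]

def brief_descriptor(patch):
    """patch: PATCH x PATCH intensities centered on a keypoint.
    Returns an N_PAIRS-bit integer of pairwise intensity comparisons."""
    bits = 0
    for i, (y1, x1, y2, x2) in enumerate(PAIRS):
        if patch[y1][x1] < patch[y2][x2]:
            bits |= 1 << i
    return bits

def hamming(d1, d2):
    """Number of differing bits between two binary descriptors."""
    return bin(d1 ^ d2).count("1")

patch = [[(y * PATCH + x) % 7 for x in range(PATCH)] for y in range(PATCH)]
print(hamming(brief_descriptor(patch), brief_descriptor(patch)))  # 0
```

    Because descriptors are plain bit strings, matching reduces to XOR plus popcount, which is what makes binary descriptors attractive for low-complexity identification.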

    Chapter 1 Introduction
      1.1 Background Subtraction
        1.1.1 Research Background and Motivation
        1.1.2 Research Objectives
      1.2 Finger-vein Identification
        1.2.1 Research Background and Motivation
        1.2.2 Research Objectives
      1.3 Thesis Organization
    Chapter 2 Literature Review
      2.1 Background Subtraction
        2.1.1 Preface
        2.1.2 Background Subtraction Algorithms
        2.1.3 Challenges in Background Subtraction
      2.2 Finger-vein Identification
        2.2.1 Preface
        2.2.2 Pre-processing Algorithms for Finger-vein Recognition
        2.2.3 Feature Point Extraction Techniques
        2.2.4 Challenges in Finger-vein Identification
    Chapter 3 Effectively Adaptive Background Subtraction
      3.1 Flow Chart
      3.2 Feature Modeling
      3.3 Background Feature Modeling
      3.4 Adaptive Adjustment Scheme
        3.4.1 Global Threshold Control Method
        3.4.2 Local Adjustment Scheme
        3.4.3 Color Model Threshold Refinement
      3.5 Multi-layer Background Model Structure
      3.6 Background OD Bitmap Updating Strategy
        3.6.1 Adaptive Binary Ordered-Dithering Background Texture
        3.6.2 Performance Comparison with Related Texture Algorithms
      3.7 Experimental Results
        3.7.1 ChangeDetection 2012
        3.7.2 ChangeDetection 2014
        3.7.3 ChangeDetection Computational Performance
        3.7.4 ChangeDetection Per-category Foregrounds
    Chapter 4 Hybrid Finger-vein Identification System
      4.1 Flow Chart
      4.2 Image Pre-processing
      4.3 Feature Point Extraction
        4.3.1 Features from Accelerated Segment Test (FAST)
        4.3.2 Adaptive Thresholding
      4.4 Image Normalization
      4.5 Feature Descriptor
        4.5.1 Binary Robust Independent Elementary Features (BRIEF)
        4.5.2 Integral Image
      4.6 Feature Matching
        4.6.1 Hamming Distance
        4.6.2 Best-point Matching
        4.6.3 RANSAC
      4.7 Stage 1: Feature Point Matched Recognition
      4.8 Stage 2: Multi-IQA Voting Strategy
        4.8.1 SSIM, MSSIM, and MS-SSIM
        4.8.2 PSNR and HPSNR
        4.8.3 Multi-IQA Voting Strategy
      4.9 Experimental Results
        4.9.1 Public Database
        4.9.2 Device Database
        4.9.3 Performance Evaluation and Processing Time
      4.10 Infrared Vein Scanner
        4.10.1 Hardware Information
        4.10.2 MFC Interface System
    Chapter 5 Conclusions and Future Work

