
Graduate Student: Yi-Zhong Huang (黃意中)
Thesis Title: Computer-Aided Design of an X-Ray Imaging System: Left and Right Side Recognition for Human Limbs
Advisor: Sheng-Dong Xu (徐勝均)
Committee Members: Shun-Feng Su (蘇順豐), Ming-Jong Tsai (蔡明忠), Yong-Lin Kuo (郭永麟)
Degree: Master
Department: Graduate Institute of Automation and Control, College of Engineering
Year of Publication: 2016
Graduation Academic Year: 104
Language: Chinese
Number of Pages: 100
Keywords (Chinese): 影像強化、直方圖等化、影像正規化、指尖辨識、掌紋偵測
Keywords (English): Image Enhancement, Histogram Equalization, Image Normalization, Fingertip Identification, Palmprint Detection
    In modern medical examination, X-ray radiography is a common diagnostic method, and the X-ray machine has become one of the most frequently used medical imaging devices in hospitals and clinics. In the X-ray imaging workflow, the radiographer must carefully check the patient's positioning before each exposure. During operation of the X-ray machine, however, differences in personal experience, professional expertise, habits, and emotional state may lead to positioning errors when imaging the patient, exposing the patient to unnecessary radiation and causing irreversible harm. Left- and right-side recognition of the human upper and lower limbs is therefore a much-needed function in X-ray examination.
    The purpose of this thesis is to recognize the left and right sides of the human upper and lower limbs with computer-vision methods. First, preprocessing rotates the target object to a common orientation so that a wrist or ankle point can be found for later processing, and a self-defined reference point is used to judge the side of the upper limb consistently. For the upper limb, the thesis compares the positions of the thumb-tip point and the thumb-index-web point relative to the palm centroid to distinguish left from right. Two methods are then proposed to resolve the problem that the palm and the back of the hand have similar left-right silhouettes. They judge the palm versus the back of the hand by (1) the grayscale-intensity difference in the fingertip region caused by the different surface texture and skin color of the two sides, and (2) the difference in texture features in the palmprint region. To extract these two features, the system applies two image-enhancement methods, histogram equalization and image normalization; the thesis also discusses their differences and appropriate uses. For the lower limb, the system locates the most protruding great-toe tip and the two points at the widest part of the sole, and exploits the fact that the great toe lies on the medial (inner) side of the foot in two recognition methods. The first computes the lengths of the two contour segments from the great-toe tip to the two widest points and compares them. The second takes the foot centroid as the center, computes the two angles from the great-toe tip to each of the two widest points, and compares them.
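    The upper-limb decision described above — comparing the thumb-tip and thumb-index-web points against the palm centroid — can be sketched with a 2-D cross product. This is an illustrative sketch, not the thesis implementation: the function name `hand_side`, the coordinate convention (image y grows downward, the hand already rotated so the fingers point up, palm facing the camera), and the mapping from the cross-product sign to a side label are all assumptions.

```python
# Hypothetical sketch of the thumb/web vs. centroid side decision.
# Points are (x, y) tuples in image coordinates (y grows downward).

def hand_side(centroid, thumb_tip, web):
    """Classify 'left'/'right' from the turn direction centroid->web->thumb."""
    ux, uy = web[0] - centroid[0], web[1] - centroid[1]
    vx, vy = thumb_tip[0] - centroid[0], thumb_tip[1] - centroid[1]
    # 2-D cross product; its sign tells on which side of the web vector
    # the thumb vector lies.  Mirroring the hand flips the sign.
    cross = ux * vy - uy * vx
    # Assumed convention for this sketch: positive cross => right hand.
    return "right" if cross > 0 else "left"

# Mirrored point sets yield opposite labels, which is the property the
# method relies on; the absolute label depends on the camera convention.
side_a = hand_side((0, 0), (5, -1), (4, -3))
side_b = hand_side((0, 0), (-5, -1), (-4, -3))
```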
    Experimental results show that the proposed methods recognize the left and right sides of the upper and lower limbs with high accuracies of 88.96% and 88.97%, respectively, and can run in real time within the X-ray imaging workflow.


    Nowadays, X-ray radiography is widely used in medical diagnosis, and the X-ray machine has become one of the most frequently used imaging instruments in hospitals. In the radiographic procedure, the radiologist has to check carefully that the patient is in the correct radiographic position. During operation of the X-ray machine, positioning errors may nevertheless arise from factors such as the operator's personal experience, level of expertise, different habits, and emotional state. Such errors expose patients to unnecessary radiation and may cause irreversible harm. It is therefore important to create a computer-aided design for left- and right-side recognition of human limbs.
    The objective of this thesis is to apply computer-vision techniques to the recognition of left- and right-side human limbs in the radiographic procedure. First, the system rotates the target image to a designed common orientation so that the wrist or ankle point can be found, and the side of the upper limb can be judged consistently with respect to a self-defined reference point. Concerning the upper limbs, we first compare the positions of the thumb tip and the thumb-index web relative to the palm centroid to recognize the left and right sides. We then propose two methods to resolve the similarity in shape and appearance between the palm and the back of the hand: (1) exploiting the grayscale-intensity difference in the fingertip region, caused by the different surface texture and skin color of the two sides, and (2) exploiting the difference in texture features in the palmprint region. Two image-enhancement methods, histogram equalization and image normalization, are used in the above feature extraction; we also discuss the differences between the two methods and when each is appropriate.
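    As a rough illustration of the two enhancement methods compared above (not the thesis code), the sketch below applies linear min-max normalization and histogram equalization to a toy 8-bit grayscale signal: normalization stretches the existing range linearly, while equalization redistributes values according to the cumulative histogram. The function names and the flat-list image representation are assumptions for this sketch.

```python
# Hypothetical sketch: two contrast-enhancement methods on a flat list
# of 8-bit pixel values.

def normalize(pixels, new_min=0, new_max=255):
    """Linearly stretch pixel values to [new_min, new_max]."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                      # constant image: nothing to stretch
        return [new_min] * len(pixels)
    scale = (new_max - new_min) / (hi - lo)
    return [round((p - lo) * scale) + new_min for p in pixels]

def equalize(pixels, levels=256):
    """Map each value through the cumulative histogram (CDF)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, running = [0] * levels, 0
    for v in range(levels):
        running += hist[v]
        cdf[v] = running
    n = len(pixels)
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:                  # constant image: map everything to 0
        return [0] * n
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]

low_contrast = [100, 100, 101, 102, 110, 110, 120, 120]
stretched = normalize(low_contrast)   # spans the full 0-255 range linearly
flattened = equalize(low_contrast)    # spreads the histogram toward uniform
```

The practical difference hinted at here: normalization preserves the relative spacing of gray levels, whereas equalization exaggerates differences in densely populated intensity ranges, which is why the two suit different features.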
    On the other hand, concerning the lower limbs, we search for the great-toe point and the two widest points of the foot. Taking advantage of the fact that the great toe lies on the medial (inner) side of the foot, we adopt two methods to recognize the side of the lower limb. In the first method, we calculate the lengths of the two contour segments from the great-toe point to the two widest points of the foot and compare them to determine the side. In the second method, we take the centroid of the foot as the center, calculate the two angles between the great-toe point and each of the two widest points, and compare the angles to distinguish the two sides.
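    The second lower-limb method above can be sketched as follows. This is an illustrative sketch, not the thesis code: the point names, the coordinate convention (toes rotated to point up, dorsal view, image y growing downward), and the mapping from the medial side to a left/right label are assumptions.

```python
# Hypothetical sketch of the angle-comparison method for foot side
# recognition.  Points are (x, y) tuples in image coordinates.
import math

def angle_at(center, a, b):
    """Angle (radians) between the vectors center->a and center->b."""
    ax, ay = a[0] - center[0], a[1] - center[1]
    bx, by = b[0] - center[0], b[1] - center[1]
    dot = ax * bx + ay * by
    na, nb = math.hypot(ax, ay), math.hypot(bx, by)
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def foot_side(centroid, toe, widest1, widest2):
    """Classify 'left'/'right' by which widest point the great toe leans toward."""
    # The widest point forming the smaller angle with the toe (seen from
    # the centroid) lies on the same, medial side as the great toe.
    if angle_at(centroid, toe, widest1) < angle_at(centroid, toe, widest2):
        medial = widest1
    else:
        medial = widest2
    # Assumed convention: toes point up, dorsal view; a medial side on
    # image-left then corresponds to a right foot.
    return "right" if medial[0] < centroid[0] else "left"
```

The first method (comparing the two contour-segment lengths from the toe to the widest points) follows the same asymmetry: the shorter arc lies on the medial side, so only the feature being compared changes.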
    Experimental results show that, based on the proposed methods, the computer-aided design not only recognizes the left and right sides of human limbs with high accuracy, 88.96% for the upper limbs and 88.97% for the lower limbs, but also processes images in real time in the clinical radiographic procedure.

    Abstract (Chinese)
    Abstract (English)
    Acknowledgments
    Contents
    List of Figures
    List of Tables
    Chapter 1  Introduction
      1.1 Research Background and Motivation
      1.2 Research Objectives
      1.3 Literature Review
      1.4 Thesis Organization
    Chapter 2  System Architecture
      2.1 Hardware and Software Configuration
        2.1.1 System Hardware Environment
        2.1.2 Camera and Personal Computer
        2.1.3 Development Environment
      2.2 Introduction to X-Ray Positioning
        2.2.1 Upper-Limb Positioning
        2.2.2 Lower-Limb Positioning
      2.3 System Flow
    Chapter 3  Image Preprocessing
      3.1 Defining the System Processing Region
      3.2 Foreground Extraction
      3.3 Connected-Component Labeling
      3.4 Target Centroid
      3.5 Finding Intersections of the Foreground Edge and the Processing Region
      3.6 Target Rotation
      3.7 Finding the Wrist or Ankle Point
    Chapter 4  Distinguishing the Palm from the Back of the Hand
      4.1 Palm Convex Hull and Convexity Defects
        4.1.1 Hand Contour
        4.1.2 Finding the Convex Hull and Convexity Defects
      4.2 Palm/Back Judgment by Fingertip-Region Grayscale Intensity
        4.2.1 Image Enhancement
        4.2.2 Finding the Fingertip Region
        4.2.3 Judgment by Fingertip-Region Grayscale Intensity
      4.3 Palm/Back Judgment by Palmprint Texture
        4.3.1 Defining the Palmprint ROI
        4.3.2 Image Enhancement
        4.3.3 Edge Detection
        4.3.4 Judgment by Palmprint Texture
    Chapter 5  Left/Right Recognition of the Upper and Lower Limbs
      5.1 Upper-Limb Left/Right Recognition
        5.1.1 Finding the Thumb-Tip Position
        5.1.2 Finding the Thumb-Index-Web Position
        5.1.3 Upper-Limb Left/Right Recognition
      5.2 Lower-Limb Left/Right Recognition
        5.2.1 Foot Centroid
        5.2.2 Finding the Great-Toe-Tip Position
        5.2.3 Finding the Widest Part of the Foot
        5.2.4 Lower-Limb Left/Right Recognition
    Chapter 6  Experimental Results
      6.1 Upper-Limb Left/Right Recognition Results
      6.2 Lower-Limb Left/Right Recognition Results
    Chapter 7  Conclusions
      7.1 Conclusions
      7.2 Future Work
    References


    Full text available from 2021/08/23 (campus network)
    Full text not authorized for release (off-campus network)
    Full text not authorized for release (National Central Library: Taiwan NDLTD system)