
Graduate Student: 詹佩儒 (Pei-Ru Zhan)
Thesis Title: 基於深度影像之X光投照範圍的設定 (Study on Depth-Image Based X-Ray Collimation Field Delimitation)
Advisor: 蘇順豐 (Shun-feng Su)
Committee Members: 王偉彥 (Wei-yen Wang), 翁慶昌 (Ching-chang Wong), 徐勝均 (Sheng-dong Xu)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Electrical Engineering
Year of Publication: 2015
Graduation Academic Year: 103 (2014-2015)
Language: Chinese
Number of Pages: 99
Chinese Keywords: Kinect, 投照範圍 (collimation field), 深度影像 (depth image), 前景萃取 (foreground extraction), 最大值濾波器 (maximum filter), Otsu法 (Otsu method)
Foreign Keywords: Kinect, collimation field, depth image, foreground extraction, maximum filter, Otsu method

Abstract (translated from Chinese):
During an X-ray exposure, the radiologist adjusts the collimator blades to set the patient's exposure field and keep the patient from receiving an excessive dose. In current practice the collimator is adjusted manually or semi-automatically, and factors such as the operator's experience, expertise, habits, and even mood can lead to an inappropriate field setting. An automatic way of setting the X-ray collimation field is therefore a highly desirable function in X-ray examinations.
This thesis proposes two methods for setting the X-ray collimation field: a hand/foot radiographic procedure and a chest radiographic procedure. The hand/foot procedure exploits the different characteristics that the body part to be imaged and the remaining regions exhibit in the depth-image histogram, and searches for a suitable threshold to extract that body part as the target object. To locate the key feature points, the wrist and ankle points, a target object placed in an arbitrary direction must first be rotated to a common orientation; two rotation methods are discussed in this thesis, and both are suitable for the system. The chest procedure applies edge detection to the depth image to obtain the vertical edges in the image; to avoid interference from the edges of the head or hair, the system uses information from the lower part of the body to define the body edges.
Finally, the collimation field is set from the features obtained by these two procedures together with radiologists' experience. The experimental results show that the system can set an appropriate collimation field.
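
As a concrete illustration of the histogram-thresholding step described above, the following Python/OpenCV sketch extracts a near body part from a Kinect depth map using Otsu's method (the baseline the thesis compares its proposed threshold selection against), followed by a maximum (dilation) filter. This is a minimal sketch under stated assumptions, not the thesis implementation; the function name extract_foreground and the max_range_mm parameter are illustrative.

```python
import cv2
import numpy as np

def extract_foreground(depth_mm, max_range_mm=4000):
    """Illustrative depth-based foreground extraction (Otsu baseline)."""
    depth = np.asarray(depth_mm, dtype=np.float32)
    valid = depth > 0                       # Kinect reports 0 where no depth reading exists
    depth = np.clip(depth, 0, max_range_mm)
    depth8 = (depth / max_range_mm * 255).astype(np.uint8)

    # Otsu's method picks the threshold that separates the two histogram modes
    # (near body part vs. farther table and background); THRESH_BINARY_INV keeps
    # the nearer (smaller-depth) pixels as foreground.
    _, mask = cv2.threshold(depth8, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    mask[~valid] = 0                        # discard pixels without a depth value

    # A maximum (dilation) filter closes small holes left by missing depth data.
    mask = cv2.dilate(mask, np.ones((5, 5), np.uint8))
    return mask
```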


Abstract (English):
In an X-ray exposure process, the radiologist adjusts the collimator blades to limit the exposure field and avoid the patient receiving an excessive dose. At present, this is done manually and is not always accurate, so an automatic way of delimiting the X-ray collimation field is a desired mechanism. This study proposes two methods of X-ray collimation field delimitation, one for the hand/foot radiographic procedure and one for the chest radiographic procedure. In the hand/foot procedure, the different features of the body part and the other regions in the depth histogram are used to define the threshold for depth-based foreground extraction, so that the foreground object can be extracted from the depth image. To find the important feature points, such as the wrist and ankle points, the foreground object must be rotated to a common orientation before those features can be distinguished in the image; two ways of defining the rotation angle are considered in this study, and the results are acceptable for both. In the chest procedure, edge detection on the depth image is employed to find the edges of the human body along the vertical direction; to avoid noisy edges caused by the head or long hair, only the lower half of the body is considered when defining the body edge. Finally, based on the information obtained in these two procedures, the width of the X-ray collimation field is determined with an allowance set according to radiologists' experience. The experimental results show that the proposed approach is promising; the proposed system can indeed suggest an appropriate collimation field.
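
To make the chest-procedure step concrete, the sketch below detects vertical body edges in the depth image with a horizontal Sobel gradient and restricts the search to the lower half of the frame so that edges from the head or long hair are ignored. This is an assumption-laden example rather than the thesis implementation; the gradient threshold of 40 and the function name chest_field_bounds are hypothetical choices, and in practice the returned bounds would be widened by a margin chosen from the radiologist's experience before setting the collimator.

```python
import cv2
import numpy as np

def chest_field_bounds(depth_mm, max_range_mm=4000):
    """Illustrative sketch: find left/right body edges for the chest field."""
    depth = np.clip(np.asarray(depth_mm, dtype=np.float32), 0, max_range_mm)
    depth8 = (depth / max_range_mm * 255).astype(np.uint8)

    # The horizontal gradient responds to vertical edges, i.e. the body's
    # left and right sides seen by the Kinect.
    grad_x = cv2.Sobel(depth8, cv2.CV_32F, 1, 0, ksize=3)
    edges = np.abs(grad_x) > 40             # illustrative gradient threshold

    # Use only the lower half of the image so edges from the head or long
    # hair do not disturb the body-boundary estimate.
    h, _ = edges.shape
    lower = edges[h // 2:, :]

    cols = np.where(lower.any(axis=0))[0]   # columns containing a vertical edge
    if cols.size == 0:
        return None
    return int(cols.min()), int(cols.max()) # left and right body boundaries
```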

Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Figures
List of Tables
Chapter 1 Introduction
  1.1 Research Motivation and Objectives
  1.2 Research Methods
  1.3 Thesis Organization
Chapter 2 System Architecture
  2.1 Hardware and Software Configuration
    2.1.1 System Architecture and Exposure Workflow
    2.1.2 Kinect and Personal Computer
    2.1.3 Development Environment
  2.2 Introduction to X-Ray Positioning
    2.2.1 Hand and Foot Positioning
    2.2.2 Chest Positioning
  2.3 System Workflow
Chapter 3 System and Depth-Image Preprocessing
  3.1 Establishing the System Region of Interest
  3.2 Depth-Image Repair
  3.3 Foreground Extraction for Hand/Foot Exposure
    3.3.1 Background Subtraction
    3.3.2 Foreground Extraction from the Depth Image
    3.3.3 Otsu Method
    3.3.4 Maximum Filter
    3.3.6 Proposed Method
    3.3.7 Comparison of Foreground Extraction Results: Proposed Method vs. Otsu Method
    3.3.8 Effect of Depth-Image Repair on the Foreground
    3.3.9 Effect of Ambient Light on the Foreground
  3.4 Edge Detection for Chest Exposure
    3.4.1 Depth-Image Edge Detection
    3.4.2 Effect of Depth-Image Repair on Edge Detection
    3.4.3 Effect of Ambient Light on Edge Detection
Chapter 4 Hand/Foot Radiographic Procedure
  4.1 Foreground Extraction
  4.2 Object Labeling
  4.3 Centroid
  4.4 Finding Intersections of the Foreground Edge and the Region of Interest
  4.5 Finding the Rotation Center
    4.5.1 Object Slope from Two Center Points
    4.5.2 Object Slope from One Center Point and the Centroid
    4.5.3 Object Rotation
  4.6 Finding Wrist or Ankle Points
  4.7 Determining the Object Orientation
  4.8 Locating the Central Ray in the Image
  4.9 Delimiting the Hand/Foot X-Ray Collimation Field
  4.10 Comparison of Slope Estimation from Two Center Points and from One Center Point with the Centroid
Chapter 5 Chest Radiographic Procedure
  5.1 Edge Detection
  5.2 Finding the Chest Boundaries
  5.3 Locating the Central Ray in the Image
  5.4 Delimiting the Chest X-Ray Collimation Field
Chapter 6 Experimental Results
  6.1 Hand/Foot Procedure Results
  6.2 Chest Procedure Results
Chapter 7 Conclusions
  7.1 Research Contributions
  7.2 Future Work
References

