
Author: Quoc-Viet Tran
Title: Non-Contact Breath Motion Detection Using the Lucas-Kanade Algorithm
Advisor: Shun-Feng Su (蘇順豐)
Committee: Sendren Sheng-Dong Xu (徐勝均), Yo-Ping Huang (黃有評), Wei-Yen Wang (王偉彥)
Degree: Master
Department: College of Engineering - Graduate Institute of Automation and Control
Year of publication: 2016
Academic year of graduation: 104 (ROC calendar)
Language: English
Number of pages: 57
Keywords: Breath detection, inspiratory phase, respiratory phase, detect respiration, Lucas-Kanade algorithm, deepest inhaling
    Abstract: This study builds a simple and economical system for detecting breathing in real time using an ordinary webcam or a Microsoft Kinect. Based on optical flow, a novel method is proposed to detect the peak of the inspiratory phase of a breath from images, so as to define a proper timing for triggering X-ray shooting. The difficulty of using images for breath detection is that the motion of features on the chest during a breath is very small. Therefore, the Lucas-Kanade algorithm, which has previously been applied to heartbeat detection, is adopted to track such small motions on the chest. Image-processing techniques such as corner detection are applied to obtain useful features for the Lucas-Kanade algorithm. In the experiments, the proposed approach detects the inspiratory and expiratory phases clearly; in other words, the breath motion can easily be observed. Different environments (normal and dim lighting) and different distances were also tested, and the tracked features proved robust and stable. For X-ray shooting, the moment of deepest inhaling must be predicted. In our experiments, the prediction error of the peak time is about 0.366 seconds, roughly 7.32% of the average breath cycle. Two methods are used to predict the peak time of inhaling: averaging over many previous breath cycles, and using only the single previous cycle. For short durations and normal breathing, the averaging method is very effective at reducing the error between the predicted and actual peak times. However, it may yield worse predictions over long durations, because the amplitudes and cycles of breathing vary over time. We conclude that the proposed approach can effectively define a proper timing for triggering X-ray shooting.
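The tracking step described above can be illustrated with a minimal single-window Lucas-Kanade solver. This is a pure-Python sketch of the classic least-squares formulation, not the thesis's implementation; a practical system would use a pyramidal tracker such as OpenCV's `calcOpticalFlowPyrLK`, and the synthetic images here are made up for the demo:

```python
def lucas_kanade(prev, curr, x0, y0, win=5):
    """Estimate the (dx, dy) displacement of the patch centred at (x0, y0)
    between two grayscale frames, by solving the least-squares system
    [Ix Iy] d = -It over the window (classic Lucas-Kanade step)."""
    sxx = sxy = syy = sxt = syt = 0.0
    r = win // 2
    for y in range(y0 - r, y0 + r + 1):
        for x in range(x0 - r, x0 + r + 1):
            ix = (prev[y][x + 1] - prev[y][x - 1]) / 2.0  # spatial gradient (x)
            iy = (prev[y + 1][x] - prev[y - 1][x]) / 2.0  # spatial gradient (y)
            it = curr[y][x] - prev[y][x]                  # temporal gradient
            sxx += ix * ix; sxy += ix * iy; syy += iy * iy
            sxt += ix * it; syt += iy * it
    det = sxx * syy - sxy * sxy
    if abs(det) < 1e-12:
        return None  # flat or edge-only patch: untrackable (no good corner)
    # Closed-form solution of the 2x2 normal equations.
    dx = -(syy * sxt - sxy * syt) / det
    dy = -(sxx * syt - sxy * sxt) / det
    return dx, dy

# Demo on a synthetic quadratic "blob" shifted by a subpixel amount.
prev = [[(x - 10) ** 2 + (y - 10) ** 2 for x in range(21)] for y in range(21)]
curr = [[(x - 10.3) ** 2 + (y - 10.2) ** 2 for x in range(21)] for y in range(21)]
print(lucas_kanade(prev, curr, 10, 10))  # ≈ (0.3, 0.2)
```

For breath detection, the vertical component of many such feature displacements, accumulated frame by frame over the chest region, yields a signal whose extrema mark the inspiratory and expiratory phases.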



    Table of Contents:
    ACKNOWLEDGMENTS i
    ABSTRACT ii
    Chapter 1: INTRODUCTION 1
      1.1. Background and Motivation 1
      1.2. Thesis Contribution 2
      1.3. Thesis Organization 3
    Chapter 2: RELATED WORK 5
    Chapter 3: RGB-D CAMERAS 12
      3.1. Overview 12
        3.1.1. RGB-D camera technologies 12
        3.1.2. Microsoft Kinect X-Box 360 Specification 15
      3.2. Calibration in RGB-D cameras 17
        3.2.1. Generation of 3D coordinates 17
        3.2.2. Calibration of depth and color images 18
    Chapter 4: BREATH DETECTION 21
      4.1. System Overview 21
        4.1.1. ddRCruzeTM series 21
        4.1.2. Operational environment of the system 21
      4.2. Breath detection method 23
        4.2.1. Algorithm for breath detection 23
        4.2.2. Background elimination 25
        4.2.3. Registration: alignment of depth to color image 26
        4.2.4. Creating a mask for the RGB image 27
        4.2.5. Corner detection 28
        4.2.6. Finding suitable regions to get signals by applying the Lucas-Kanade algorithm 28
        4.2.7. Getting signals of features on the chest of the patient 34
        4.2.8. Calculating the inspiratory-expiratory phase and cycle of breath, and predicting the time of deepest inhaling 34
    Chapter 5: EXPERIMENTAL RESULTS 39
      5.1. Analysis of the fluctuated time between the actual and predicted peak time of inhaling 39
      5.2. Analysis of the impact of the number of feature points on processing time 41
      5.3. Analysis of the impact of SID distance on the robustness of feature points 43
      5.4. Analysis of the impact of illumination on the robustness of feature points 45
      5.5. Analysis of the fluctuated time when averaging many previous cycles of breath 46
    Chapter 6: CONCLUSIONS AND FUTURE WORK 52
      6.1. Conclusions 52
      6.2. Future work 53
    REFERENCES 55
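The peak-time prediction of Section 4.2.8, which the abstract compares in two variants (averaging many previous breath cycles versus using only the most recent one), can be sketched as follows. This is an illustrative helper under assumed inputs, not the thesis's code; `predict_next_peak` and its sample timestamps are hypothetical:

```python
def predict_next_peak(peak_times, window=None):
    """Predict the next inhale-peak time from past peak timestamps (seconds).

    window=None averages over all observed breath cycles; window=1 uses only
    the most recent cycle -- the two strategies compared in the thesis.
    """
    if len(peak_times) < 2:
        raise ValueError("need at least two past peaks to estimate a cycle")
    # Breath-cycle lengths are the gaps between consecutive inhale peaks.
    cycles = [b - a for a, b in zip(peak_times, peak_times[1:])]
    if window is not None:
        cycles = cycles[-window:]
    mean_cycle = sum(cycles) / len(cycles)
    return peak_times[-1] + mean_cycle

peaks = [0.0, 5.0, 10.0, 14.0]             # detected inhale peaks (seconds)
print(predict_next_peak(peaks))            # all-cycle average
print(predict_next_peak(peaks, window=1))  # single previous cycle -> 18.0
```

The abstract's finding maps onto the window size: a long averaging window smooths short-term jitter in normal breathing, while a short one adapts faster when breath amplitude and cycle length drift over time.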

    [1] K. P. Cohen, J. G. Webster, J. Northern, Y. H. Hu, and W. J. Tompkins, "Breath detection using fuzzy sets and sensor fusion," in Proceedings of the 16th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, vol. 2, pp. 1067-1068, 1994.
    [2] K. P. Cohen, Y. H. Hu, W. J. Tompkins, and J. G. Webster, "Breath detection using a fuzzy neural network and sensor fusion," in International Conference on Acoustics, Speech, and Signal Processing (ICASSP-95), vol. 5, pp. 3491-3494, 1995.
    [3] D. Hanawa, T. Morimoto, S. Terada, T. Sakai, S. Shimazaki, K. Igarashi, et al., "Nasal cavity detection in facial thermal image for non-contact measurement of breathing," in 35th International Conference on Telecommunications and Signal Processing (TSP), pp. 586-590, 2012.
    [4] D. Hanawa, T. Morimoto, S. Terada, T. Sakai, S. Shimazaki, K. Igarashi, et al., "Nose detection in far infrared image for non-contact measurement of breathing," in Proceedings of 2012 IEEE-EMBS International Conference on Biomedical and Health Informatics, pp. 878-881, 2012.
    [5] O. Yahya and M. Faezipour, "Automatic detection and classification of acoustic breathing cycles," in 2014 Zone 1 Conference of the American Society for Engineering Education (ASEE Zone 1), pp. 1-5, 2014.
    [6] Y. W. Bai, W. T. Li, and Y. W. Chen, "Design and implementation of an embedded monitor system for detection of a patient's breath by double Webcams in the dark," in 2010 12th IEEE International Conference on e-Health Networking Applications and Services (Healthcom), pp. 93-98, 2010.
    [7] Y. W. Bai, W. T. Li, and Y. W. Chen, "Design and implementation of an embedded monitor system for detection of a patient's breath by double Webcams," in 2010 IEEE International Workshop on Medical Measurements and Applications Proceedings (MeMeA), pp. 171-176, 2010.
    [8] Y. W. Bai, Y. W. Chen, and W. T. Li, "Design of an embedded monitor system with a low-power laser projection for the detection of a patient's breath," in 2011 IEEE International Conference on Consumer Electronics (ICCE), pp. 553-554, 2011.
    [9] Y. W. Bai, W. T. Li, and C. H. Yeh, "Design and implementation of an embedded monitor system for body breath detection by using image processing methods," in 2010 Digest of Technical Papers International Conference on Consumer Electronics (ICCE), pp. 193-194, 2010.
    [10] H. Y. Wu, M. Rubinstein, E. Shih, J. Guttag, F. Durand, et al., "Eulerian video magnification for revealing subtle changes in the world," ACM Trans. Graph., vol. 31, pp. 1-8, 2012.
    [11] G. Balakrishnan, F. Durand, and J. Guttag, "Detecting Pulse from Head Motions in Video," in 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3430-3437, 2013.
    [12] B. D. Lucas, "Generalized Image Matching by the Method of Differences," doctoral dissertation, Robotics Institute, Carnegie Mellon University, July 1984.
    [13] J. Y. Bouguet, "Pyramidal Implementation of the Lucas-Kanade Feature Tracker: Description of the Algorithm," Intel Corporation Microprocessor Research Labs, 2000.
    [14] B. D. Lucas and T. Kanade, "An iterative image registration technique with an application to stereo vision," in Proceedings of the 7th International Joint Conference on Artificial Intelligence, vol. 2, Vancouver, BC, Canada, 1981.
    [15] J. Y. Bouguet, "Pyramidal implementation of the affine Lucas-Kanade feature tracker: description of the algorithm," Intel Corporation, 5, pp. 1-10, 2001.
    [16] J. Shi and C. Tomasi, "Good features to track," in Proceedings of the 1994 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '94), pp. 593-600, 1994.
    [17] C. Tomasi and T. Kanade, "Detection and Tracking of Point Features," Carnegie Mellon University Technical Report CMU-CS-91-132, April 1991.
    [18] J. Serra, Image Analysis and Mathematical Morphology. Academic Press, Inc., 1983.
    [19] Dilation morphology: https://en.wikipedia.org/wiki/Dilation_(morphology)
    [20] Align depth image to RGB image for Kinect camera: http://stackoverflow.com/questions/21849512/how-to-align-rgb-and-depth-image-of-kinect-in-opencv
    [21] Bumblebee camera: https://www.ptgrey.com/stereo-vision-cameras-systems
    [22] Time of Flight camera: http://en.wikipedia.org/wiki/Time-of-flight_camera
    [23] Structured light technology: http://en.wikipedia.org/wiki/Structured_light
    [24] D-Imager 3D sensing technology: http://www2.panasonic.biz/es/densetsu/device/3DImageSensor/en/product.html
    [25] pdm[Vision]® CamCube 3.0: http://www.pmdtec.com/news_media/video/camcube.php
    [26] Swiss Ranger SR4000: http://www.adept.net.au/cameras/Mesa/SR4000.shtml
    [27] Fotonic 3D smart camera: http://www.fotonic.com/
    [28] Kinect for Xbox 360: http://www.xbox.com/en-US/xbox-360/accessories/kinect
    [29] Kinect for Windows Sensor Components and Specifications: http://msdn.microsoft.com/en-us/library/jj131033.aspx
    [30] Kinect for Xbox 360 specification: http://support.personify.com/customer/portal/articles/1762770-microsoft---kinect-for-xbox-360---specs-and-availability
    [31] Asus Xtion Pro Live: https://www.asus.com/3D-Sensor/Xtion_PRO_LIVE/specifications/
    [32] Kinect for Xbox one: http://www.xbox.com/en-us/xbox-one/accessories/kinect-for-xbox-one
    [33] G. R. Bradski and A. Kaehler, Learning OpenCV, 1st ed. O'Reilly Media, Inc., 2008.

    Full text release date: 2021/07/19 (campus network)
    Full text release date: 2026/07/19 (off-campus network)
    Full text release date: 2026/07/19 (National Central Library: Taiwan NDLTD system)