
Graduate Student: Meng-Che Hsieh (謝孟哲)
Thesis Title: Automatic Video Analysis of Get-Up Detection, Human Tracking and Recognition in Application to Patient Behavior Monitoring in Psychiatric Wards (應用於精神病院病房行為觀測之全自動影像分析: 起床偵測、病人追蹤、身分辨識)
Advisor: Ching-Wei Wang (王靖維)
Committee Members: Chung-Ming Chen (陳中明), Jong-Woei Whan (黃忠偉), Shuenn-Tsong Young (楊順聰), Yung-Nien Sun (孫永年)
Degree: Master
Department: Graduate Institute of Biomedical Engineering, College of Applied Science and Technology
Year of Publication: 2016
Academic Year of Graduation: 104 (2015-2016)
Language: Chinese
Number of Pages: 73
Keywords (Chinese): 移動偵測、物件追蹤、物件辨識、監控技術
Keywords (English): Motion Detection, Object Tracking, Object Recognition, Monitoring and Surveillance Technologies
In recent years, the rapid development and popularization of computers, digital imaging devices, and the Internet have driven steady progress in video and audio streaming, and video analysis techniques have consequently found wide application. Among these, motion detection is one of the most widely studied topics in computer vision.
    This thesis develops an algorithm, based on motion detection, that works with network cameras and automatically tracks patients' movements inside a psychiatric ward. Depending on the shooting environment, a network camera produces either ordinary visible-light images or infrared night-vision images. Infrared night-vision images are grayscale and therefore carry less information than visible-light color images, and they are also inferior in intensity variation and resolution. As a result, developing a tracking algorithm for invisible-infrared footage faces more obstacles: for example, objects whose grayscale intensity is too close to the background cannot be extracted effectively, and blurred edges make contours hard to detect.
    Exploiting the characteristics of human objects moving through an image sequence, this study designs a motion detection technique that recognizes patient states such as getting up, moving, and resting. For visible-light images, objects are tracked by searching for similar moving objects nearby. Despite the many limitations of invisible-infrared images, this study develops a corresponding algorithm using content-based object tracking, region-of-interest (ROI) placement, cost computation, and weight assignment, which detects and tracks objects stably in invisible-infrared image sequences.
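The frame-differencing idea behind the state detection described above can be illustrated with a minimal sketch. The thesis text does not give its actual thresholds or implementation, so the function name, pixel representation, and threshold values below are assumptions for illustration only:

```python
def frame_diff_motion(prev, curr, pixel_thresh=25, motion_thresh=0.02):
    """Classify motion between two grayscale frames (nested lists of 0-255 ints).

    Counts pixels whose intensity changed by more than pixel_thresh; if the
    changed fraction exceeds motion_thresh the frame pair is labeled 'moving',
    otherwise 'resting'.
    """
    changed = 0
    total = 0
    for row_p, row_c in zip(prev, curr):
        for p, c in zip(row_p, row_c):
            total += 1
            if abs(p - c) > pixel_thresh:
                changed += 1
    return 'moving' if changed / total > motion_thresh else 'resting'

# Two tiny 4x4 frames: a bright 2x2 "object" shifts one pixel to the right.
f1 = [[0] * 4 for _ in range(4)]
f2 = [[0] * 4 for _ in range(4)]
for r in (1, 2):
    f1[r][0] = f1[r][1] = 200
    f2[r][1] = f2[r][2] = 200
print(frame_diff_motion(f1, f2))  # moving: 4 of 16 pixels changed
```

A real system would operate on camera frames and add noise filtering, but the same changed-pixel ratio is the core signal that separates a moving patient from a resting one.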


In recent years, owing to the rapid development of computing devices and the popularization of the Internet, video and audio streaming technologies have progressed constantly. At the same time, video analysis techniques are increasingly applied in computer vision, where motion detection is one of the most popular topics.
    In this thesis, a motion detection system using a web camera is proposed to automatically track patient movement in a psychiatric ward. Depending on the environment, a web camera can produce both visible-light images and invisible-infrared night-vision images. However, developing an object tracking algorithm for infrared images is more challenging than for visible-light images, because the grayscale images produced by infrared night vision contain less color information than visible-light images; besides, their resolution and intensity range are also inferior.
    Therefore, this research proposes a motion tracking technique based on analyzing the movement features of a human object in an image sequence. The algorithm classifies human movement into three events: waking up, moving, and resting. For visible-light image sequences, the system tracks an object by finding similar moving objects in its neighborhood. For infrared images, given their limitations, objects are tracked using multiple cues, including a content-based object tracking method, region-of-interest (ROI) setting, cost calculation, and weight calculation. The experimental results show that this method detects and tracks moving objects accurately in both visible-light and invisible-infrared image sequences.
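The cost-and-weight matching mentioned above can be sketched as a weighted nearest-candidate assignment. The actual cost terms and weights used in the thesis are not stated in the abstract, so the features (centroid distance, area change, intensity change) and weight values below are illustrative assumptions:

```python
import math

def match_object(prev_obj, candidates, w_dist=1.0, w_area=0.5, w_int=0.5):
    """Pick the candidate in the current frame that best matches prev_obj.

    Each object is a dict with 'centroid' (x, y), 'area' (pixel count), and
    'intensity' (mean grayscale).  The cost is a weighted sum of centroid
    distance, area change, and intensity change; the lowest cost wins.
    """
    def cost(cand):
        dx = cand['centroid'][0] - prev_obj['centroid'][0]
        dy = cand['centroid'][1] - prev_obj['centroid'][1]
        return (w_dist * math.hypot(dx, dy)
                + w_area * abs(cand['area'] - prev_obj['area'])
                + w_int * abs(cand['intensity'] - prev_obj['intensity']))
    return min(candidates, key=cost)

patient = {'centroid': (50, 80), 'area': 400, 'intensity': 120}
frame_objects = [
    {'centroid': (52, 81), 'area': 390, 'intensity': 118},  # same person, slight move
    {'centroid': (200, 30), 'area': 60, 'intensity': 200},  # unrelated bright blob
]
best = match_object(patient, frame_objects)
print(best['centroid'])  # (52, 81)
```

Weighting multiple cues in this way is what lets a tracker stay stable in low-contrast infrared footage: when intensity alone cannot separate a patient from the background, position and size continuity still can.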

Table of Contents
Abstract (Chinese); Abstract (English); Acknowledgments; Table of Contents; List of Figures
Chapter 1 Introduction
  1.1 Motivation
  1.2 Objectives
  1.3 Contributions
  1.4 Thesis Organization
Chapter 2 Background
  2.1 Related Work on Motion Detection, Tracking, and Recognition
  2.2 Related Work on Human Motion Detection, Tracking, and Recognition
Chapter 3 Methods and Experiments
  3.1 Motion Detection
  3.2 Object Detection
  3.3 Object Tracking
    3.3.1 Object Tracking in Visible-Light Images
    3.3.2 Object Tracking in Invisible-Infrared Images
    3.3.3 Object Status
  3.4 Object Identification
Chapter 4 Experimental Results
  4.1 Results on Visible-Light Videos
  4.2 Results on Invisible-Infrared Videos
Chapter 5 Conclusions and Future Work
  5.1 Conclusions
  5.2 Future Work
References


Full-text release date: 2021/08/22 (campus network)
Full text not authorized for public release (off-campus network)
Full text not authorized for public release (National Central Library: Taiwan NDLTD system)