
Student: Quoc-Viet Tran
Thesis Title: Intelligent Non-Invasive Biomedical Signal Detection from Image Analysis (基於影像分析的智能非侵入式生醫訊息檢測)
Advisor: Shun-Feng Su (蘇順豐)
Committee Members: Tsu-Tian Lee (李祖添), Wen-June Wang (王文俊), Yo-Ping Huang (黃有評), Mei-Yung Chen (陳美勇), Ching-Chih Tsai (蔡清池), Sheng-Dong Xu (徐勝均), Chung-Hsien Kuo (郭重顯), Shun-Feng Su (蘇順豐)
Degree: Doctorate
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2020
Graduation Academic Year: 108 (2019/2020)
Language: English
Number of Pages: 139
Keywords: breath detection, heart rate monitoring, remote photoplethysmography, vital signs, biomedical signal, blood pressure, pulse signal, adaptive pulsatile plane, camera-based, beat per minute, Lucas-Kanade
Abstract: Noncontact image-based vital sign detection is attracting considerable attention compared to contact-based approaches because of its hygiene, robustness, and cost-effectiveness. It can measure multiple individuals simultaneously, which suits various surveillance applications, and a widely available noncontact image-based detection system lets vital signs be checked at home with an ordinary camera. This dissertation therefore aims to build a fully intelligent noninvasive biomedical signal detection system from image analysis for clinical scenarios. Starting from the Eulerian and Lagrangian perspectives, we build a system that detects breathing rate, heart rate, and blood pressure. State-of-the-art object detection and instance segmentation algorithms, including YOLOv3, Faster R-CNN, and DeepLabv3+, are used to localize the regions of interest (chest, face, palm). Pyramidal Lucas-Kanade and remote photoplethysmography are the two main techniques for extracting motion signals (breath, pulse) and the subtle color change induced by the pulse, respectively. In addition, digital signal processing removes undesired noise to obtain a clean biosignal. In the experiments conducted, our system detects the breathing rate and heart rate of multiple individuals in real time, at long distances, and under motion. The X-ray shooting assistant system detects the peak times of the inspiratory phase with an error of ±1 respiration per minute (rpm). For heart rate detection, the proposed Adaptive Pulsatile Plane (APP) is robust and stable in fitness-motion cases, in dim lighting, and at distances of up to 4 meters without camera zoom. For the noninvasive blood pressure estimation system, the proposed deep learning model removes the dependence on a high-speed camera found in previous works and satisfies two medical standards (British Hypertension Society and Association for the Advancement of Medical Instrumentation) in estimating systolic blood pressure (SBP) and diastolic blood pressure (DBP), with root mean squared errors of 7.942/7.912 mmHg and mean absolute errors of 6.556/6.372 mmHg for SBP/DBP, respectively. The proposed approach estimates blood pressure reliably with only an ordinary 30-fps webcam in a noncontact, continuous manner. It can thus be concluded that our system is applicable to healthcare applications.
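To make the remote-photoplethysmography idea in the abstract concrete, the sketch below shows a minimal camera-based heart-rate estimate: average the green channel over a detected face region, band-pass the resulting trace to the plausible pulse band, and read the dominant frequency as beats per minute. This is only an illustration of the general rPPG principle, not the dissertation's Adaptive Pulsatile Plane method; the Haar face detector stands in for the deep detectors named above, and the file name subject.mp4 is a placeholder.

# Minimal rPPG heart-rate sketch (illustrative only, not the thesis's
# Adaptive Pulsatile Plane method). Assumes a 30-fps video; OpenCV's Haar
# face detector stands in for the deep detectors named in the abstract.
import cv2
import numpy as np
from scipy.signal import butter, filtfilt

FPS = 30.0

def bandpass(signal, low=0.7, high=4.0, fs=FPS, order=3):
    """Keep 0.7-4 Hz (roughly 42-240 bpm), discarding drift and noise."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)

def heart_rate_from_video(path, seconds=15):
    cap = cv2.VideoCapture(path)
    face_det = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    greens = []
    while len(greens) < int(seconds * FPS):
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_det.detectMultiScale(gray, 1.3, 5)
        if len(faces) == 0:
            continue
        x, y, w, h = faces[0]
        roi = frame[y:y + h, x:x + w]
        # Mean green channel of the face ROI carries the strongest pulse signal.
        greens.append(roi[:, :, 1].mean())
    cap.release()

    pulse = bandpass(np.asarray(greens) - np.mean(greens))
    spectrum = np.abs(np.fft.rfft(pulse))
    freqs = np.fft.rfftfreq(len(pulse), d=1.0 / FPS)
    # Dominant frequency inside the pass band, converted to beats per minute.
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

if __name__ == "__main__":
    print("Estimated HR: %.1f bpm" % heart_rate_from_video("subject.mp4"))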



Abstract i
Acknowledgment iii
Contents iv
List of Figures vii
List of Tables x
List of Abbreviations xi
Chapter 1. Introduction 1
  1.1. Background 1
  1.2. Motivation 3
  1.3. Contributions 4
  1.4. Dissertation Organization 5
Chapter 2. Theoretical Background 6
  2.1. Object Detection 6
    2.1.1. Conventional Approaches 6
    2.1.2. Deep Learning Approaches 7
  2.2. Lagrangian Technique 17
    2.2.1. Optical Flow 17
    2.2.2. Lucas-Kanade Optical Flow 18
  2.3. Eulerian Technique 21
    2.3.1. Eulerian Video Magnification 21
    2.3.2. Phase-Based Video Motion Magnification 21
    2.3.3. Riesz Pyramids for Fast Phase-Based Video Magnification 22
Chapter 3. Eulerian-Lagrangian-based Visual Assistance for Chest X-ray Shooting 24
  3.1. Introduction 24
  3.2. Literature Review 26
  3.3. System Description 28
  3.4. Feature Extraction 30
    3.4.1. Chest Localization 30
    3.4.2. Pyramidal Lucas-Kanade Based Feature Extraction 31
  3.5. Breath Extraction 33
    3.5.1. Signal Processing 33
    3.5.2. Peak Detection and Breath Information Extraction 36
  3.6. Riesz Pyramid-based Breath Motion Magnification 38
    3.6.1. Quaternion Representation of Riesz Pyramid 38
    3.6.2. Filtering Quaternionic Phase 40
    3.6.3. Amplification 41
  3.7. Experimental Results 41
    3.7.1. Evaluation of the Chest Detection 41
    3.7.2. Processing Time Evaluation 42
    3.7.3. Breath Estimation 44
  3.8. Summary 45
Chapter 4. Computer-Aided Detection and Diagnosis System for Chest X-Ray Images 46
  4.1. Introduction 46
  4.2. Pneumonia Binary Classification 46
  4.3. Multi-disease Classification 48
  4.4. Lung Diseases Localization and Classification 51
  4.5. Experimental Results 51
    4.5.1. Pneumonia Classification 51
    4.5.2. Multi-disease Classification 53
    4.5.3. Disease Localization and Classification 54
  4.6. Summary 55
Chapter 5. Adaptive Pulsatile Plane for Robust Noncontact Heart Rate Monitoring 57
  5.1. Introduction 57
  5.2. Literature Review 58
    5.2.1. Skin Reflection Model 58
    5.2.2. Existing rPPG Approaches 60
  5.3. System Description 62
  5.4. Skin Tone Extraction 64
  5.5. Pulse Extraction and Heart Rate Estimation 66
    5.5.1. Coordinate Transformation and Post-Processing 66
    5.5.2. Heart Rate Estimation 71
  5.6. Experimental Results 72
    5.6.1. Computational Time 72
    5.6.2. Skin Types 74
    5.6.3. Lighting Conditions 74
    5.6.4. Distances 76
    5.6.5. Motion Types 77
  5.7. Discussion 80
  5.8. Summary 83
Chapter 6. Deep Learning-based Online Noninvasive Blood Pressure Monitoring from Video Analysis 84
  6.1. Introduction 84
  6.2. Literature Review 85
  6.3. System Description 87
  6.4. Face and Palm Detection 89
    6.4.1. Single Shot Detector (SSD) 89
    6.4.2. Faster R-CNN 90
    6.4.3. Training and Evaluating Detection Models 91
  6.5. Pulse Signal Extraction 92
  6.6. Blood Pressure Estimation 94
    6.6.1. Dataset 94
    6.6.2. Design of the Proposed MLP Network 97
  6.7. Experimental Results 99
    6.7.1. Comparison of Different Window Sizes 99
    6.7.2. BHS Standard Evaluation 100
    6.7.3. AAMI Standard Evaluation 101
    6.7.4. Processing Time Evaluation 102
    6.7.5. Comparison with the Existing Approaches 105
  6.8. Summary 105
Chapter 7. Conclusions and Future Work 107
References 109
Publication Lists 122
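Chapter 3 in the outline above builds the breathing-rate pipeline from chest localization, pyramidal Lucas-Kanade tracking, and peak/frequency analysis. The sketch below illustrates that Lagrangian idea under simplifying assumptions: the chest bounding box is taken as given (in the dissertation it comes from an object detector), Shi-Tomasi corners inside it are tracked with OpenCV's pyramidal Lucas-Kanade, and the breathing rate is read from the dominant frequency of the mean vertical position of the tracked points. It is not the thesis's actual signal-processing chain, and the chest_roi argument is a hypothetical input.

# Minimal breathing-rate sketch using pyramidal Lucas-Kanade tracking
# (illustrative only; Chapter 3's chest detector, filtering, and peak
# detection are not reproduced here).
import cv2
import numpy as np

FPS = 30.0

def breathing_rate(path, chest_roi, seconds=30):
    """chest_roi = (x, y, w, h) in pixels; returns respirations per minute."""
    x, y, w, h = chest_roi
    cap = cv2.VideoCapture(path)
    ok, frame = cap.read()
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Pick Shi-Tomasi corners to track inside the chest region only.
    mask = np.zeros_like(prev_gray)
    mask[y:y + h, x:x + w] = 255
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                                  qualityLevel=0.01, minDistance=5, mask=mask)

    lk_params = dict(winSize=(21, 21), maxLevel=3,
                     criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT,
                               30, 0.01))
    motion = []
    for _ in range(int(seconds * FPS)):
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts,
                                                      None, **lk_params)
        good_new = new_pts[status.flatten() == 1]
        # Mean vertical position of the tracked chest points approximates the
        # respiratory waveform (a real system would re-detect lost features).
        motion.append(np.mean(good_new[:, 0, 1]))
        prev_gray, pts = gray, good_new.reshape(-1, 1, 2)
    cap.release()

    # Dominant frequency within a plausible breathing band
    # (0.1-0.7 Hz, i.e. 6-42 respirations per minute).
    sig = np.asarray(motion) - np.mean(motion)
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / FPS)
    spectrum = np.abs(np.fft.rfft(sig))
    band = (freqs >= 0.1) & (freqs <= 0.7)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]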


Full-text release date: 2025/05/11 (campus network)
Full-text release date: 2025/05/11 (off-campus network)
Full-text release date: 2025/05/11 (National Central Library: Taiwan Dissertations and Theses System)