
研究生: 蔡承益
Cheng-Yi Tsai
論文名稱: 智慧型牙刷設計與開發 基於機械學習暨趣味性情境設定
Design of Smart Tooth Brush for Analysis and Training of Dental Health with Machine Learning and Interesting Situation Setting
指導教授: 鄭正元
Jeng-Ywan Jeng
熊艾吉
Kumar Ajeet
高宜敏
Yi-Ming Kao
口試委員: 鄭正元
Jeng-Ywan Jeng
林上智
Shang-Chih Lin
高宜敏
Yi-Ming Kao
熊艾吉
Kumar Ajeet
學位類別: 碩士
Master
系所名稱: 工程學院 - 機械工程系
Department of Mechanical Engineering
論文出版年: 2022
畢業學年度: 110
語文別: 中文
論文頁數: 112
中文關鍵詞: 智慧裝置、電動牙刷、機械學習、YOLOv3、人體工學、口腔資訊收集、遊戲化情境設定
外文關鍵詞: Smart device, Electric toothbrush, Machine learning, YOLOv3, Ergonomics, Oral information collection, Gamification situation setting

機械學習目前已經發展多年,且在現在科技發展下也日新月異,越來越多的應用(如生醫、通信)也逐漸在生活中體現,同時也成為目前較為主流的研究方向,並同時替工業4.0開創新的道路。
根據報告指出,目前已有超過3.5億人受到口腔疾病的影響;加上COVID-19疫情的衝擊,現場就診變得更加困難,遠端治療因此逐漸受到口腔醫療體系重視。此外,醫療資源分配不均,許多已開發國家雖擁有完善的醫療體系,但礙於昂貴的費用與交通時間,民眾通常等到情況較為嚴重才願意就醫;而患者自身難以了解口腔狀況,若不就診,只能依靠自身感受進行自我診斷,準確度不高。口腔疾病開始造成疼痛時,往往已屬較嚴重的時期,最終仍需花費高額的治療費用就診。
本研究旨在開發一套智慧型牙刷系統。有別於現今普遍之電動牙刷,本系統加入攝影鏡頭以拍攝口腔環境,並利用所拍攝的影像進行病灶辨識。口腔掃描在牙醫領域雖非新技術,但通常只有大醫院或可容納大型機台的診所,才有辦法對口腔進行完整掃描;民生用品部分則多為單純的攝影機或單純的牙刷。透過結合以上兩種工具,本研究以較低成本開發出具口腔掃描功能之牙刷,拍攝口腔各部位並記錄其位置資訊,辨識病灶後可透過位置資訊回溯,使牙醫在診斷前即可藉由影像辨識做出初步判斷。
口腔疾病在當今社會越來越常見,因此從小培養正確的刷牙習慣有其必要;然而兒童並不一定喜歡刷牙,故本研究認為可利用遊戲化的方式改善兒童刷牙意願的問題,在跳脫一般刷牙情境的狀況下進行刷牙遊戲之設計。


Computer vision has developed rapidly and has revolutionized the technology world, becoming a cutting-edge research direction that couples branches of artificial intelligence (AI) with almost every industry, such as biomedicine, communications, and digital manufacturing. This implementation of AI in manufacturing has given new directions for Industry 4.0, resulting in zero-carbon or green manufacturing and energy savings.
The literature review confirmed that more than 350 million people are affected by oral diseases, and the impact of COVID-19 has made in-person visits to a doctor more difficult, so the development of remote treatment methods is increasingly valued and imperative for the oral medical system. Medical resources are also unevenly distributed: although many developed countries have comprehensive medical systems, expensive fees and travel time mean that patients usually wait until their condition becomes critical before seeking treatment, by which point recovery is in some cases almost impossible. Patients who do not seek medical treatment can only rely on self-diagnosis or self-medication, which is considered inaccurate and delays the needed care, allowing the condition to become more serious and ultimately resulting in delayed, high-cost treatment.
This research aims to develop a smart toothbrush that differs from existing electric toothbrushes by providing a system for visualizing and detecting areas affected by oral disease. A camera is added to capture the oral environment, and the captured photos are used for lesion identification. Although oral scanning is not a new technology in dentistry, usually only large hospitals or clinics that can accommodate the bulky scanning equipment are able to scan the oral cavity completely, while consumer products are mostly either plain intraoral cameras or plain toothbrushes. Combining the two, this work develops a low-cost toothbrush with an oral scanning function that records and captures images, together with their oral positions, during tooth brushing. By analyzing the recorded information, cavities and affected areas can be detected with their exact locations, which can be sent to the dentist for preliminary advice without a physical visit to the clinic.
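The detection-and-traceback idea described above can be sketched as follows. This is a minimal illustration, not the thesis implementation: the region names, confidence threshold, and detection tuple format are all assumptions standing in for the actual YOLOv3 output.

```python
# Sketch: pair per-frame lesion detections with the oral region recorded
# while brushing, so findings can be traced back to a location in the mouth.
# Region labels, the 0.5 threshold, and the detection format are
# illustrative assumptions, not the thesis' actual pipeline.

CONF_THRESHOLD = 0.5  # assumed minimum confidence for reporting a finding

def build_report(frames):
    """frames: list of dicts, each holding the oral region recorded during
    brushing and YOLOv3-style detections (label, confidence, bounding box).
    Returns preliminary findings a dentist could review remotely."""
    report = []
    for frame in frames:
        for label, conf, box in frame["detections"]:
            if conf >= CONF_THRESHOLD:
                report.append({
                    "region": frame["region"],  # traceback to oral location
                    "lesion": label,
                    "confidence": round(conf, 2),
                    "box": box,
                })
    return report

frames = [
    {"region": "upper-left-molars",
     "detections": [("caries", 0.87, (120, 60, 40, 32))]},
    {"region": "lower-front-incisors",
     "detections": [("plaque", 0.31, (80, 90, 25, 20))]},  # below threshold
]
print(build_report(frames))
```

Keeping the position label with each frame is what lets a low-confidence camera-on-a-toothbrush setup still produce an actionable report: the dentist sees not just "caries" but where in the mouth to look.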
Since oral diseases are becoming more and more common around the globe, it is necessary to build correct brushing habits from an early age. However, children often find brushing their teeth laborious, so gamification was adopted in the presented smart-brush technology to improve children's willingness to brush. This feature not only makes tooth brushing interesting but also trains children to brush in the recommended sequence for better dental health.
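The sequence-training element of the game can be sketched as a simple scoring rule: award points for regions brushed in the recommended order and hint at the next region. The region list and scoring rule here are assumptions for illustration, not the game developed in the thesis.

```python
# Sketch of gamified sequence checking: compare the order in which a child
# actually brushed each region against a recommended sequence. The sequence
# and one-point-per-correct-region rule are illustrative assumptions.

RECOMMENDED = ["upper-right", "upper-front", "upper-left",
               "lower-left", "lower-front", "lower-right"]

def score_session(brushed):
    """brushed: regions in the order the child brushed them.
    Returns (points, next_hint): one point for each region brushed in the
    recommended order, and the next region the game should prompt for."""
    points = 0
    expected = 0  # index of the next region in the recommended sequence
    for region in brushed:
        if expected < len(RECOMMENDED) and region == RECOMMENDED[expected]:
            points += 1
            expected += 1
    next_hint = RECOMMENDED[expected] if expected < len(RECOMMENDED) else None
    return points, next_hint

points, hint = score_session(["upper-right", "upper-front", "lower-left"])
print(points, hint)  # 2 points; game would next prompt "upper-left"
```

A rule of this shape turns the recommended brushing order into immediate in-game feedback, which is the mechanism the gamification paragraph relies on.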

目錄
摘要 I
ABSTRACT II
目錄 IV
表目錄 XIII
第一章、緒論 1
1.1 前言 1
1.2 研究動機 2
1.3 實驗流程 3
1.4 論文架構 4
第二章、文獻回顧 6
2.1 口腔清潔方式 6
2.1.1 牙刷生產技術 7
2.1.2 電動牙刷設計 8
2.2 智慧口腔生醫與牙齒資訊收集暨服務系統規畫 (Planning for Smart Oral Biomedicine and Dental Information Collection Service System) 9
2.2.1 智慧口腔資訊收集規畫 9
2.2.2 牙齒清潔監測與口腔資訊收集系統 10
2.3 人體工學外觀 16
2.4 電腦視覺 17
2.4.1 GAN 19
2.4.2 擴增資料庫 20
2.4.3 YOLOv1 21
2.4.4 YOLOv2 23
2.4.5 YOLOv3 24
2.5 引導式刷牙遊戲 26
第三章、智能牙刷軟硬件設計及圖像識別應用於智能牙刷 28
3.1 口腔生理資訊收集設備的選擇 28
3.2 牙刷外觀與內部結構設計 32
3.2.1 牙刷外觀第一版 33
3.2.2 攝影角度與距離測試 36
3.2.3 鏡頭LED位置與亮度測試 38
3.2.4 實驗用外觀與內部結構設計 39
3.3 光線波長對口腔內攝影之影響 42
3.3.1 近紅外光鏡頭 43
3.3.2 自然光鏡頭 44
3.4 資料儲存方式 44
3.4.1 拍攝資訊儲存於SD卡 45
3.4.2 拍攝資訊儲存於雲端 46
3.5 拍攝影像收集與口腔病灶檢測 47
3.5.1 拍攝影像預處理 48
3.5.2 YOLOv3 backbone 篩選 52
3.5.3 YOLOv3 使用及參數調整 54
3.6 趣味化刷牙 60
3.6.1 刷牙技術 60
3.6.2 趣味化刷牙遊戲開發 61
第四章、實驗結果與討論 65
4.1 口腔生理資訊擷取設備挑選結果 65
4.2 牙刷外觀與內部結構設計 66
4.2.1 鏡頭角度測試結果 66
4.2.2 光線波長對口腔內攝影之影響 69
4.2.3 鏡頭LED亮度測試 71
4.2.4 外觀設計 76
4.3 資料儲存方式 79
4.4 口腔病灶偵測結果 79
4.4.1 拍攝影像收集 80
4.4.2 口腔病灶測試 81
4.5 趣味化刷牙遊戲評估 86
4.5.1 刷牙遊戲情境分析 86
4.5.2 遊戲結果與討論 88
第五章、結論與未來展望 91
5.1 結論 91
5.2 未來展望 91
參考文獻 92
附錄1、專利申請(發明) 95


全文公開日期 2027/07/22 (校內網路)
全文公開日期 2027/07/22 (校外網路)
全文公開日期 2027/07/22 (國家圖書館:臺灣博碩士論文系統)