
Author: Yen-Yu Lin (林彥佑)
Title: OCR-based Mobile Medication Prescription Bag Reader (利用光學字符辨識技術所設計之手機版藥袋用藥資訊辨識系統)
Advisor: Sheng-Luen Chung (鍾聖倫)
Committee members: 葉正聖, 賈叢林, 李宜勳, 曾元顯, 郭重顯
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Year of publication: 2015
Academic year: 103 (2014-2015)
Language: Chinese
Pages: 78
Keywords (Chinese): OCR (光學字符辨識), 銀髮族, 藥袋辨識, 手機程式 App
Keywords (English): Optical Character Recognition, medication adherence, context extraction

Abstract (translated from the Chinese):
Elderly people often forget to take their medication correctly and on time because of memory problems, which undermines the intended therapeutic effect and wastes medical resources. Since the prescription bags issued by hospitals and clinics in Taiwan all carry explicit medication information, this thesis presents an OCR-based mobile prescription bag reader that helps elderly users manage their own medication. In contrast to the few existing medication-reminder apps for the elderly, which require handwriting or keypad input, the proposed system only needs a photograph of the prescription bag taken with the phone's camera to extract the relevant medication information for that batch of bags. The challenge in realizing this user-friendly, natural form of input lies in integrating advanced image processing, optical character recognition (OCR), context extraction, and mobile programming. Accordingly, the main contributions of this thesis are: (1) conditioning prescription bag photographs taken under natural lighting so that they are suitable for subsequent text recognition; (2) classifying bags from different hospitals and clinics, then locating specific regions and deciphering, via OCR, the drug names, dosing frequency, and timing printed on them; (3) improving the recognition rate by validating and correcting the OCR output through context matching against the drug formulary and medication instructions provided by the Ministry of Health and Welfare; and (4) implementing a mobile solution that is easier for elderly users to operate and to keep records with than existing apps of the same kind. Beyond prescription bag recognition, the combination of OCR and context extraction developed here can also be applied to other context-oriented image applications. As a demonstration, the system is validated on prescription bags issued by eight hospitals and clinics in Taiwan. On this basis, the system can also be extended to record the user's medication history as a reference for physicians when writing prescriptions, moving toward personalized medication use and collectively informed prescribing for the elderly.
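The front end of this pipeline (locating the bag border, rectifying the perspective, and removing shading before OCR) can be sketched as follows. This is only a minimal illustration assuming an OpenCV implementation; the Canny thresholds, target size, and adaptive-threshold parameters are illustrative choices, not the settings used in the thesis.

```python
# Minimal preprocessing sketch (assumed OpenCV pipeline; parameter values are illustrative).
import cv2
import numpy as np

def preprocess_bag_image(path):
    """Find the bag outline, rectify the perspective, and suppress uneven lighting."""
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)

    # 1. Canny edges; the largest 4-corner contour is taken as the bag border.
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    quad = None
    for c in sorted(contours, key=cv2.contourArea, reverse=True):
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4:
            quad = approx.reshape(4, 2).astype(np.float32)
            break
    if quad is None:
        return None  # no plausible border found

    # 2. Order corners (top-left, top-right, bottom-right, bottom-left) and warp
    #    to a fronto-parallel view.
    s, d = quad.sum(axis=1), np.diff(quad, axis=1).ravel()
    src = np.float32([quad[np.argmin(s)], quad[np.argmin(d)],
                      quad[np.argmax(s)], quad[np.argmax(d)]])
    w, h = 800, 1000  # illustrative target size
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    warped = cv2.warpPerspective(gray, cv2.getPerspectiveTransform(src, dst), (w, h))

    # 3. Adaptive thresholding removes shadows and uneven illumination before OCR.
    return cv2.adaptiveThreshold(warped, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, 31, 15)
```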


Abstract (English):
Critical to the effectiveness of medical treatment and the allocation of medical resources, medication adherence is of paramount importance for elderly people, who tend to suffer from multiple chronic illnesses and inevitable cognitive impairment. This study proposes an OCR-based Mobile Prescription Bag Reader for Elders to enhance medication adherence. In contrast to conventional medication reminder designs, which rely on keypad or handwriting input, the proposed solution lets elderly users take pictures of prescription bags as the most natural form of input. The picture containing the medication details is then processed by optical character recognition (OCR) to extract the information needed for later automatic reminder notifications. To this end, several key techniques are adapted and integrated (image processing, OCR, context extraction, and mobile programming) to tackle the following issues: (1) preprocessing of prescription bag pictures, taken at arbitrary angles and under uncontrolled lighting, to facilitate subsequent OCR; (2) extraction and decipherment, from bags issued by different hospitals and clinics, of the drug names and regimen instructions printed on the bag image; (3) enhancement of OCR performance by a context-correction method that fits the recognition results to the correct vocabulary and context of medical prescriptions; and (4) design and implementation of an elder-friendly Android app that recognizes prescriptions from photographs without imposing undue constraints on the user. More generally, the developed combination of OCR and context extraction can also be applied to other context-oriented image applications. To demonstrate the validity of the proposed solution, prescription bags from eight hospitals and clinics are tested with our app. On top of that, further functions can be added, such as prompting medication-intake reminders and recording the medication-intake history, which can later inform subsequent prescriptions toward individualized care and shared decision-making.
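Contribution (3), validating noisy OCR output against a prescription vocabulary, can be illustrated with the short sketch below. It assumes Tesseract (via pytesseract) for recognition and uses simple similarity matching from Python's difflib; the tiny hard-coded vocabulary stands in for the Ministry of Health and Welfare formulary used in the thesis, and the cutoff value is an arbitrary illustrative choice.

```python
# Sketch of the context-correction idea: snap noisy OCR tokens to a known drug vocabulary.
import difflib
import pytesseract
from PIL import Image

DRUG_VOCAB = ["ACETAMINOPHEN", "AMOXICILLIN", "METFORMIN", "ASPIRIN"]  # illustrative subset

def read_and_correct(region_path, cutoff=0.75):
    """OCR one cropped prescription-bag region, then correct tokens against the vocabulary."""
    raw = pytesseract.image_to_string(Image.open(region_path), lang="eng+chi_tra")
    corrected = []
    for token in raw.split():
        # Replace the token only if it is sufficiently close to a vocabulary entry.
        match = difflib.get_close_matches(token.upper(), DRUG_VOCAB, n=1, cutoff=cutoff)
        corrected.append(match[0] if match else token)
    return " ".join(corrected)

# Example: an OCR token such as "AMOXICILL1N" would be corrected to "AMOXICILLIN".
```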

Table of Contents
Chinese Abstract; Abstract; Acknowledgements; Table of Contents; List of Figures; List of Tables
Chapter 1 Introduction: 1.1 Medication problems of the elderly; 1.2 Related work in Taiwan and abroad; 1.3 Research objectives; 1.4 Contributions; 1.5 Thesis organization
Chapter 2 Intelligent Prescription Bag Recognition and Aggregation System: 2.1 Overview; 2.2 Operation flow; 2.3 Technical challenges; 2.4 Literature review
Chapter 3 Prescription Bag Image Preprocessing (Border Detection, Rectification, and Shading Removal): 3.1 Overview; 3.2 Preprocessing; 3.3 Border detection with Canny edges; 3.4 Image rectification; 3.5 Shading removal with adaptive thresholding; 3.6 Results
Chapter 4 Bag Classification and Information Extraction (Classification and Region-Specific OCR): 4.1 Overview; 4.2 Classification and information extraction; 4.3 Histogram-based bag classification; 4.4 Key information extraction with the Component Block List; 4.5 Results (a classification sketch follows this outline)
Chapter 5 Improving the Recognition Rate (Context Matching and Correction): 5.1 Overview; 5.2 Context; 5.3 Prescription vocabulary (drug names and dosages); 5.4 Results
Chapter 6 System Design, Implementation, and Demonstration: 6.1 Overview; 6.2 Software architecture; 6.3 Execution flow; 6.4 Reminder notifications; 6.5 Performance analysis; 6.6 Automatic recognition of bags from different hospitals and clinics
Chapter 7 Conclusions and Future Work: 7.1 Improving recognition techniques; 7.2 Extending the medication reminder and aggregation system; 7.3 Improving medication use among the elderly
References; Appendix A
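The histogram-based bag classification named in Section 4.3 (deciding which hospital's or clinic's layout a photographed bag follows) can be sketched under the assumption of a simple grayscale-histogram comparison. The per-hospital template images and the correlation metric below are illustrative stand-ins, not the thesis's actual classifier.

```python
# Sketch of histogram-based bag classification: compare the query bag's grayscale
# histogram against one stored template histogram per hospital and pick the closest.
import cv2

def gray_hist(path, bins=64):
    """Normalized grayscale histogram of an image file."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    h = cv2.calcHist([img], [0], None, [bins], [0, 256])
    return cv2.normalize(h, h).flatten()

def classify_bag(query_path, templates):
    """templates: dict mapping hospital name -> template bag image path."""
    q = gray_hist(query_path)
    scores = {name: cv2.compareHist(q, gray_hist(p), cv2.HISTCMP_CORREL)
              for name, p in templates.items()}
    return max(scores, key=scores.get)  # highest correlation = most similar layout
```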

