
Graduate student: 陳建樺 (Chien-Hua Chen)
Thesis title: 設計智慧手套實現即時手語翻譯系統
Smart Glove Design for Real Time Sign Language Interpretation System Implementation
Advisor: 姚智原 (Chih-Yuan Yao)
Committee members: 莊永裕 (Yung-Yu Chuang), 戴文凱 (Wen-Kai Tai), 賴祐吉 (Yu-Chi Lai), 朱宏國 (Hung-Kuo Chu)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Computer Science and Information Engineering
Year of publication: 2017
Academic year of graduation: 105
Language: Chinese
Pages: 44
Keywords (Chinese): 手語辨識、穿戴式裝置、動作檢索
Keywords (English): Sign recognition, Wearable device, Motion retrieval

This thesis proposes a sign language recognition system. The system uses self-built data gloves to analyze and decompose sign gestures, measures the continuity of motions through data retrieval, and assembles the results into the corresponding sign vocabulary. Wearing the data gloves, the user performs sign gestures; the gloves convert sensor signals into digital data and transmit them wirelessly to a terminal device, which searches a database for the most similar continuous motion. Once a gesture is recognized, the result is conveyed to the other party in both text and speech.


Our approach relies on a sign-capturing device and a sign recognition algorithm. We design a pair of data gloves. Based on the feature input, a database can be constructed for each sign word. For the recognition algorithm, we treat each sign as a composition of continuous motion points. When signing, the data gloves capture hand motions and transmit features to the interpretation system over a wireless protocol. When a sign is recognized successfully, it is presented as both text and speech to the user and the other party.
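The abstract treats each sign as a sequence of continuous motion points matched against a per-word database. The exact similarity measure is not given on this page; the sketch below is one plausible form of such retrieval, using dynamic time warping (an assumed choice) over hypothetical per-frame feature vectors such as finger-flex and orientation readings:

```python
import math

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between two feature sequences.

    Each sequence is a list of frames; each frame is a list of floats
    (e.g., finger-flex and orientation values from the glove sensors).
    """
    n, m = len(seq_a), len(seq_b)
    inf = float("inf")
    # cost[i][j] = minimal accumulated distance aligning seq_a[:i] with seq_b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(seq_a[i - 1], seq_b[j - 1])  # Euclidean frame distance
            cost[i][j] = d + min(cost[i - 1][j],      # skip a frame of seq_a
                                 cost[i][j - 1],      # skip a frame of seq_b
                                 cost[i - 1][j - 1])  # match both frames
    return cost[n][m]

def recognize(query, database):
    """Return the sign word whose template sequence is closest to the query."""
    return min(database, key=lambda word: dtw_distance(query, database[word]))
```

A query captured from the gloves would then be matched by `recognize(query, database)`, where `database` maps each sign word to a pre-recorded template sequence; both names are illustrative, not from the thesis.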

Chinese Abstract ...... i
Abstract ...... ii
Table of Contents ...... iii
List of Tables ...... v
List of Figures ...... vi
Notation ...... viii
Chapter 1: Introduction ...... 1
Chapter 2: Background and Related Work ...... 5
Chapter 3: Data Glove Hardware Design ...... 13
Chapter 4: System Overview ...... 18
Chapter 5: Gesture Capture and Recognition Algorithm ...... 19
Chapter 6: Experimental Results and Discussion ...... 35
Chapter 7: Conclusion and Future Work ...... 39
References ...... 42

[1] 史文漢、丁立芬. 手能生橋. 中華民國聾人協會, 1997.
[2] 姚俊英. 台灣手語演進. NTNU 特殊教育學系, 2001.
[3] 蔡素娟、戴浩一、陳怡君. 【台灣手語線上辭典】第三版中文版, 2015.
[4] 5DT. 5DT Data Glove 14 Ultra, 2011.
[5] Microsoft Corp. Kinect for Xbox 360, 2010.
[6] W.C. Stokoe Jr. Sign language structure: An outline of the visual communication systems of the American deaf. Journal of Deaf Studies and Deaf Education, 10(1):3–37, 2005.
[7] R.E. Kalman. A new approach to linear filtering and prediction problems. Journal of Basic Engineering, 82(1):35–45, 1960.
[8] G.C. Lee, F.H. Yeh, and Y.H. Hsiao. Kinect-based Taiwanese sign-language recognition system. Multimedia Tools Appl., 75(1):261–279, 2016.
[9] Y.H. Lee and C.Y. Tsai. Taiwan Sign Language (TSL) recognition based on 3D data and neural networks. Expert Syst. Appl., 36(2):1123–1128, 2009.
[10] R.-H. Liang and M. Ouhyoung. A real-time continuous gesture recognition system for sign language. In Proceedings of the 3rd. International Conference on Face & Gesture Recognition, FG ’98, pages 558–565, 1998.
[11] R. Mahony, T. Hamel, and J.-M. Pflimlin. Nonlinear complementary filters on the special orthogonal group. IEEE Transactions on Automatic Control, 53(5):1203–1218, 2008.
[12] E. Ohn-Bar and M. M. Trivedi. Hand gesture recognition in real time for automotive interfaces: A multimodal vision-based approach and evaluations. IEEE Transactions on Intelligent Transportation Systems, 15(6):2368–2377, 2014.
[13] I. Oikonomidis, N. Kyriazis, and A.A. Argyros. Efficient model-based 3D tracking of hand articulations using Kinect. In Proceedings of the British Machine Vision Conference, BMVC 2011, page 3, 2011.
[14] I. Oikonomidis, N. Kyriazis, and A.A. Argyros. Tracking the articulated motion of two strongly interacting hands. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2012, pages 1862–1869, 2012.
[15] I. Oikonomidis, M.I.A. Lourakis, and A.A. Argyros. Evolutionary quasi-random search for hand articulations tracking. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, CVPR ’14, pages 3422–3429, 2014.
[16] J. Tautges, A. Zinke, B. Krüger, J. Baumann, A. Weber, T. Helten, M. Müller, H.P. Seidel, and B. Eberhardt. Motion reconstruction using sparse accelerometer data. ACM Trans. Graph., 30(3):251–276, 2011.
[17] National Taiwan Normal University. 常用手語辭典, 2015.
[18] H.D. Yang, S. Sclaroff, and S.W. Lee. Sign language spotting with a threshold model based on conditional random fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(7):1264–1277, 2009.
[19] Q. Ye, S. Yuan, and T.K. Kim. Spatial attention deep net with partial PSO for hierarchical hybrid hand pose estimation. In ECCV, 2016. arXiv:1604.03334.
[20] Y.-P. Zhang, T. Han, Z.-M. Ren, N. Umetani, X. Tong, Y. Liu, T. Shiratori, and X. Cao. Bodyavatar: Creating freeform 3d avatars using first-person body gestures. In Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology, UIST ’13, pages 387–396, 2013.

Full text release date: 2022/08/21 (campus network)
Full text not authorized for public release (off-campus network)
Full text not authorized for public release (National Central Library: Taiwan NDLTD system)