
Author: Ming-Jui Hsu (徐明睿)
Title: A Motion Learning System with Visual Cues and Scoring (視覺化動作學習自動評分系統)
Advisor: Chuan-Kai Yang
Committee members: Bor-Shen Lin, Kai-Long Hua
Degree: Master
Department: Department of Information Management, School of Management
Year of publication: 2019
Graduating academic year: 107 (2018-2019)
Language: Chinese
Pages: 56
Keywords: Kinect, Dynamic Time Warping, Real-time Visualization Motion Comparison, Auto Scoring System
Hits: 234; Downloads: 0

In this thesis, we propose a motion learning system with visual cues and scoring. It can be applied to all kinds of motions. Through the system's visual feedback, learners can more easily understand the differences between motions and thereby improve their learning outcomes.

To provide a visual display and an immersive learning experience, virtual avatars of both the learner and the teacher perform the desired motions. The avatars are renormalized according to each user's body proportions, which reduces errors caused by differences in body shape and increases the system's accuracy.
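The renormalization step above can be sketched as follows. This is a minimal assumed approach, not the thesis's actual code: the skeleton, the joint names, and the parent-first ordering of `BONES` are all hypothetical, standing in for the full Kinect joint hierarchy.

```python
# Sketch of body-proportion normalization: rescale each of the learner's bone
# vectors to the teacher's reference bone length, walking parent -> child so
# that joint positions stay connected.
import numpy as np

# Hypothetical skeleton fragment: (parent, child) bones, ordered parent-first.
BONES = [("spine", "neck"), ("neck", "head"),
         ("spine", "shoulder_l"), ("shoulder_l", "elbow_l"),
         ("elbow_l", "wrist_l")]

def normalize_pose(pose, ref_lengths, root="spine"):
    """pose: {joint: np.ndarray xyz}; ref_lengths: {(parent, child): length}."""
    out = {root: pose[root].copy()}
    for parent, child in BONES:
        v = pose[child] - pose[parent]          # learner's bone vector
        n = np.linalg.norm(v)
        if n > 1e-9:
            v = v / n * ref_lengths[(parent, child)]  # keep direction, rescale length
        out[child] = out[parent] + v            # reattach to already-normalized parent
    return out
```

After this rescaling, differences between the two avatars' joint positions reflect posture rather than limb length, which is what the error reduction in the paragraph above refers to.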

A somatosensory sensor is used to capture the joint feature nodes of the human body. The system records the movements of a teacher, so the teacher does not have to repeat the demonstration, and a learner can use the system in any suitable place. This saves the time cost of repeated teaching and makes repeated practice more convenient for the learner.
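A minimal sketch of what such motion recording might look like. The frame format (a timestamp plus a joint-to-coordinates map) and the `MotionRecorder` class are assumptions for illustration, not the thesis's implementation:

```python
# Record a stream of per-frame joint positions so a teacher's motion can be
# saved once and replayed later without the teacher present.
import json
import time

class MotionRecorder:
    def __init__(self):
        self.frames = []  # list of (timestamp, {joint_name: [x, y, z]})

    def add_frame(self, joints, t=None):
        """Append one captured frame; t defaults to a monotonic timestamp."""
        self.frames.append((time.monotonic() if t is None else t, joints))

    def save(self, path):
        with open(path, "w") as f:
            json.dump(self.frames, f)

    @staticmethod
    def load(path):
        rec = MotionRecorder()
        with open(path) as f:
            # JSON turns tuples into lists; restore the (t, joints) pairs.
            rec.frames = [(t, joints) for t, joints in json.load(f)]
        return rec
```

A saved recording like this is what lets the learner replay and compare against the teacher's motion at any time and place.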

The system reduces the limitations of motion recognition and motion learning. The overall score feedback, the real-time color-difference display, the real-time movement evaluation feedback, and the motion-path trail effect together give teachers and learners an objective basis for reference.
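The Dynamic Time Warping named in the keywords can be illustrated with the textbook recurrence below. The thesis develops improved and adaptive DTW variants, so this is only the baseline formulation; the idea is that aligning the learner's sequence to the teacher's in time keeps speed differences from dominating the score.

```python
# Baseline Dynamic Time Warping distance between two joint-feature sequences.
import numpy as np

def dtw_distance(a, b):
    """a: (n, d) array, b: (m, d) array of per-frame joint features."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)  # accumulated-cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # frame-to-frame distance
            # Extend the cheapest of: insertion, deletion, match.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

The warped distance can then be mapped to a score, e.g. `score = max(0, 100 - k * D)` for some calibration constant `k`; that mapping is an assumption here, not the thesis's score formula.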

Table of Contents

Recommendation Letter
Approval Certificate
Chinese Abstract
English Abstract
Acknowledgments
Table of Contents
List of Figures
List of Tables
Chapter 1  Introduction
  1.1 Motivation
  1.2 Objectives
  1.3 Thesis Organization
Chapter 2  Literature Review
  2.1 Virtual Model Construction
  2.2 Skeleton Rigging
  2.3 Somatosensory Detection
  2.4 Dynamic Time Warping
  2.5 Accelerating DTW
  2.6 Motion Learning Applications
  2.7 Comparison with Other Systems
Chapter 3  System and Methods
  3.1 System Architecture and Workflow
    3.1.1 Step 1: Modeling and Skeleton Rigging
    3.1.2 Step 2: Teacher Motion Recording
    3.1.3 Step 3: System Operation
    3.1.4 Workflow Overview
  3.2 System Setup
    3.2.1 Model Construction
    3.2.2 Automatic Skeleton Rigging
    3.2.3 Integrated Architecture
    3.2.4 Recording Standards
  3.3 Scoring Method
    3.3.1 Dynamic Time Warping
    3.3.2 Improved DTW Algorithm
    3.3.3 New Adaptive Algorithm
    3.3.4 Score Calculation
    3.3.5 Normalization Adjustment
  3.4 Visual Feedback
    3.4.1 Color Representation
    3.4.2 Skeleton Coloring
    3.4.3 Motion Path Coloring
    3.4.4 Per-Joint Score Display
    3.4.5 Error Line Chart
    3.4.6 Motion Video Playback
Chapter 4  Results
  4.1 System Interface
    4.1.1 Recording Interface
    4.1.2 Learning and Comparison Interface
  4.2 Model Normalization
  4.3 Color Feedback
    4.3.1 Error Rate Line Chart
    4.3.2 Score Display
  4.4 Model Angle Rotation
  4.5 Computation Frequency Setting
  4.6 Video Playback
Chapter 5  Experimental Results
  5.1 System Environment
  5.2 Motion Design
  5.3 Experimental Results
    5.3.1 Experiment 1
    5.3.2 Experiment 2
    5.3.3 Experiment 3
  5.4 Discussion
    5.4.1 Experiment 2 Score Discussion
    5.4.2 DTW Adjustment Coefficient
    5.4.3 Motion Correctness and Speed
    5.4.4 Application and Comparison of the Two Algorithms
Chapter 6  Conclusions and Future Work
References


Full text available from 2022/07/24 (campus network only).
The full text is not authorized for public release off campus or in the National Central Library (Taiwan NDLTD system).