
Graduate student: Qin-Xuan Hu (胡芩瑄)
Thesis title: Building a human activity recognition model with the supervised machine learning algorithm (透過監督式機器學習法建構動作分類模型)
Advisor: Chiu-Hsiang Lin (林久翔)
Committee members: Chiu-Hsiang Lin (林久翔), 李永輝, 吳淑楷
Degree: Master
Department: Department of Industrial Management, School of Management
Year of publication: 2021
Graduation academic year: 109 (ROC calendar, 2020–2021)
Language: Chinese
Number of pages: 79
Keywords: supervised machine learning (監督式機器學習), wearable sensors (穿戴式傳感器), human activity recognition (動作辨識)

In today's era of the Internet of Things (IoT) and big data, the integration of traditional networks with wearable sensors is one of the factors behind the rapid development of IoT technology. In traditional industrial process management, industrial engineers usually evaluate and improve process performance by direct observation. This method not only costs considerable labor and time; differences in each observer's subjective interpretation also produce inconsistent assessments, and visual error can further reduce accuracy. This study therefore designed three experiments (a basic-movement experiment, a carton assembly and packing task, and a wooden-box assembly task) to simulate the movements of shop-floor operators, used wearable sensors (XSENS) to collect motion data while participants performed the tasks, and fed the data into supervised machine learning algorithms to build classification models. The study also examined suitable data preprocessing methods, such as the effect of the observation time-window size and the normalization method on classification performance. The results show that time-window size and normalization method had no significant effect on model performance, and that Random Forest and Support Vector Machine classifiers accurately classified operators' movements and the corresponding tasks, demonstrating the feasibility of applying machine learning classification to recognizing operators' working movements.
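To make the preprocessing described above concrete, the sketch below splits a multi-channel sensor recording into fixed-size overlapping time windows and z-score normalizes each window. It is a minimal Python sketch under assumed parameters (a 2 s window at 60 Hz, 50% overlap, per-channel z-scores); the thesis itself compares several window sizes and normalization methods, and this is not its actual code.

```python
# Hypothetical illustration of time-window segmentation and normalization;
# the window size, overlap, and z-score choice are assumptions for this sketch.
import numpy as np

def sliding_windows(signal: np.ndarray, window_size: int, overlap: float = 0.5):
    """Yield fixed-length segments from a (samples, channels) recording."""
    step = max(1, int(window_size * (1.0 - overlap)))
    for start in range(0, len(signal) - window_size + 1, step):
        yield signal[start:start + window_size]

def zscore(window: np.ndarray) -> np.ndarray:
    """Per-channel z-score normalization of a single window."""
    return (window - window.mean(axis=0)) / (window.std(axis=0) + 1e-8)

# Example: 10 s of 60 Hz data from 6 sensor channels, 2 s windows, 50% overlap.
recording = np.random.randn(600, 6)
windows = [zscore(w) for w in sliding_windows(recording, window_size=120)]
print(len(windows), windows[0].shape)  # 9 windows of shape (120, 6)
```

Each window would then be reduced to a feature vector (for example, per-channel statistics) before being passed to a classifier.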


In the era of the Internet of Things (IoT) and big data, the integration of traditional networks and wearable sensors is one of the factors in the rapid development of IoT technology. Traditional industrial engineers usually use the observational method to evaluate and improve a manufacturing process. However, this method takes up a lot of human resources and time, each observer's subjective interpretation causes differences in evaluation, and visual errors reduce its accuracy. This study designed three types of experiments (basic movements, assembling and packing cartons, and assembling wooden boxes) to simulate the work of actual operators, and used XSENS wearable sensors to record their movements. The recorded data were input into supervised machine learning algorithms to establish classification models, and appropriate data preprocessing methods, such as the segmentation size of the time window and the normalization method, were examined for their effect on model performance. The results show that the segmentation size of the time window and the normalization method do not have a significant impact on the performance of the model, and that Random Forest and Support Vector Machine can accurately classify the activities and tasks designed in this study, which demonstrates the feasibility of applying machine learning classification methods to identifying operators' working movements.
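As a hedged illustration of the classifier comparison the abstract reports, the following scikit-learn sketch trains and cross-validates Random Forest and SVM models on simple window-level features. The synthetic data, the four-class labels, and the mean/standard-deviation features are placeholders for this example only; the thesis's actual features, data, and evaluation protocol are described in its Chapter 3.

```python
# Minimal sketch, under assumed data shapes, of comparing RF and SVM classifiers;
# the synthetic windows and labels stand in for the thesis's real sensor data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_windows = rng.standard_normal((200, 120, 6))  # 200 windows x 120 samples x 6 channels
y = rng.integers(0, 4, size=200)                # 4 hypothetical activity classes

# Simple per-window features: the mean and standard deviation of each channel.
X = np.hstack([X_windows.mean(axis=1), X_windows.std(axis=1)])

for name, model in [("Random Forest", RandomForestClassifier(n_estimators=100)),
                    ("SVM (RBF kernel)", SVC(kernel="rbf", C=1.0))]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```

On real, well-separated activity data both models can reach high accuracy, consistent with the feasibility result the abstract reports; on this random placeholder data the scores hover near chance (about 0.25).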

Table of contents:
Acknowledgements
Abstract (Chinese)
Abstract (English)
Table of Contents
List of Figures
List of Tables
Chapter 1 Introduction
1.1 Research background and motivation
1.2 Research objectives
Chapter 2 Literature Review
2.1 Current applications and devices for activity recognition
2.1.1 Applications and development of human activity recognition
2.1.2 Devices for capturing human movement data
2.2 The activity recognition development process
2.2.1 Time-window segmentation
2.2.2 Feature extraction
2.3 Classification algorithms
2.3.1 Naïve Bayes
2.3.2 Support Vector Machine (SVM)
2.3.3 Random Forest (RF)
2.3.4 Artificial Neural Network (ANN)
Chapter 3 Research Methods
3.1 Experimental design
3.1.1 Participants
3.1.2 Equipment
3.1.3 Experimental tasks and procedure
3.2 Data processing and analysis methods
3.2.1 Data preprocessing
3.2.2 Classification model training
3.2.3 Model performance evaluation
Chapter 4 Results
4.1 E1 basic-movement experiment: sub-movement classification
4.2 E2 carton assembly and packing: main-task and sub-task classification
4.2.1 E2 carton assembly and packing: main-task classification
4.2.2 E2T1 small-carton assembly: sub-task classification
4.2.3 E2T2 large-carton assembly: sub-task classification
4.2.4 E2T3 packing: sub-task classification
4.3 E3 wooden-box assembly: main-task and sub-task classification
4.3.1 E3 wooden-box assembly: main-task classification
4.3.2 E3T1 small wooden-box assembly: sub-task classification
4.3.3 E3T2 large wooden-box assembly: sub-task classification
Chapter 5 Discussion
5.1 Strengths and weaknesses of the preprocessing methods (normalization and time-window segmentation)
5.2 Strengths and weaknesses of the classification models
Chapter 6 Conclusions and Future Work
6.1 Conclusions
6.2 Limitations and future work
References
Appendix: Informed consent form


Full-text release date: 2024/06/25 (campus network)
Full text not authorized for public release (off-campus network)
Full text not authorized for public release (National Central Library: Taiwan NDLTD system)