
Author: 韓淑娟 (Rini Handini)
Thesis Title: Workout Evaluation Via Pose Estimation And Object Detection
Advisor: 楊傳凱 (Chuan-Kai Yang)
Committee: 賴源正 (Yuan-Cheng Lai), 林伯慎 (Bor-Shen Lin)
Degree: Master (碩士)
Department: Department of Information Management, School of Management
Thesis Publication Year: 2023
Graduation Academic Year: 111
Language: English
Pages: 64
Keywords (in other languages): Keypoint Estimation, Workout Evaluation, Dumbbell Curl


The evaluation of exercise poses using Artificial Intelligence (AI) has become increasingly popular, especially in the context of home workouts. However, current applications primarily focus on evaluating body poses without considering the involvement of sports equipment. This study aims to explore the potential of integrating sports equipment data to determine the correctness of exercise poses and develop an AI model for workout evaluation.

The specific focus of this study is the dumbbell curl exercise. By integrating object detection of the dumbbell into the AI model, the study demonstrates that the model can learn and improve its evaluation accuracy. Thus, the inclusion of sports equipment data enhances the effectiveness of workout evaluation, providing users with more comprehensive feedback on their exercise performance.
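The fusion described above can be sketched in a few lines: per-frame pose keypoints and an equipment detection are concatenated into one feature vector, and the per-frame vectors are stacked into a sequence for a recurrent model. The feature layout below (33 MediaPipe-style landmarks plus one normalized YOLO-style bounding box) is an illustrative assumption, not the thesis's exact specification.

```python
import numpy as np

def build_frame_feature(keypoints, dumbbell_box):
    """Fuse one frame's pose and equipment data into a single vector.

    keypoints:    (33, 2) array of normalized (x, y) body landmarks,
                  e.g. as produced by a MediaPipe-style pose estimator.
    dumbbell_box: length-4 normalized [x_center, y_center, width, height],
                  e.g. as produced by a YOLO-style detector.
    """
    return np.concatenate([keypoints.reshape(-1), np.asarray(dumbbell_box)])

def build_sequence(frames):
    """Stack per-frame feature vectors into a (T, F) array for an LSTM."""
    return np.stack([build_frame_feature(kp, box) for kp, box in frames])

# Toy example: two frames, 33 keypoints and one dumbbell box each.
frames = [(np.zeros((33, 2)), [0.5, 0.5, 0.1, 0.1]) for _ in range(2)]
seq = build_sequence(frames)
print(seq.shape)  # (2, 70): 33*2 keypoint values + 4 box values per frame
```

A sequence model then consumes the whole (T, 70) array at once, so the evaluation can depend jointly on body motion and dumbbell trajectory rather than on pose alone.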

Overall, this study showcases the potential of considering sports equipment involvement in AI-based workout evaluation, paving the way for further advancements in this field and ultimately helping individuals who want to improve their exercise routines.

Recommendation Letter
Approval Letter
Abstract in English
Acknowledgements
Contents
List of Figures
List of Tables
1 Introduction
  1.1 Background
  1.2 Contribution
  1.3 Thesis Organization
2 Related Works
  2.1 Action Recognition
  2.2 Sport Exercise Recognition
  2.3 Human Pose Estimation
  2.4 Object Detection
3 Proposed System
  3.1 System Overview
    3.1.1 LSTM
  3.2 Dataset
    3.2.1 YOLO Dataset
    3.2.2 Video Dataset
  3.3 Dataset Processing
    3.3.1 Keypoint Input
    3.3.2 Object Detection Input
    3.3.3 Combining Input Data
    3.3.4 Training YOLOv5 Dataset
    3.3.5 Training Video Dataset
4 Experiments & Results
  4.1 Experimental Results
  4.2 Limitations
5 Conclusion and Discussion
  5.1 Conclusion
  5.2 Future Works
Reference


Full text public date 2025/07/24 (Intranet public)
Full text public date 2025/07/24 (Internet public)
Full text public date 2025/07/24 (National library)