
Author: Yohan Prakoso
Thesis Title: Hand Gesture Based Remote Control of Mobile Robot and Arm Manipulator
Advisor: Ching-Long Shih (施慶隆)
Committee Members: Chih-Lyang Hwang (黃志良), Wen-Yo Lee (李文猶)
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Publication Year: 2020
Graduation Academic Year: 108
Language: English
Number of Pages: 101
Keywords: human machine interface, vision based controller, hand gesture recognition, omni mobile robot, robot arm
Access Count: Views: 304, Downloads: 10
    The application of robots has become ubiquitous today. Some robots are controlled automatically, while others are controlled manually. This thesis implements a manual human-machine interface based on a webcam, avoiding the friction of mechanical devices and the attached cables of wearable devices, both of which reduce the user's dexterity of movement. The method works by extracting information from hand grasping and pointing gestures: the grasping gesture controls the position of the robot arm end effector, while the pointing gesture controls the speed of the omni mobile robot. Extracting this information from the hand gestures involves thresholding, morphology filtering, contour finding, maximum inscribed circle finding, polygon contour approximation, convexity defect finding, and minimum enclosing circle finding. The experimental results show that the method accurately detects the positions of the hand and the fingertips. The extracted information is successfully processed into commands that control the movement of the arm-equipped mobile robot and enable it to pick up and move objects.
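
    The pipeline outlined in the abstract can be prototyped with standard OpenCV functions. The Python sketch below illustrates one plausible arrangement of those steps; the YCrCb skin-color thresholds, morphology kernel size, sampling stride, and approximation epsilon are illustrative assumptions, not the values used in the thesis.

    import cv2
    import numpy as np

    def extract_hand_features(frame_bgr):
        # 1. Skin-color thresholding (assumed YCrCb range; tune for lighting).
        ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
        mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

        # 2. Morphology filtering to suppress noise and fill small holes.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

        # 3. Contour finding; keep the largest contour as the hand (OpenCV 4 API).
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        hand = max(contours, key=cv2.contourArea)

        # 4. Maximum inscribed circle: the interior point with the largest signed
        #    distance to the contour approximates the palm center and radius.
        dist = np.zeros(mask.shape, np.float32)
        for y in range(0, mask.shape[0], 4):            # coarse grid for speed
            for x in range(0, mask.shape[1], 4):
                dist[y, x] = cv2.pointPolygonTest(hand, (float(x), float(y)), True)
        _, palm_radius, _, palm_center = cv2.minMaxLoc(dist)

        # 5. Polygon contour approximation to simplify the hand outline.
        poly = cv2.approxPolyDP(hand, 0.01 * cv2.arcLength(hand, True), True)

        # 6. Convexity defects between the convex hull and the contour locate the
        #    valleys between fingers; hull points far from the palm center are
        #    fingertip candidates.
        hull_idx = cv2.convexHull(hand, returnPoints=False)
        defects = cv2.convexityDefects(hand, hull_idx)

        # 7. Minimum enclosing circle of the hand contour, usable as a scale cue.
        (cx, cy), enclosing_radius = cv2.minEnclosingCircle(hand)

        return palm_center, palm_radius, poly, defects, (cx, cy), enclosing_radius

    From features like these, a grip level could, for example, be estimated from the distance between detected fingertips relative to the palm radius, and a finger angle from a fingertip's position relative to the palm center, before being mapped to arm and mobile-robot commands.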



    ABSTRACT I
    ACKNOWLEDGMENT II
    NOMENCLATURE III
    TABLE OF CONTENTS VI
    LIST OF FIGURES VIII
    LIST OF TABLES XII
    CHAPTER 1 1
    1.1 Research background 1
    1.2 Organization of this thesis 2
    CHAPTER 2 4
    2.1 System block diagram 4
    2.2 Omni mobile robot 5
    2.3 Robot arm design 8
    2.4 Full robot assembly 10
    CHAPTER 3 12
    3.1 Webcam placement 13
    3.2 Skin color segmentation 15
    3.3 Hand feature extraction 20
    3.4 Grip level estimation 29
    3.5 Finger angle 31
    3.6 Motion command extraction 32
    CHAPTER 4 36
    4.1 Overview 36
    4.2 Threading system 36
    4.2.1 Vision thread 37
    4.2.2 Trajectory thread 38
    4.2.3 Serial thread 43
    4.2.4 Main thread 45
    4.3 Arm kinematic system 47
    4.4 Omni-mobile robot kinematic system 50
    4.5 Position control of omni-wheeled mobile robot 52
    CHAPTER 5 53
    5.1 Arduino setting 53
    5.1.1 Motor servo PWM setting 53
    5.1.2 Motor stepper pulse setting 56
    5.1.3 Motor stepper position estimation setting 60
    5.2 Streaming system 61
    5.3 Power system of the robot 64
    5.4 Hand feature extraction result 65
    5.4.1 Maximum inscribed circle and fingertips candidate features result 65
    5.4.2 Gripper level extraction process result 66
    5.4.3 The finger angle experimental result 68
    5.4.4 Z position extraction process results 69
    5.5 Mobile robot motion integration and demonstration 71
    CHAPTER 6 81
    6.1 Conclusion 81
    6.2 Future work 82
    REFERENCES 83

