
Graduate student: Chen-Yi Wu (吳展毅)
Thesis title: Study on Development of a Robotic System with Assistive Delivery Service (協助遞物服務型機器人系統開發研究)
Advisor: Yong-Lin Kuo (郭永麟)
Oral defense committee: Chen-Hsiung Yang (楊振雄), Tsung-Liang Wu (吳宗亮), Ker-Win Wang (王可文)
Degree: Master
Department: College of Engineering, Graduate Institute of Automation and Control
Year of publication: 2022
Academic year of graduation: 110 (2021-2022)
Language: Chinese
Number of pages: 150
Keywords: service robot, six-axis robot arm, Robot Operating System (ROS), voice recognition, hand recognition, object recognition
    With the development of the robotics industry, robots are no longer confined to factories; in the form of service robots, they have entered people's daily lives. Many service robots already exist, but most of them lack the ability to actually hand a target item to the user. The goal of this thesis is therefore to develop a service robot capable of delivering objects to users.
    To give the robot a complete delivery function, it receives a voice command to deliver an item; the robot arm first grips the target object with its gripper and then places the object in the user's hand. During this process, the robot not only recognizes the object to be gripped but also checks whether the palm pose allows the arm to hand over the object successfully. The robot therefore uses the Robot Operating System (ROS) to integrate voice processing, object and hand recognition, arm path planning, and arm control, covering the reception of voice commands, the synthesis of spoken feedback, object grasping, and placement into the hand.
    Because this study focuses on the delivery robot's hand-recognition function, only chess pieces, which are easy to grip, are used as grasp targets. The experimental results are as follows: the overall system success rate is 42%; voice recognition reaches 90%; recognition of the designated chess pieces also reaches 90%; an open palm is recognized 82% of the time; a palm facing up is correctly recognized 80% of the time; grasping succeeds 85% of the time on average; and hand-over succeeds 94% of the time on average. In the error-handling experiments, positions the arm cannot reach were correctly identified 100% of the time. In the open-palm tests, a fist was correctly recognized as a closed palm 90% of the time, but a hand with only three extended fingers was correctly recognized as not open only 60% of the time, and a hand held back-side up was correctly recognized as an invalid gesture 70% of the time. Future development should therefore first improve the recognition of closed palms and of the back of the hand.
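A quick consistency check (not from the thesis itself): assuming the stages act as independent serial steps, the reported 42% overall success rate matches the product of the per-stage rates listed above.

```python
# Per-stage success rates reported in the abstract.
stage_rates = {
    "voice recognition": 0.90,
    "object recognition": 0.90,
    "open-palm recognition": 0.82,
    "palm-up recognition": 0.80,
    "grasping": 0.85,
    "hand-over": 0.94,
}

# Product of the stage rates, assuming independent serial stages.
overall = 1.0
for rate in stage_rates.values():
    overall *= rate

print(round(overall * 100))  # -> 42
```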


    With the development of the robotics industry, robots have entered personal life instead of existing only in factories. Many kinds of service robots already operate in banks, restaurants, markets, offices, and homes. However, most of them cannot deliver objects to users with a robot arm. Therefore, this study aims to develop a service robot that can pick up objects and deliver them to users with a robot arm.
    To pick up and deliver objects, the service robot receives a user's voice command, picks up the target object, and finally places it on the user's hand. In this process, the robot recognizes both the target objects and the pose of the user's hand, which confirms that the pick-and-place process can be completed successfully. The Robot Operating System (ROS) is used to compose three functional subsystems: a voice processing system for voice commands and speech synthesis, an image processing system for object and hand recognition, and a control system for the robot arm.
    Because this study focuses on delivering objects into hands, chess pieces, which are easy to grip, are chosen as the target objects. The experimental results of the service robotic system are as follows: a 42% success rate for the overall system, 90% for voice command recognition, 90% for recognizing the chess pieces, 82% for open-palm recognition, 80% for palm-up recognition, 85% for picking up chess pieces, and 94% for delivering chess pieces to hands. In the experiments on handling erroneous situations, objects or hands that the arm cannot reach were recognized with 100% accuracy. In the open-palm tests, a fist was recognized as a closed palm with 90% accuracy, a hand with three extended fingers with only 60% accuracy, and the back of a hand with 70% accuracy. Improving the recognition of closed palms and of the back of the hand is thus a main direction for future research.
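The three subsystems described above cooperate as one serial pipeline: voice command, object recognition, hand check, pick, hand-over. A minimal sketch of that behavior flow (hypothetical function names and stub detectors; not the thesis code, which runs on ROS):

```python
# Illustrative sketch of the delivery pipeline the abstract describes.
# Each stage function is a stand-in for a real ROS subsystem.

def run_delivery(command, detect_object, detect_open_palm, pick, place):
    """Run the serial pipeline; stop and report the first failing stage."""
    if command != "deliver":          # voice-recognition stage
        return "rejected: unknown command"
    obj = detect_object()             # object-recognition stage (e.g. a chess piece)
    if obj is None:
        return "failed: object not found"
    if not detect_open_palm():        # hand/palm-pose stage
        return "failed: palm not open or not facing up"
    if not pick(obj):                 # grasping stage
        return "failed: grasp"
    if not place(obj):                # hand-over stage
        return "failed: hand-over"
    return "delivered"

# Example run with stub detectors standing in for the real subsystems.
result = run_delivery(
    "deliver",
    detect_object=lambda: "chess piece",
    detect_open_palm=lambda: True,
    pick=lambda o: True,
    place=lambda o: True,
)
```

Because every stage must succeed for a delivery, the structure also explains why the overall success rate is well below any single stage's rate.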

    Table of Contents
    Acknowledgments / Abstract (Chinese) / Abstract (English) / Table of Contents / List of Figures / List of Tables
    Chapter 1 Introduction
        1.1 Background; 1.2 Literature Review: 1.2.1 Service Robots, 1.2.2 Hand-Pose Recognition, 1.2.3 Voice-Command Recognition, 1.2.4 Six-Axis Robot Arms; 1.3 Motivation; 1.4 Methods; 1.5 Contributions; 1.6 Thesis Organization
    Chapter 2 Design of the Assistive Delivery Robot
        2.1 Overview: 2.1.1 Usage Scenarios, 2.1.2 Behavior Design, 2.1.3 Architecture; 2.2 Voice Processing System: 2.2.1 Mel-Frequency Cepstral Coefficients (MFCC), 2.2.2 Dynamic Time Warping, 2.2.3 Unit-Selection Synthesis, 2.2.4 Voice Processing Flow; 2.3 Image Processing System: 2.3.1 Hand-Pose Recognition, 2.3.2 Palm-Center Recognition, 2.3.3 Residual Neural Networks (ResNet), 2.3.4 Palm vs. Back-of-Hand Recognition, 2.3.5 Object Recognition, 2.3.6 Point-Cloud Coordinate Transformation, 2.3.7 Image Processing Flow; 2.4 Robot Arm Control System: 2.4.1 D-H Coordinate Transformation, 2.4.2 Kinematics, 2.4.3 Rapidly-Exploring Random Trees, 2.4.4 Arm Control Flow
    Chapter 3 Robot System Architecture
        3.1 Overall System Overview; 3.2 Hardware: 3.2.1 Depth Camera, 3.2.2 Voice Processing Platform, 3.2.3 Robot Arm and Controller, 3.2.4 Computing Platform; 3.3 Software Development Platforms: 3.3.1 Robot Operating System, 3.3.2 Generalplus Development Kit, 3.3.3 OpenCV, 3.3.4 Deep-Learning Libraries; 3.4 System Runtime Architecture: 3.4.1 Overview, 3.4.2 Voice Processing, 3.4.3 Image Recognition, 3.4.4 Robot Arm Control
    Chapter 4 Experiment Planning and Tests
        4.1 Voice Recognition Tests; 4.2 Hand Recognition Tests; 4.3 Palm-Center Position Tests; 4.4 Object Recognition Tests; 4.5 Arm Control Error Tests; 4.6 Arm Picking Tests; 4.7 Arm Delivery Tests
    Chapter 5 Experimental Results and Analysis
        5.1 Robot System Experiments; 5.2 Object-Position Error Handling; 5.3 Hand-Pose Error Handling; 5.4 Hand-Position Error Handling; 5.5 Discussion
    Chapter 6 Conclusions and Recommendations
        6.1 Conclusions; 6.2 Future Research Directions
    References
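Section 2.4.1 of the thesis covers D-H coordinate transformation for the six-axis arm. As a reference for that convention only (an illustrative sketch with hypothetical parameter values, not the thesis code), the standard Denavit-Hartenberg link transform can be written as:

```python
import math

def dh_transform(theta, d, a, alpha):
    """Standard D-H link transform as a 4x4 homogeneous matrix (nested lists).

    theta: joint angle about z, d: offset along z,
    a: link length along x, alpha: twist about x.
    """
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

# Example: a 90-degree joint rotation with hypothetical offsets d=0.1 m, a=0.2 m.
T = dh_transform(math.pi / 2, 0.1, 0.2, 0.0)
```

Chaining one such matrix per joint (multiplying the six link transforms in order) gives the forward kinematics of the arm.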

    [1] Y. Wang, Z. Li, C.Y. Su, "Multisensor-Based Navigation and Control of a Mobile Service Robot," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 51, no. 4, 2021, pp. 2624-2634.
    [2] D. Lee, G. Kang, B. Kim, "Assistive Delivery Robot Application for Real-World Postal Services," IEEE Access, vol. 9, 2021, pp. 141981-141998.
    [3] J. Mišeikis, P. Caroni, P. Duchamp, "Lio-A Personal Robot Assistant for Human-Robot Interaction and Care Applications," IEEE Robotics and Automation Letters, vol. 5, 2020, pp. 5339-5346.
    [4] P.D. Puente, D. Fischinger, M. Bajones, "Grasping Objects from the Floor in Assistive Robotics: Real World Implications and Lessons Learned," IEEE Access, vol. 7, 2019, pp. 123725-123735.
    [5] A. Bolotnikova, S. Courtois, A. Kheddar, "Multi-Contact Planning on Humans for Physical Assistance by Humanoid," IEEE Robotics and Automation Letters, vol. 5, 2020, pp. 135-142.
    [6] A.C. Dometios, Y. Zhou, X.S. Papageorgiou, "Vision-Based Online Adaptation of Motion Primitives to Dynamic Surfaces: Application to an Interactive Robotic Wiping Task," IEEE Robotics and Automation Letters, vol. 3, no. 3, 2018, pp. 1410-1417.
    [7] D. Strazdas, J. Hintz, A.M. Felßberg, A. Al-Hamadi, "Robots and Wizards: An Investigation Into Natural Human-Robot Interaction," IEEE Access, vol. 8, 2020, pp. 207635-207642.
    [8] M. A. V. J. Muthugala, A. G. B. P. Jayasekara, "A Review of Service Robots Coping With Uncertain Information in Natural Language Instructions," IEEE Access, vol. 6, 2018, pp. 12913-12928.
    [9] R. Augustauskas, A. Lipnickas, "Robust hand detection using arm segmentation from depth data and static palm gesture recognition," 2017 9th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS), 2017, pp. 664-667.
    [10] M. Al-Hammadi, G. Muhammad, W. Abdul, "Deep Learning-Based Approach for Sign Language Gesture Recognition with Efficient Hand Gesture Representation," IEEE Access, vol. 8, 2020, pp. 192527-192542.
    [11] D.K. Vishwakarma, R. Maheshwari, R. Kapoor, "An Efficient Approach for the Recognition of Hand Gestures from Very Low Resolution Images," 2015 Fifth International Conference on Communication Systems and Network Technologies, 2015, pp. 467-471.
    [12] W. Wu, Q. Wang, S. Yu, "Outside Box and Contactless Palm Vein Recognition Based on a Wavelet Denoising ResNet," IEEE Access, vol. 9, 2021, pp. 82471-82484.
    [13] A. L. Maas, A. Y. Hannun, A. Y. Ng, "Rectifier Nonlinearities Improve Neural Network Acoustic Models," Proceedings of the 30th International Conference on Machine Learning (ICML), 2013.
    [14] S. Liu, G. Tian, Y. Zhang, P. Duan, "Scene Recognition Mechanism for Service Robot Adapting Various Families: A CNN-Based Approach Using Multi-Type Cameras," IEEE Transactions on Multimedia, 2021, pp. 2392-2406.
    [15] A. Singh, R. Kabra, R. Kumar, "On-Device System for Device Directed Speech Detection for Improving Human Computer Interaction," IEEE Access, vol. 9, 2021, pp. 131758-131766.
    [16] D. Brščić, T. Ikeda, T. Kanda, "Do You Need Help? A Robot Providing Information to People Who Behave Atypically," IEEE Transactions on Robotics, 2017, pp. 500-506.
    [17] M. A. V. J. Muthugala, A. G. B. P. Jayasekara, "A Review of Service Robots Coping with Uncertain Information in Natural Language Instructions," IEEE Access, vol. 6, 2018, pp. 12913-12928.
    [18] M. A. V. J. Muthugala, A. G. B. P. Jayasekara, "Enhancing User Satisfaction by Adapting Robot's Perception of Uncertain Information Based on Environment and User Feedback," IEEE Access, vol. 5, 2017, pp. 26435-26447.
    [19] N. P. Novani, M. H. Hersyah, R. Hamdanu, "Electrical Household Appliances Control using Voice Command Based on Microcontroller," 2020 International Conference on Information Technology Systems and Innovation, 2020, pp. 288-293.
    [20] 劉承泰, "Design and Improvement of an Embedded Voice Command System" (in Chinese), Master's thesis, National Tsing Hua University, 2013, pp. 3-14.
    [21] R.M. Gurav, P. K. Kadbe, "Real time finger tracking and contour detection for gesture recognition using OpenCV," 2015 International Conference on Industrial Instrumentation and Control (ICIC), 2015, pp. 974-977.
    [22] R.K. Al-Halimi, M. Moussa, "Performing Complex Tasks by Users with Upper-Extremity Disabilities Using a 6-DOF Robotic Arm: A Study," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 25, no. 6, pp. 686-693.
    [23] R. S. Hartenberg, J. Denavit, "Kinematic Synthesis of Linkages," New York: McGraw-Hill, 1964, pp. 167-175.
    [24] A. Patil, M. Kulkarni, A. Aswale, "Analysis of the inverse kinematics for 5 DOF robot arm using D-H parameters," IEEE International Conference on Real-time Computing and Robotics, 2017, pp. 688-693.
    [25] A. Verma, V. A. Deshpande, "End-effector position analysis of SCORBOT-ER V plus robot," International Journal of Smart Home, vol. 5, 2011, pp. 1-6.
    [26] W. Zhang, C. Zhang, C. Li, H. Zhang, "Object color recognition and sorting robot based on OpenCV and machine vision," 2020 IEEE 11th International Conference on Mechanical and Intelligent Manufacturing Technologies (ICMIMT), 2020, pp. 125-129.
    [27] H. Chaudhary, R. Prasad, N. Sukavanum, "Position analysis based approach for trajectory tracking control of SCORBOT-ER V plus robot manipulator," International Journal of Advances in Engineering & Technology, vol. 3, 2012, pp. 253-264.
    [28] H. Chaudhary, R. Prasad, "Intelligent inverse kinematic control of SCORBOT-ER V plus robot manipulator," International Journal of Advances in Engineering & Technology, vol. 1, 2011, pp. 158-169.
    [29] L. E. Kavraki, P. Svestka, J. C. Latombe, M. H. Overmars, "Probabilistic roadmaps for path planning in high-dimensional configuration spaces," IEEE Transactions on Robotics and Automation, vol. 12, 1996, pp. 566-580.
    [30] L. G. D. O. Veras, F. L. L. Medeiros, L. N. F. Guimaraes, "Systematic literature review of sampling process in rapidly-exploring random trees," IEEE Access, vol. 7, 2019, pp. 50933-50953.
    [31] Md. Sahidullah, G. Saha, "Analysis and experimental evaluation of block-based transformation in MFCC computation for speaker recognition," Speech Communication, 2012, pp. 543-565.
    [32] K. Yu, J. Mason, J. Oglesby, "Speaker recognition using hidden Markov models, dynamic time warping and vector quantization," 1995, pp. 313-318.
    [33] A.S. Nikam, A.G. Ambekar, "Sign Language Recognition Using Image Based Hand Gesture Recognition Techniques," 2016 Online International Conference on Green Engineering and Technologies (IC-GET), 2016, pp. 1-5.
    [34] K. He, X. Zhang, S. Ren, J. Sun, "Deep Residual Learning for Image Recognition," arXiv preprint arXiv:1512.03385, 2015.
    [35] J.R.R. Uijlings, "Selective search for object recognition," International Journal of Computer Vision, 2013, pp. 154-171.
    [36] M.K. Hu, "Visual Pattern Recognition by Moment Invariants," IRE Transactions on Information Theory, vol. 8, 1962, pp. 179-187.
    [37] L. Wu, R. Crawford, J. Roberts, "An analytic approach to converting POE parameters into D-H parameters for serial-link robots," IEEE Robotics and Automation Letters, vol. 2, 2017, pp. 2174-2179.
    [38] Y.J. Heo, K.C. Wan, "RRT-based path planning with kinematic constraints of AUV in underwater structured environment," 2013 10th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), 2013, pp. 523-525.
    [39] M. Afifi, "11k Hands" dataset, https://sites.google.com/view/11khands.
