| Graduate Student: | 林尚諭 Shang-Yu Lin |
|---|---|
| Thesis Title: | 雙通道卷積神經網路之機械臂影像伺服控制方法 (Image Servo Control of a Robotic Arm by Using Two-stream Convolutional Neural Networks) |
| Advisor: | 施慶隆 Ching-Long Shih |
| Committee Members: | 黃志良 Chih-Lyang Hwang, 李文猶 Wen-Yo Lee, 吳修明 Hsiu-Ming Wu |
| Degree: | Master |
| Department: | 電機工程系 Department of Electrical Engineering, 電資學院 College of Electrical Engineering and Computer Science |
| Year of Publication: | 2021 |
| Academic Year: | 109 (ROC calendar) |
| Language: | Chinese |
| Pages: | 76 |
| Keywords (Chinese): | 機械臂, 影像伺服控制, 機器學習, 卷積神經網路, 物體定位 |
| Keywords (English): | Robotic arm, Image Servo Control, Machine Learning, Convolutional Neural Network, Object Localization |
Abstract: The purpose of this thesis is to achieve servo control of a robotic arm by applying convolutional neural networks to images from its camera. A convolutional neural network can extract features from an image and map them, through a non-linear transformation, to coordinates in the workspace. Exploiting this property, we build a two-stream convolutional neural network that achieves servo control directly from image feedback. The training set is collected by fixing a target on the working platform and driving the robotic arm to capture images from different poses; the six degree-of-freedom (DOF) parameters of the arm are recorded as the training sample for each shot. At run time, the two-stream network takes the image of the current pose and the image of the desired pose as its two inputs, and the pre-trained network predicts a six-DOF update that moves the arm from its current pose. The image captured at the updated pose is fed back into the network, and the pose is refined step by step until the current image matches the desired image. In this way, image-based servo control of the robotic arm is realized with convolutional neural networks.
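The closed-loop procedure described above can be sketched as follows. This is a minimal illustrative sketch, not the thesis implementation: `features` is a placeholder (average pooling) standing in for the trained convolutional streams, the weights are random rather than learned, and the names `TwoStreamNet`, `servo_loop`, `render`, and `gain` are all assumptions introduced for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(img):
    # Placeholder for a CNN backbone: 4x4 average pooling + flatten.
    # In the thesis, a trained convolutional stream extracts these features.
    h, w = img.shape
    return img.reshape(h // 4, 4, w // 4, 4).mean(axis=(1, 3)).ravel()

class TwoStreamNet:
    """Two streams (current image, desired image) -> 6-DOF pose correction."""

    def __init__(self, feat_dim, hidden=32):
        # Randomly initialized weights; the real network would be trained on
        # image pairs labeled with the recorded 6-DOF arm parameters.
        self.W1 = rng.normal(0.0, 0.05, (2 * feat_dim, hidden))
        self.W2 = rng.normal(0.0, 0.05, (hidden, 6))

    def predict(self, img_now, img_goal):
        # Concatenate the two feature streams, then regress a pose update:
        # [dx, dy, dz, droll, dpitch, dyaw].
        x = np.concatenate([features(img_now), features(img_goal)])
        return np.tanh(x @ self.W1) @ self.W2

def servo_loop(net, render, pose, img_goal, iters=50, gain=0.5):
    # Iteratively capture the current image, predict a correction, and
    # update the pose, as in the feedback loop the abstract describes.
    for _ in range(iters):
        img_now = render(pose)            # camera image at the current pose
        pose = pose + gain * net.predict(img_now, img_goal)
    return pose
```

With random weights the loop will not actually converge to the goal; the sketch only shows the data flow, in which the network replaces the explicit camera calibration and pose estimation of classical visual servoing.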