
Graduate Student: Shang-Yu Lin (林尚諭)
Thesis Title: Image Servo Control of a Robotic Arm by Using Two-stream Convolutional Neural Networks (雙通道卷積神經網路之機械臂影像伺服控制方法)
Advisor: Ching-Long Shih (施慶隆)
Committee Members: Chih-Lyang Hwang (黃志良), Wen-Yo Lee (李文猶), Hsiu-Ming Wu (吳修明)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Electrical Engineering
Year of Publication: 2021
Academic Year of Graduation: 109 (2020-2021)
Language: Chinese
Number of Pages: 76
Keywords (Chinese): robotic arm, image servo control, machine learning, convolutional neural network, object localization
Keywords (English): Robotic arm, Image Servo Control, Machine Learning, Convolutional Neural Network, Object Localization
Views: 271; Downloads: 0
  • This thesis aims to realize servo control of a robotic arm by using the arm's camera images together with convolutional neural networks. It exploits the ability of convolutional neural networks to extract image features and map this nonlinear correspondence to coordinates in the workspace, and on that basis builds a two-stream control network that achieves image-based servo control of the arm. The training set is collected by fixing the target object on the working platform, driving the arm to different poses to capture images, and recording the arm's six degree-of-freedom parameters as training samples. The two-stream control network then takes the image of the current pose and a specified desired image as its inputs; the pre-trained network predicts the six degree-of-freedom parameters used to update the current pose, and the image of the updated pose is repeatedly fed back into the network, gradually refining the pose until the current image matches the specified desired image. In this way, image servo control of the robotic arm is realized through convolutional neural networks.
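The data-collection procedure described above (fix the target, drive the arm to varied poses, record each image with its six degree-of-freedom parameters) can be sketched roughly as follows. `capture_image`, the pose ranges, and the sample count are hypothetical stand-ins for illustration only, not the thesis's actual interfaces.

```python
import random

def capture_image(pose):
    """Stand-in for the RGB-D camera: derive a placeholder 'image'
    (a small feature vector) from the arm's pose."""
    return [round(p * 0.1, 4) for p in pose]

def collect_training_set(n_samples, workspace_ranges):
    """Visit random poses inside the workspace and pair each captured
    image with the six recorded degree-of-freedom parameters."""
    samples = []
    for _ in range(n_samples):
        pose = [random.uniform(lo, hi) for lo, hi in workspace_ranges]  # 6 DOF
        samples.append({"image": capture_image(pose), "pose": pose})
    return samples

# Assumed ranges: x, y, z translations plus roll, pitch, yaw rotations.
ranges = [(-100.0, 100.0)] * 3 + [(-30.0, 30.0)] * 3
dataset = collect_training_set(200, ranges)
print(len(dataset), len(dataset[0]["pose"]))
```

Each stored pair plays the role of one training sample for the two-stream network described in the abstract.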


    The purpose of this thesis is to achieve servo control of a robotic arm by applying convolutional neural networks to images from the camera. A convolutional neural network excels at extracting features from images and, through nonlinear computation, mapping them to relative coordinates in the workspace. Building on this strength, we construct two-stream convolutional neural networks that achieve servo control driven directly by image feedback. The training set is collected with a target fixed on the working platform while the robotic arm captures images from different poses; the six degree-of-freedom parameters of the arm are recorded as training samples after each trial. Through the two-stream convolutional neural networks, the system compares the image of the current pose with the image of the desired pose and uses the trained networks to predict the six-DOF movement of the robotic arm from the current pose. By continually receiving image feedback and correcting the error, the system reaches the desired position, thereby realizing servo control of a robotic arm from images via convolutional neural networks.
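The closed-loop scheme in the abstract (compare current and desired images, predict a six-DOF correction, update the pose, repeat) can be sketched as a minimal feedback loop. Here the trained two-stream network is replaced by a hypothetical `predict_delta` stand-in, and `image_of` is a toy camera model; both are illustrative assumptions, not the thesis's implementation.

```python
def image_of(pose):
    """Toy camera model: the 'image' is a simple function of the pose."""
    return [2.0 * p + 1.0 for p in pose]

def predict_delta(current_img, desired_img, gain=0.5):
    """Two-stream stand-in: map the (current, desired) image pair to a
    damped 6-DOF pose correction, as the trained network would."""
    return [gain * (d - c) / 2.0 for c, d in zip(current_img, desired_img)]

def servo_loop(pose, desired_pose, steps=50):
    """Iteratively feed the current image back and update the pose until
    it converges toward the pose that produces the desired image."""
    desired_img = image_of(desired_pose)
    for _ in range(steps):
        delta = predict_delta(image_of(pose), desired_img)
        pose = [p + d for p, d in zip(pose, delta)]  # apply predicted 6-DOF update
    return pose

start = [10.0, -5.0, 3.0, 1.0, 0.0, -2.0]
goal = [0.0] * 6
final = servo_loop(start, goal)
print(max(abs(p) for p in final))
```

With this contrived linear stand-in the pose error halves each iteration, which mirrors the abstract's description of gradually updating the pose until the current image matches the desired one.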

    Abstract (Chinese) I
    Abstract II
    Acknowledgments III
    Table of Contents IV
    List of Figures VI
    List of Tables VIII
    Chapter 1 Introduction 1
      1.1 Motivation and Objectives 1
      1.2 Literature Review 1
      1.3 Thesis Outline 4
    Chapter 2 System Architecture and Control Flow 5
      2.1 System Architecture 5
      2.2 Hardware Overview 6
        2.2.1 RT605-710-GB Six-Axis Articulated Robotic Arm 6
        2.2.2 RealSense SR300 RGB-D Depth Camera 9
      2.3 Robotic Arm Control Flow 10
    Chapter 3 Two-Stream Control Network Based on Convolutional Neural Networks 12
      3.1 Two-Stream Control Network 12
      3.2 Control Principle 13
      3.3 Artificial Neural Networks 14
      3.4 Convolutional Neural Networks 15
        3.4.1 Network Architecture 16
        3.4.2 Convolutional Layers 17
        3.4.3 Activation Functions 18
        3.4.4 Pooling Layers 19
        3.4.5 Flatten and Fully Connected Layers 20
        3.4.6 Loss Function 21
        3.4.7 Training Parameters and Methods 21
      3.5 Machine Learning Workflow 23
    Chapter 4 Preprocessing of Learned Objects and Training Data Collection 24
      4.1 Work Task 24
        4.1.1 Recognition System Flow 26
        4.1.2 Software Development Flow 26
      4.2 Training Data Collection for Learned Objects 27
        4.2.1 Training Data Collection Procedure 27
        4.2.2 Training Data Set Range 28
        4.2.3 Training Data Categories for a Single Object 28
      4.3 Data Preprocessing 31
        4.3.1 Data Scaling and Normalization 31
        4.3.2 Data Labeling 31
    Chapter 5 Experimental Results and Discussion 33
      5.1 Convolutional Neural Network Training Experiments 34
      5.2 4-DOF Model Prediction Experiments on the Actual Arm 37
      5.3 4-DOF Model Prediction Experiments with Untrained Objects 45
      5.4 2-DOF plus 4-DOF Model Prediction Experiments on the Actual Arm 53
      5.5 Discussion of Experiments 63
    Chapter 6 Conclusions and Suggestions 64
      6.1 Conclusions 64
      6.2 Suggestions 65
    References 66


    Full-Text Release Date: 2023/08/05 (campus network)
    Full-Text Release Date: 2025/08/05 (off-campus network)
    Full-Text Release Date: 2025/08/05 (National Central Library: NDLTD Taiwan)