
Student: 劉建余 (Chien-Yu Liu)
Thesis title: 以機械手臂搭配幾何模型分析及深度學習技術進行簡易零件之吸取 (Manipulator-based Grasping of Simple Parts Using Geometric Model Analysis and Deep Learning Techniques)
Advisor: 林清安 (Ching-An Lin)
Committee members: 李維楨 (Wei-Chen Lee), 郭俊良 (Chun-Liang Kuo)
Degree: Master
Department: Department of Mechanical Engineering, College of Engineering
Year of publication: 2020
Academic year of graduation: 108 (ROC calendar, 2019-2020)
Language: Chinese
Pages: 105
Chinese keywords: 自動化組裝, 機械手臂, 3D CAD, 深度學習, 影像處理
English keywords: Automatic assembly, Robot arm, 3D CAD, Deep learning, Image processing

Table of Contents:
    Abstract (Chinese and English)
    Acknowledgements
    Table of Contents
    List of Figures
    Chapter 1: Introduction
        1.1 Research Motivation and Objectives
        1.2 Research Methods
        1.3 Literature Review
        1.4 Thesis Organization
    Chapter 2: Deriving Part Suction Points from 3D CAD Models
        2.1 Basic Method of Locating Part Suction Points with Machine Vision
        2.2 Surface Filtering
        2.3 Analyzing Suitable Suction Points with the Ray-Casting Principle
            2.3.1 Distributing Reference Points on the Filtered Planes
            2.3.2 Deriving Suction Points with the Ray Method
            2.3.3 Selecting the Final Suction Point by the Part's Center of Gravity
            2.3.4 Exporting the Suction-Point Data
            2.3.5 Validating the Analysis Program
            2.3.6 Cases Not Handled by This Thesis
    Chapter 3: Deriving Parts' Actual Suction Points with Deep Learning and Image Processing
        3.1 Recognizing Part Names with Deep Learning
            3.1.1 Preparing the Training Images
            3.1.2 Training
            3.1.3 Outputting the Part-Name Recognition Results
        3.2 Deriving Parts' Actual Suction Points with Image Processing
            3.2.1 Binarizing the Images
            3.2.2 Finding Each Part's Minimum Bounding Rectangle and Center Point
            3.2.3 Matching Part Names via Deep Learning
            3.2.4 Finding Each Part's Rotation Angle
            3.2.5 Orientation Discrimination
            3.2.6 Deriving Each Part's Actual Suction Point
    Chapter 4: System Development
        4.1 System Operation Planning
        4.2 Hardware Architecture
            4.2.1 EPSON Robot Arm
            4.2.2 Robot Arm Suction Nozzle
            4.2.3 Industrial Camera
        4.3 System Environment and Software Development Tools
            4.3.1 System Environment
            4.3.2 Introduction to Creo Toolkit
            4.3.3 Introduction to the OpenCV Library
            4.3.4 Introduction to the EPSON Robot API
            4.3.5 Introduction to the Basler API
            4.3.6 Introduction to the Anaconda Python Environment
    Chapter 5: Case Verification
        5.1 Preliminary Work
            5.1.1 Robot Arm and Image Coordinate Transformation
            5.1.2 Creo and Robot Arm Coordinate System Transformation
        5.2 Obtaining the Assembly Path
        5.3 Obtaining the Part Suction Points
        5.4 Assembly Execution by the Robot Arm
    Chapter 6: Conclusions and Future Research Directions
        6.1 Conclusions
        6.2 Future Research Directions
    References
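The image-processing steps named in sections 3.2.1-3.2.4 (binarization, minimum bounding rectangle, center point, rotation angle) map directly onto standard OpenCV calls, and the thesis lists the OpenCV Library among its tools (section 4.3.3). The following is a minimal sketch of those steps, not the thesis's actual program; the file name "parts.png" and the threshold value 127 are illustrative assumptions.

```python
import cv2

# Hypothetical camera frame of loose parts on the work surface.
image = cv2.imread("parts.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# 3.2.1: binarize the image (the threshold value 127 is an assumed placeholder).
_, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

# 3.2.2 / 3.2.4: find each part's contour, then its minimum-area bounding
# rectangle, which yields the center point and rotation angle at once.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for contour in contours:
    (cx, cy), (w, h), angle = cv2.minAreaRect(contour)
    print(f"center=({cx:.1f}, {cy:.1f}), size=({w:.1f} x {h:.1f}), angle={angle:.1f} deg")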
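Section 5.1.1 concerns the transformation between image coordinates and robot-arm coordinates. The thesis's actual calibration procedure is not given in this record; one common realization, sketched here under that assumption, is to fit a 2D affine transform from the pixel positions of calibration markers to their measured positions in the robot's base frame. All point values below are invented for illustration.

```python
import numpy as np
import cv2

# Pixel coordinates of four calibration markers in the camera image, and the
# corresponding robot-base coordinates (mm) measured by jogging the arm to them.
pixel_pts = np.array([[102, 88], [840, 95], [835, 610], [110, 600]], dtype=np.float64)
robot_pts = np.array([[250.0, -120.0], [252.0, 121.0], [431.0, 118.0], [429.0, -119.0]],
                     dtype=np.float64)

# Least-squares 2x3 affine matrix mapping image points to robot points.
M, _ = cv2.estimateAffine2D(pixel_pts, robot_pts)

def pixel_to_robot(u: float, v: float) -> tuple[float, float]:
    """Map an image point (u, v) to robot-base coordinates (x, y)."""
    x, y = M @ np.array([u, v, 1.0])
    return x, y

# Example: convert a detected suction point to a robot target position.
print(pixel_to_robot(470.0, 350.0))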

    [1] H.Y. Jang, H. Moradi, S. Hong, S. Lee and J. Han (2006), “Spatial Reasoning for Real-time Robotic Manipulation,” Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 9-15, Beijing, China.
    [2] 蔡仕晟 (2011), “擬人機械手臂基於影像之適應性抓取設計” [Image-based Adaptive Grasping Design for an Anthropomorphic Robotic Arm], Master's thesis, Department of Electrical and Control Engineering, National Chiao Tung University, Hsinchu, Taiwan.
    [3] J. Romero (2011), “From Human to Robot Grasping,” Doctoral thesis, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    [4] C. Eppner, S. Höfer, R. Jonschkowski, R. Martín-Martín, A. Sieverling, V. Wall and O. Brock (2016), “Lessons from the Amazon Picking Challenge: Four Aspects of Building Robotic Systems,” Proceedings of Robotics: Science and Systems (RSS).
    [5] M.Y. Liu, O. Tuzel, A. Veeraraghavan, Y. Taguchi, T.K. Marks and R. Chellappa (2016), “Fast Object Localization and Pose Estimation in Heavy Clutter for Robotic Bin Picking,” International Journal of Robotics Research.
    [6] H.W. Wang, Z.H. Zhang, J. Sun and G.J. Yu (2018), “Research and Application of Vision Intelligent Assembly Robot Based on HALCON Software,” IEEE.
    [7] D.T. Le, M. Andulkar, W. Zou, J.P. Städter and U. Berger (2016), “Self Adaptive System for Flexible Robot Assembly Operation,” IEEE.
    [8] A. Zeng, K.T. Yu, S. Song, D. Suo, E. Walker, A. Rodriguez and J. Xiao (2017), “Multi-view Self-supervised Deep Learning for 6D Pose Estimation in the Amazon Picking Challenge,” Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Singapore, pp. 1383-1386.
    [9] G.E. Pazienza, P. Giangrossi, S. Tortella, M. Balsi and X. Vilasis-Cardona (2005), “Tracking for a CNN Guided Robot,” Proceedings of the European Conference on Circuit Theory and Design, vol. 3, pp. III/77-III/80.
    [10] M. Bertozzi and A. Broggi (1998), “GOLD: A Parallel Real-Time Stereo Vision System for Generic Obstacle and Lane Detection,” IEEE Transactions on Image Processing, vol. 7, no. 1, pp. 62-81.
    [11] H. Kim, A. Roska, L.O. Chua and F. Werblin (2003), “Automatic Detection and Tracking of Moving Image Target with CNN-UM via Target Probability Fusion of Multiple Features,” International Journal of Circuit Theory and Applications, vol. 31, pp. 329-346.
    [12] E. Martinson and V. Yalla (2016), “Real-time Human Detection for Robots Using CNN with a Feature-based Layered Pre-filter,” IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), New York, NY, pp. 1120-1125.
    [13] X. Peng, B. Sun, K. Ali and K. Saenko (2015), “Learning Deep Object Detectors from 3D Models,” IEEE International Conference on Computer Vision (ICCV), Santiago, pp. 1278-1286.
    [14] S. Ren, K. He, R. Girshick and J. Sun (2017), “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 6, pp. 1137-1149.
    [15] E. Shelhamer, J. Long and T. Darrell (2017), “Fully Convolutional Networks for Semantic Segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 4, pp. 640-651.
    [16] J. Redmon, S. Divvala, R. Girshick and A. Farhadi (2015), “You Only Look Once: Unified, Real-Time Object Detection,” arXiv preprint arXiv:1506.02640.

    Full-text release date: 2025/01/08 (campus network)
    Full text not authorized for public release (off-campus network)
    Full text not authorized for public release (National Central Library: Taiwan NDLTD system)