| Field | Value |
|---|---|
| Author | 劉建余 (Chien-Yu Liu) |
| Thesis title | 以機械手臂搭配幾何模型分析及深度學習技術進行簡易零件之吸取 (Manipulator-based Grasping of Simple Parts Using Geometric Model Analysis and Deep Learning Techniques) |
| Advisor | 林清安 (Ching-An Lin) |
| Committee members | 李維楨 (Wei-Chen Lee), 郭俊良 (Chun-Liang Kuo) |
| Degree | Master |
| Department | College of Engineering, Department of Mechanical Engineering |
| Year of publication | 2020 |
| Academic year of graduation | 108 |
| Language | Chinese |
| Pages | 105 |
| Keywords (Chinese) | 自動化組裝, 機械手臂, 3D CAD, 深度學習, 影像處理 |
| Keywords (English) | Automatic assembly, Robot arm, 3D CAD, Deep learning, Image processing |
| Access statistics | Views: 544, Downloads: 0 |
[1] H.Y. Jang, H. Moradi, S. Hong, S. Lee and J. Han (2006), “Spatial Reasoning for Real-time Robotic Manipulation,” Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IROS), October 9-15, Beijing, China.
[2] 蔡仕晟 (2011), “Image-based Adaptive Grasping Design for an Anthropomorphic Robot Arm,” Master's thesis, Department of Electrical and Control Engineering, National Chiao Tung University, Taipei.
[3] J. Romero (2011), “From Human to Robot Grasping,” doctoral thesis, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
[4] C. Eppner, S. Höfer, R. Jonschkowski, R. Martín-Martín, A. Sieverling, V. Wall and O. Brock (2016), “Lessons from the Amazon Picking Challenge: Four Aspects of Building Robotic Systems,” in Proceedings of Robotics: Science and Systems (RSS).
[5] M.Y. Liu, O. Tuzel, A. Veeraraghavan, Y. Taguchi, T.K. Marks and R. Chellappa (2016), “Fast Object Localization and Pose Estimation in Heavy Clutter for Robotic Bin Picking,” International Journal of Robotics Research.
[6] H.W. Wang, Z.H. Zhang, J. Sun and G.J. Yu (2018), “Research and Application of Vision Intelligent Assembly Robot Based on HALCON Software,” in IEEE.
[7] D.T. Le, M. Andulkar, W. Zou, J.P. Städter and U. Berger (2016), “Self Adaptive System for Flexible Robot Assembly Operation,” in IEEE.
[8] A. Zeng, K.T. Yu, S. Song, D. Suo, E. Walker, A. Rodriguez and J. Xiao (2017), “Multi-view Self-supervised Deep Learning for 6D Pose Estimation in the Amazon Picking Challenge,” IEEE International Conference on Robotics and Automation (ICRA), Singapore, pp. 1383-1386.
[9] G.E. Pazienza, P. Giangrossi, S. Tortella, M. Balsi and X. Vilasis-Cardona (2005), “Tracking for a CNN Guided Robot,” Proceedings of the European Conference on Circuit Theory and Design, vol. 3, pp. III/77-III/80.
[10] M. Bertozzi and A. Broggi (1998), “GOLD: A Parallel Real-Time Stereo Vision System for Generic Obstacle and Lane Detection,” IEEE Transactions on Image Processing, vol. 7, no. 1, pp. 62-81.
[11] H. Kim, A. Roska, L.O. Chua and F. Werblin (2003), “Automatic Detection and Tracking of Moving Image Target with CNN-UM via Target Probability Fusion of Multiple Features,” International Journal of Circuit Theory and Applications, vol. 31, pp. 329-346.
[12] E. Martinson and V. Yalla (2016), “Real-time Human Detection for Robots Using CNN with a Feature-based Layered Pre-filter,” IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), New York, NY, pp. 1120-1125.
[13] X. Peng, B. Sun, K. Ali and K. Saenko (2015), “Learning Deep Object Detectors from 3D Models,” IEEE International Conference on Computer Vision (ICCV), Santiago, pp. 1278-1286.
[14] S. Ren, K. He, R. Girshick and J. Sun (2017), “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 6, pp. 1137-1149.
[15] E. Shelhamer, J. Long and T. Darrell (2017), “Fully Convolutional Networks for Semantic Segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 4, pp. 640-651.
[16] J. Redmon, S. Divvala, R. Girshick and A. Farhadi (2015), “You Only Look Once: Unified, Real-Time Object Detection,” arXiv preprint arXiv:1506.02640.