
Graduate student: Yan-Lin Chen (陳彥霖)
Thesis title: A Study of Robotic Drawing with Autonomous Mobile Manipulation (基於自主移動操作之機器人繪圖研究)
Advisors: Shun-Feng Su (蘇順豐), Chung-Hsien Kuo (郭重顯)
Committee members: Kai-Tai Song (宋開泰), Yu-Hsiu Lee (李宇修), Meng-Kun Liu (劉孟昆), Shun-Feng Su (蘇順豐), Chung-Hsien Kuo (郭重顯)
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Publication year: 2022
Graduation academic year: 110
Language: English
Pages: 85
Keywords: Stereo vision, AprilTags, drawing robot, continuous drawing
Views: 157; Downloads: 0


Abstract: In past research, robotic drawing has been studied extensively, but most systems fix the positions of both the robot base and the canvas. If the drawing position exceeds the workspace of the robot manipulator, or if the canvas is moved, drawing cannot continue; moreover, most systems draw only on a preset plane, so the environmental requirements are stringent. To solve these problems, this paper proposes a continuous drawing system for a mobile robot equipped with a robot manipulator, which gives the robot a non-fixed-point drawing capability and enlarges the effective manipulator workspace through the movement of the mobile base. The proposed continuous drawing system divides the image to be drawn into several parts matched to the manipulator workspace, uses a deep learning model to process line segments of different thicknesses into a uniform thickness, applies connected component labeling to classify the line segments and generate the drawing paths in sequence, and uses the AprilTags visual fiducial system to perform the basic positioning for the first part. After the first part is drawn, the mobile robot uses light detection and ranging (LiDAR) simultaneous localization and mapping (SLAM) to localize and navigate to the next drawing location. A connecting point search system, proposed in this paper, then generates candidate image connecting points and selects the best one. After the canvas or the automated guided vehicle (AGV) is moved, a ZED 2 stereo camera computes the camera coordinates of the chosen connecting point, which are transformed into arm coordinates through the geometric conversion between the camera and the arm, realizing continuous drawing. The robot drawing system uses the Robot Operating System (ROS) to develop and integrate the multiple subsystems.
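The labeling step described above, which groups line segments into separately drawable strokes before path generation, can be illustrated with a minimal sketch. This toy BFS flood fill on a small binary grid is an assumption for illustration only, not the thesis's implementation (the thesis may use a different labeling algorithm, and the grid, function names, and 4-connectivity choice are all invented here):

```python
from collections import deque

def label_components(grid):
    """Label 4-connected foreground pixels (1s) in a binary grid.

    Returns a dict mapping each label to its list of (row, col) pixels,
    so each stroke can later be turned into one drawing path.
    """
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    components = {}
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and labels[r][c] == 0:
                next_label += 1
                labels[r][c] = next_label
                queue = deque([(r, c)])
                pixels = []
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    # Visit the four edge-adjacent neighbours.
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] == 1
                                and labels[ny][nx] == 0):
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
                components[next_label] = pixels
    return components

# Two separate strokes in the grid yield two components.
grid = [
    [1, 1, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
]
print(len(label_components(grid)))  # 2
```

Once pixels are grouped per stroke, each component can be ordered into a pen path and the components drawn in sequence, which is the role this step plays in the pipeline.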
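The final step, converting a connecting point from camera coordinates to arm coordinates, is a standard rigid-body transform. The following is a minimal sketch under an assumed geometry; the yaw angle and offsets below are invented for illustration and are not the calibration values from the thesis:

```python
import math

def make_transform(yaw_deg, tx, ty, tz):
    """Build a 4x4 homogeneous camera-to-arm transform: a rotation of
    yaw_deg about the z axis followed by a translation (tx, ty, tz).
    All values here are illustrative, not calibrated."""
    c = math.cos(math.radians(yaw_deg))
    s = math.sin(math.radians(yaw_deg))
    return [
        [c, -s, 0.0, tx],
        [s,  c, 0.0, ty],
        [0.0, 0.0, 1.0, tz],
        [0.0, 0.0, 0.0, 1.0],
    ]

def transform_point(T, p):
    """Apply homogeneous transform T (row-major nested lists) to a
    3-D point p, returning the point expressed in the target frame."""
    x, y, z = p
    v = (x, y, z, 1.0)
    return tuple(sum(T[i][j] * v[j] for j in range(4)) for i in range(3))

# A connecting point 0.5 m in front of the camera, with the camera
# assumed to sit 0.2 m from the arm base and rotated 90 degrees about z.
T_cam_to_arm = make_transform(90.0, 0.2, 0.0, 0.1)
p_arm = transform_point(T_cam_to_arm, (0.5, 0.0, 0.0))  # ~ (0.2, 0.5, 0.1)
```

In the actual system the transform would come from hand-eye calibration between the ZED 2 camera and the manipulator; the sketch only shows the algebra of the conversion.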

Table of contents:
Advisor Recommendation Letter .......... I
Committee Approval Certificate .......... II
Acknowledgments .......... III
Abstract (Chinese) .......... IV
ABSTRACT .......... V
LIST OF TABLES .......... X
LIST OF FIGURES .......... XI
NOMENCLATURE .......... XIV
CHAPTER 1 INTRODUCTION .......... 1
1.1 BACKGROUND AND MOTIVATION .......... 1
1.2 LITERATURE REVIEW .......... 4
1.2.1 Related Studies on Drawing Robots .......... 4
1.2.2 Related Studies on Thinning Algorithms .......... 5
1.2.3 Related Studies on the ZED 2 Stereo Camera .......... 6
1.2.4 Related Studies on AprilTags .......... 6
1.2.5 Related Studies on Path Planning .......... 7
1.3 THESIS STRUCTURE .......... 8
CHAPTER 2 SYSTEM ARCHITECTURE AND RESEARCH METHODS .......... 9
2.1 SYSTEM ARCHITECTURE .......... 9
2.2 HARDWARE ARCHITECTURE .......... 10
2.2.1 Mobile Platform Hardware Design .......... 11
2.2.2 Manipulator Platform Hardware Introduction .......... 15
2.3 ROBOT OPERATING SYSTEM .......... 19
2.4 SIMULTANEOUS LOCALIZATION AND MAPPING TECHNOLOGY .......... 20
CHAPTER 3 IMAGE SPLIT SYSTEM .......... 22
3.1 IMAGE PREPROCESSING .......... 22
3.1.1 Sobel Edge Detection .......... 23
3.1.2 Line Width Normalization .......... 24
3.2 SEARCHING FOR CONNECTING POINTS IN IMAGE SPLIT SYSTEM .......... 26
3.3 IMAGE SPLIT SCORING .......... 27
CHAPTER 4 CONTINUOUS DRAWING SYSTEM .......... 29
4.1 PATH PLANNING .......... 29
4.2 APRILTAGS POSE ESTIMATION .......... 32
4.2.1 Detecting Line Segments .......... 34
4.2.2 Quad Detection .......... 34
4.2.3 Homography and Extrinsics Estimation .......... 35
4.2.4 ID Decoding .......... 36
4.3 STEREO VISION POSITIONING .......... 36
4.4 SEARCHING FOR CONNECTING POINTS IN ACTUAL DRAWING .......... 38
4.4.1 HSV Color Space .......... 39
4.4.2 Contour Extraction .......... 40
4.4.3 Angle Error Calculation .......... 40
4.4.4 Finding Actual Connecting Points .......... 41
4.4.5 Connecting Point Matching .......... 43
4.5 COORDINATE CONVERSION .......... 45
CHAPTER 5 EXPERIMENTAL RESULTS .......... 47
5.1 CONTINUOUS DRAWING BASED ON APRILTAGS .......... 48
5.2 CONTINUOUS DRAWING ACCURACY EXPERIMENT .......... 50
5.2.1 Translational Movement Experiment .......... 50
5.2.2 Rotation Experiment .......... 55
5.3 CONTINUOUS DRAWING EXPERIMENT .......... 60
5.4 CONTINUOUS DRAWING WITH MOBILE ROBOT EXPERIMENT .......... 66
CHAPTER 6 CONCLUSIONS AND FUTURE WORK .......... 67
6.1 CONCLUSION .......... 67
6.2 FUTURE WORK .......... 67
REFERENCES .......... 68


Full text available from 2032/08/17 (campus network)
Full text not authorized for public access (off-campus network)
Full text not authorized for public access (National Central Library: Taiwan NDLTD system)