
Graduate Student: ALAA MOHAMED MAHMOUD AHMED
Thesis Title: Three-dimensional Object Reconstruction Implementation in Trajectory Following, Robot Navigation and Automatic Optical Inspection
Advisor: Chyi-Yeu Lin (林其禹)
Committee Members: 邱士軒, 郭重顯, 林沛群
Degree: Master
Department: College of Engineering - Department of Mechanical Engineering
Year of Publication: 2018
Graduation Academic Year: 106
Language: English
Number of Pages: 99
Keywords: 3D reconstruction, SLAM, Trajectory, Inspection, 3D scanning
Views: 313 / Downloads: 8


    The three-dimensional (3D) reconstruction of real objects is an important research area, required for inspection, navigation, medicine, reverse engineering, security, object recognition, visualization, and animation. It can be achieved with different methods and different sensors. In fact, choosing the optimal reconstruction method and sensor is not an easy task; it requires considerable experience and trial and error. In this work, three 3D reconstruction applications have been developed, each implemented with a different technique, sensor, and theory.

    The first application recognizes and tracks a marker on a pen using stereo vision with two calibrated 2D cameras. The center of the marker has been precisely detected and followed, and the algorithm's limitations and error models have been derived. In the future, this application could be used by attaching the 2D cameras to a robot to follow a human hand trajectory for remote processes.
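    As an illustration of the stereo-vision step, the following Python/OpenCV sketch locates a colored marker in each view and triangulates its 3D position from the matched pair of image points. This is a minimal sketch under stated assumptions, not the thesis's implementation: the projection matrices P1 and P2 are assumed to come from a prior stereo calibration, and the HSV color bounds are placeholder values.

# Minimal sketch: locate a colored marker in both views and triangulate
# its 3D position. P1, P2 are 3x4 projection matrices from a prior
# stereo calibration; the HSV bounds below are placeholders.
import cv2
import numpy as np

def marker_center(frame_bgr, hsv_lo=(35, 80, 80), hsv_hi=(85, 255, 255)):
    """Return the (x, y) centroid of the largest blob in the HSV range."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    m = cv2.moments(mask)
    if m["m00"] == 0:                    # marker not visible in this view
        return None
    return np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

def triangulate(P1, P2, pt_left, pt_right):
    """Triangulate one 3D point from matched pixel coordinates."""
    pl = np.asarray(pt_left, dtype=np.float64).reshape(2, 1)
    pr = np.asarray(pt_right, dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P1, P2, pl, pr)
    return (X_h[:3] / X_h[3]).ravel()    # homogeneous -> Euclidean

    With synchronized frames from the two cameras, marker_center() would be run on each view and the two centroids passed to triangulate(); repeating this per frame yields the marker's 3D trajectory.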

    The second application uses the KINECT® sensor to perform Simultaneous Localization and Mapping (SLAM). A graph-based SLAM suited to indoor scenarios has been developed. By moving a hand-held camera around the environment, features are detected, matched, and their 3D positions estimated. The generated 3D point clouds are then aligned and the loop is closed to construct a 3D map of the environment. This approach could be applied by a navigation robot to scan various indoor environments, including dangerous ones.
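    A minimal sketch of the frame-to-frame front end described above is given below, following the same pipeline shape: feature detection, k-NN matching with a ratio test, and back-projection of the matches to 3D through the depth map. ORB stands in here for SURF (which requires the opencv-contrib package), and the intrinsics fx, fy, cx, cy are placeholder values, not the thesis's KINECT calibration.

# Sketch of one frame-to-frame step: detect features, match them with
# k-NN plus Lowe's ratio test, and lift the matches to 3D using depth.
import cv2
import numpy as np

FX = FY = 525.0          # assumed focal lengths (pixels) -- placeholder
CX, CY = 319.5, 239.5    # assumed principal point -- placeholder

def backproject(pt, depth_map):
    """Back-project one pixel to a 3D point using its depth value (mm)."""
    u, v = int(pt[0]), int(pt[1])
    z = depth_map[v, u] / 1000.0         # millimetres -> metres
    if z == 0:                           # zero marks invalid depth
        return None
    return np.array([(u - CX) * z / FX, (v - CY) * z / FY, z])

def matched_3d_pairs(rgb_a, depth_a, rgb_b, depth_b):
    """Detect, match, and lift feature correspondences to 3D."""
    orb = cv2.ORB_create(1000)
    kp_a, des_a = orb.detectAndCompute(rgb_a, None)
    kp_b, des_b = orb.detectAndCompute(rgb_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = []
    for knn in matcher.knnMatch(des_a, des_b, k=2):
        if len(knn) < 2:
            continue
        m, n = knn
        if m.distance < 0.75 * n.distance:       # Lowe's ratio test
            pa = backproject(kp_a[m.queryIdx].pt, depth_a)
            pb = backproject(kp_b[m.trainIdx].pt, depth_b)
            if pa is not None and pb is not None:
                pairs.append((pa, pb))
    return pairs

    The resulting 3D correspondences would then feed RANSAC for an initial rigid estimate, ICP for refinement, and the loop-closure stage (ELCH), per the pipeline outlined in Chapter 4.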

    The third application reconstructs a 3D model of a server and renders 2D images of it at selected virtual camera positions. A real camera is then moved to the same positions to capture real images, and the system accuracy is assessed by comparing the real and virtual images. This procedure was automated to perform Automatic Optical Inspection (AOI) for a real industrial case. The main target is to inspect the assembly, labeling, and component placement on a manufactured server. Such an autonomous inspection system makes the inspection process faster, easier, and more cost-efficient.
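    As a sketch of the comparison step only, the fragment below differences a real photograph against the image rendered from the reconstructed model at the same camera pose and flags large discrepant regions as candidate defects. The threshold and minimum area are illustrative values, not the thesis's tuned parameters.

# Sketch: flag regions where the real photo disagrees with the image
# rendered from the 3D model at the same camera pose. Threshold and
# minimum area are illustrative, not tuned values.
import cv2

def flag_defects(real_bgr, rendered_bgr, thresh=40, min_area=200):
    real = cv2.cvtColor(real_bgr, cv2.COLOR_BGR2GRAY)
    virt = cv2.cvtColor(rendered_bgr, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(real, virt)               # per-pixel discrepancy
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    # morphological opening removes isolated noise pixels
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # return bounding boxes of regions large enough to matter
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]

    Each returned bounding box marks a place where a label, component, or assembly detail deviates from the reference model and should be reviewed.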

    These studies open the door for various robotic-vision applications in industrial fields. They also offer guidance on the choice of sensors and reconstruction methods for specific applications.

    CHAPTER 1: INTRODUCTION ………………… 1
    CHAPTER 2: LITERATURE REVIEW
    2.1. First application ………………… 13
    2.1.1. Camera calibration ………………… 14
    2.2. Second application ………………… 21
    2.2.2. SURF algorithm ………………… 22
    2.2.3. KNN algorithm ………………… 25
    2.2.4. RANSAC algorithm ………………… 26
    2.2.5. ICP algorithm ………………… 29
    2.2.6. ELCH algorithm ………………… 34
    2.2.7. KINECT camera ………………… 36
    2.3. Third application ………………… 38
    CHAPTER 3: MARKER DETECTION AND TRACKING USING STEREO VISION TECHNIQUE
    3.1. Algorithm Steps ………………… 41
    3.1.1. Camera calibration
    3.1.2. Marker detection
    3.1.3. Marker tracking
    3.2. Experiments ………………… 42
    3.2.1. Cameras
    3.2.2. Markers
    3.3. Results ………………… 43
    CHAPTER 4: SIMULTANEOUS LOCALIZATION AND MAPPING (SLAM) USING STRUCTURED LIGHT
    4.1. Algorithm Steps ………………… 53
    4.1.1. Features detection and matching
    4.1.2. 3D position estimation
    4.1.3. 3D clouds alignment
    4.1.4. Loop closure
    4.1.5. Different environments testing
    4.2. Experiments ………………… 55
    4.3. Results ………………… 55
    CHAPTER 5: INDUSTRIAL COMPONENTS INSPECTION USING LASER TRIANGULATION
    5.1. Algorithm Steps ………………… 62
    5.1.1. 3D scanning
    5.1.2. 3D rendering
    5.2. Experiments ………………… 64
    5.3. Results ………………… 65
    CHAPTER 6: CONCLUSIONS ………………… 76
    REFERENCES ………………… 78

