
Graduate Student: Joel Vidal Verdaguer
Thesis Title: Recognition and 6D Pose Estimation of Partially Occluded 3D Objects in Cluttered Scenes (雜亂場景中部分遮擋三維物體的辨識和六維姿態估計)
Advisor: Chyi-Yeu Lin (林其禹)
Oral Examination Committee: Gee-Sern Jison Hsu (徐繼聖), Chung-Hsien Kuo (郭重顯), Hsien-I Lin (林顯易), Wen-June Wang (王文俊), Chyi-Yeu Lin (林其禹)
Degree: Doctor
Department: College of Engineering, Department of Mechanical Engineering
Publication Year: 2019
Graduation Academic Year: 107
Language: English
Pages: 99
Keywords: 3D Object Recognition, 6D Pose Estimation, 3D Computer Vision, Scene Understanding, Point Pair Features
Access Count: 251 views, 26 downloads


    Object recognition and pose estimation is a crucial task towards scene understanding and highly efficient, flexible and reliable autonomous systems. Traditionally, most research efforts in object recognition have focused on the detection and classification of objects in two-dimensional images, including scenarios with clutter, occlusion and varying illumination. Despite reaching a high level of robustness, especially with machine learning approaches, these methods face the problem from a 2D point of view rather than recovering the precise rotation and position of objects in 3D space. In this context, methods based on three-dimensional scene data appeared as the first solutions to robustly solve this 6D pose estimation problem across different complex scenarios, showing a promising level of performance and the best results so far. However, the problem has not yet been solved: highly cluttered scenes and occlusions remain challenging cases for state-of-the-art methods. This thesis proposes and analyses novel solutions based on the top-performing Point Pair Features voting approach to define a novel feature-based method for robust recognition and 6D pose estimation of partially occluded objects in cluttered scenarios.
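For readers unfamiliar with the underlying descriptor, the point pair feature used by this family of methods takes two oriented points (p1, n1) and (p2, n2) and encodes their relative geometry as four values: the distance between the points and three angles involving the normals and the difference vector. The sketch below is illustrative only, not the thesis's implementation, and assumes the two points are distinct:

```python
import math

def angle(u, v):
    """Angle in radians between two 3D vectors (assumed non-zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

def point_pair_feature(p1, n1, p2, n2):
    """Classic 4D point pair feature:
    F = (||d||, angle(n1, d), angle(n2, d), angle(n1, n2)),
    where d = p2 - p1 is the vector between the two points."""
    d = tuple(b - a for a, b in zip(p1, p2))
    return (math.sqrt(sum(c * c for c in d)),
            angle(n1, d), angle(n2, d), angle(n1, n2))
```

In the voting framework these features are quantized and used as keys into a model hash table, so that each matching scene pair casts a vote for a candidate object pose.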
    The research considers the drawbacks of current approaches to define a novel discriminative preprocessing solution, an improved matching method, a more robust clustering step and several view-dependent postprocessing steps. Focusing on the challenging occluded cases, the research also proposes an innovative solution based on top-down visual attention and color cues to boost performance when objects are only partially visible. The performance of the proposed method is evaluated against 14 state-of-the-art solutions on a comprehensive, publicly available benchmark with real-world scenarios under clutter and occlusion. The results show an outstanding improvement on all datasets, outperforming all tested state-of-the-art solutions. The validity of the proposed approach is shown for different types of objects and scenarios, especially boosting performance in cases with relatively low object visibility and thus extending the capacities of current 6D pose estimation methods. Finally, the practical value of the research is demonstrated by defining and testing a novel automatic offline programming solution for intelligent manufacturing. Specifically, an automatic robot integration system that exploits the robustness and benefits of the recognition and pose estimation method is proposed. The recognition method provides the workpiece pose information to a flexible offline programming platform, efficiently solving, in an autonomous way, a critical problem for robot integration in manufacturing scenarios. The system is tested in a series of experiments on real-world scenarios and compared against different existing solutions, showing the robustness and benefits of the method. Overall, the system shows the value and potential of a cutting-edge object recognition method for defining innovative intelligent solutions towards highly advanced autonomous systems.
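The color-weighted matching idea can be illustrated with a hypothetical scoring rule: a geometric vote is blended with a color-similarity term under a weight omega (the "omega weight factor" analyzed in Chapter 5). The function below is an assumed sketch of such a blend, not the thesis's actual formulation; the name and the normalization of the color distance are illustrative:

```python
def color_weighted_vote(geometric_vote, color_distance, omega=0.5):
    """Hypothetical blend of a geometric vote with a color cue.

    color_distance is assumed normalized to [0, 1] (0 = identical color);
    omega in [0, 1] controls the influence of the color term.
    """
    color_similarity = 1.0 - min(1.0, max(0.0, color_distance))
    return ((1.0 - omega) * geometric_vote
            + omega * geometric_vote * color_similarity)
```

With omega = 0 this reduces to the plain geometric vote; with omega = 1, votes from color-inconsistent pairs are suppressed entirely, which is the kind of behavior that helps when only a small, distinctly colored fragment of an occluded object is visible.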

    Chinese Abstract
    Abstract
    Acknowledgements
    List of Figures
    List of Tables
    1 Introduction
      1.1 Background and Motivation
      1.2 Visual Object Recognition
      1.3 Objectives and Scope of Study
      1.4 Structure of the Thesis
      1.5 Publications
    2 The Point Pair Features Voting Approach
      2.1 Introduction
      2.2 The Basics
      2.3 Related Methods
    3 A Novel Approach Based on Point Pair Features
      3.1 Method's Overview
      3.2 Preprocessing
        3.2.1 Normal Estimation
        3.2.2 Downsampling
      3.3 Feature Extraction
      3.4 Matching
      3.5 Hypothesis Generation
      3.6 Clustering
      3.7 Postprocessing
        3.7.1 Rescoring and Refining
        3.7.2 Hypothesis Verification
    4 Facing Occlusion with Visual Attention and Color
      4.1 The Occlusion Problem
      4.2 Visual Attention
      4.3 Color Cues to Improve Matching
      4.4 A Novel Solution for Occlusion
        4.4.1 Attention-Based Matching Using Color Cues
        4.4.2 Color Weighted Matching
        4.4.3 Color Models and Distance Metrics
    5 Evaluation and Results: Analysis on a Comprehensive Estimation Benchmark
      5.1 The BOP Pose Estimation Benchmark
      5.2 Method's Step and Parameter Analysis
        5.2.1 Normal Clustering, Matching and Rendered View
        5.2.2 Rescoring and ICP
        5.2.3 Alpha Value for Different Color Spaces
        5.2.4 Omega Weight Factor
      5.3 Performance Evaluation Using Depth
      5.4 Performance Evaluation Using Depth and Color
    6 Case Study: Automatic Robot Path Integration with Offline Programming and Range Data
      6.1 Introduction
      6.2 System Overview
        6.2.1 Kinect Sensor
        6.2.2 Off-line Programming Platform
      6.3 AOLP Integration
        6.3.1 Object Recognition
        6.3.2 Workpiece Transformation
        6.3.3 Path Generation by OLP
      6.4 Experimental Results
        6.4.1 Evaluation of the System Error
        6.4.2 System Robustness Analysis
        6.4.3 Comparison and Discussion
    7 Conclusions
    Bibliography

