
Author: Yi-Cheng Chen (陳易承)
Title: Stereoscopic Vision Based 3D Edge Detection for Spatial Object Extraction Applications (立體視覺三維邊緣偵測應用於空間物件抽出)
Advisor: Chung-Hsien Kuo (郭重顯)
Committee: Kai-Tai Song (宋開泰), Gee-Sern Hsu (徐繼聖), Shun-Feng Su (蘇順豐), Chyi-Yeu Lin (林其禹)
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Year of publication: 2017
Academic year of graduation: 105 (2016–2017)
Language: Chinese
Pages: 69
Keywords (Chinese): 立體視覺, 相機校正與參數檢定, 自適應閥值前後景分離, 邊緣檢測
Keywords (English): stereo vision, camera calibration and parameter verification, adaptive-threshold foreground/background separation, edge detection

This thesis proposes a 3D object-edge extraction technique that combines deep-learning object classification with edge detection. Stereo imaging is commonly performed by block matching of image regions or by matching individual feature points; however, once a depth map has been built, these common approaches yield only depth values for any specific object in the scene, and analyzing object features from depth values alone is difficult. This thesis therefore combines a depth-imaging module with edge extraction and a stereo imaging model to locate and analyze specific objects. The method has three parts. First, deep learning recognizes and classifies the target objects and returns each object's region of interest (ROI); an image-segmentation algorithm then separates foreground from background within the ROI to obtain the object's image. Second, an edge-extraction algorithm applies an edge-detection operator to the object and clusters the detected edges by their characteristics to obtain matched edge correspondences. Third, a stereo-camera calibration model builds parameter equations from the stereo imaging formula and multiple pixel-point samples, inversely derives the camera's intrinsic parameters, and verifies and calibrates them. The calibrated camera model gives the transformation from 2D edges to 3D space. Finally, using the earlier deep-learning classification results, objects are grouped by class and combined with the extracted edge data to filter the 3D data and reconstruct important features. The system outputs these feature data to a previously developed robotic arm, which interacts with the objects to verify accuracy. In the future, this work can serve as automated image processing for personal mobility vehicles, using the image information to decide whether the on-board robotic arm should grasp an object.
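The edge-detection step described above rests on a gradient operator applied inside each segmented ROI. The following is a minimal NumPy sketch of Sobel gradient-magnitude edge detection with a simple fixed-ratio threshold; it is an illustration of the general technique only, not the thesis's implementation, which additionally uses adaptive thresholding, connected-component labeling, and edge thinning.

```python
import numpy as np

def sobel_edges(img, thresh=0.25):
    """Binarize the Sobel gradient magnitude of a grayscale image.

    `thresh` is a fraction of the maximum gradient magnitude; this
    fixed ratio is an illustrative stand-in for the adaptive
    thresholding used in the thesis.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical-gradient kernel is the transpose
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    # Valid-mode 3x3 correlation (no padding), written out explicitly.
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    mag = np.hypot(gx, gy)  # gradient magnitude
    return mag >= thresh * mag.max()
```

On a synthetic image with a vertical step between columns 4 and 5, the detector marks exactly the two output columns whose 3x3 window straddles the step.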


This study proposes an edge-extraction technique that combines deep learning with edge detection. Block matching or single-feature-point matching is generally used for stereo vision; however, with those approaches alone it is difficult to analyze specific objects in an image or to obtain their features. To overcome this problem, an edge-extraction technique combining deep learning with a stereo-vision model is proposed. In this study, deep learning classifies specific objects and yields the region of interest of each object in an image frame. Background subtraction with an adaptive threshold then extracts the object of interest from the image, and an edge detector obtains the object's edges, which are clustered by their characteristics. This edge information is computed for the images from both the left and the right cameras. A camera-calibration approach then estimates the parameters of both cameras using the stereo-vision formula, after which the object's 2D pixel coordinates are converted to the corresponding 3D coordinates. Finally, using the classification result from the deep-learning stage, the system filters out noise and reconstructs important object features.
Moreover, a robotic arm was used to evaluate this edge-extraction approach. The 3D coordinates of the object of interest obtained from the system were given as input to a universal-type robotic arm, and the arm's inverse kinematics were computed from these coordinates to grasp the object.
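The 2D-to-3D conversion mentioned above follows the standard rectified-stereo back-projection model, in which depth is Z = f·B/d for disparity d. A minimal sketch, assuming illustrative intrinsics (focal length f in pixels, baseline B, principal point (cx, cy)) rather than the calibrated values estimated in the thesis:

```python
def pixel_to_3d(u_l, v_l, u_r, f, B, cx, cy):
    """Back-project a matched left/right edge-pixel pair to 3D.

    Assumes a rectified stereo pair, so the match lies on the same
    image row and disparity is the horizontal pixel offset.
    """
    d = u_l - u_r  # disparity between left and right x-coordinates
    if d <= 0:
        raise ValueError("non-positive disparity cannot be triangulated")
    Z = f * B / d            # depth along the optical axis
    X = (u_l - cx) * Z / f   # lateral offset from the principal point
    Y = (v_l - cy) * Z / f   # vertical offset from the principal point
    return X, Y, Z
```

For example, with f = 700 px, B = 0.12 m, and a 20-pixel disparity, the point lies at a depth of 700 × 0.12 / 20 = 4.2 m. Applying this per matched edge pixel yields the 3D edge points that the later filtering and reconstruction stages consume.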

Table of Contents

Advisor Recommendation Letter; Committee Approval Certificate; Authorization Letter; Acknowledgements; Abstract (Chinese); Abstract (English); Table of Contents; List of Tables; List of Figures; Nomenclature

Chapter 1  Introduction
  1.1  Research Background and Motivation
  1.2  Research Objectives
  1.3  Literature Review
    1.3.1  Stereo Vision and Reconstruction
    1.3.2  Edge Detection
    1.3.3  Camera Calibration
  1.4  Thesis Organization
Chapter 2  Methodology
  2.1  Imaging System Architecture
    2.1.1  Image Sensor
    2.1.2  System Architecture
  2.2  Deep-Learning Regions of Interest
  2.3  Image Preprocessing
    2.3.1  Foreground/Background Separation
    2.3.2  Sobel Edge Detection
    2.3.3  Connected-Component Labeling
  2.4  Edge Matching
    2.4.1  Image Morphology
    2.4.2  Edge Thinning
    2.4.3  Edge Clustering
    2.4.4  Filtering of Extreme Variations
    2.4.5  Edge-Curve Analysis
    2.4.6  Corresponding-Edge Interpolation
    2.4.7  Edge-Pixel Curve Fitting
Chapter 3  3D Edge Extraction and Camera Parameter Verification
  3.1  3D Edge-Processing Pipeline
  3.2  Stereo Imaging Model
  3.3  Class-Based Noise Filtering
    3.3.1  3D Edge-Length Filtering
    3.3.2  Edge-Jaggedness Smoothing
  3.4  3D Object-Edge Reconstruction
    3.4.1  Cup Reconstruction
    3.4.2  Box Reconstruction
  3.5  Image Features of Reconstructed Objects
    3.5.1  Object Height
    3.5.2  Grasp Center Point
    3.5.3  Object Width
Chapter 4  Experimental Results and Analysis
  4.1  Image Results After Camera Parameter Verification
  4.2  Comparison of ZED Depth and Extracted Edge Depth
  4.3  Extracted Object Features
Chapter 5  Conclusions and Future Work
  5.1  Conclusions
  5.2  Future Research Directions
References


Full-text release date: 2022/08/16 (campus network)
Full text not authorized for public release (off-campus network)
Full text not authorized for public release (National Central Library: Taiwan NDLTD system)