
Student: 蔡鈞涵 (Chun-Han Tsai)
Thesis Title: Development and Research of Applying Template Matching Technique on Marking Point Positioning on PCB (應用模板匹配技術於印刷電路板上標示點定位之開發與研究)
Advisor: 郭中豐 (Chung-Feng Jeffrey Kuo)
Committee Members: 張嘉德 (Chia-Der Chang), 黃昌群 (Chang-Chiun Huang)
Degree: Master
Department: College of Engineering - Graduate Institute of Automation and Control
Publication Year: 2013
Graduation Academic Year: 101 (ROC calendar)
Language: Chinese
Pages: 104
Keywords: Image registration, Neural network, Feature vector, Parametric template vector matching, Fast template matching, 3-D surface fitting


    The substrate on which electronic components are mounted is known as the printed circuit board (PCB). The stability and quality of the PCB therefore directly affect the reliability and stability of the electronic product. Integrated circuit packages on the PCB protect the internal chips with the packaging material. The packaging process includes two critical steps: die bonding and wire bonding. Accurate die-bonding positioning reduces gold-wire offset during the wire bonding process.
    Traditional global template matching is time-consuming, of low accuracy, and lacks adaptability to rotation and scale changes. The template matching technique proposed in this study improves on time, accuracy, and robustness, and consists of three parts. 1) Digital image preprocessing: to reduce the time spent positioning within the global PCB image, image processing is performed on the PCB image and each image region is labeled to obtain candidate mark images. 2) Feature vector extraction and marking-point region image selection: feature vectors adaptable to rotation and scale changes are extracted from the labeled regions of the PCB image and, together with image moments, are used to train a neural network that selects the marking-point region image. 3) Robust template matching positioning: once the marking-point region image is obtained, its scale is estimated by parametric template vector matching and its deflection angle is computed with the Hough transform; the estimated scale and angle are then used in fast template matching to locate the marking point. Finally, three-dimensional surface fitting over the matched position and its neighboring pixels achieves sub-pixel accuracy.
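The fast normalized template matching used in the final positioning step is based on normalized cross-correlation (NCC). As a rough illustration only (a brute-force sketch, not the thesis implementation — the pyramid preprocessing, candidate-region selection, and scale/angle compensation that make the method fast are omitted, and the function name `ncc_match` is an assumption of this sketch), an exhaustive NCC search can be written in plain Python:

```python
import math

def ncc_match(image, template):
    """Slide `template` over `image` (both lists of pixel rows) and
    return the (row, col) with the best normalized cross-correlation
    score, together with that score. Flat windows with zero variance
    are skipped because NCC is undefined there."""
    H, W = len(image), len(image[0])
    h, w = len(template), len(template[0])
    t_vals = [v for row in template for v in row]
    t_mean = sum(t_vals) / len(t_vals)
    t_dev = [v - t_mean for v in t_vals]
    t_norm = math.sqrt(sum(d * d for d in t_dev))
    best_score, best_pos = -2.0, (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            win = [image[r + i][c + j] for i in range(h) for j in range(w)]
            w_mean = sum(win) / len(win)
            w_dev = [v - w_mean for v in win]
            w_norm = math.sqrt(sum(d * d for d in w_dev))
            if w_norm == 0 or t_norm == 0:
                continue  # constant patch: correlation undefined
            score = sum(a * b for a, b in zip(t_dev, w_dev)) / (t_norm * w_norm)
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score
```

Running it on a synthetic image with the template embedded returns the embedding position with a score of 1.0 for an exact match; the speed-ups reported in the abstract come from restricting such a search to the selected marking-point region instead of the whole PCB image.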
    The experimental results show that with the proposed template matching technique, on PCB images both with and without noise or angular rotation, the average position error of every translated image is below  m and the error standard deviation is also below  m. For rotation-angle estimation, the Hough transform keeps the overall mean angle error and angle-error standard deviation below  , performing more accurately than the gradient orientation code (OC) method. For scale estimation, the mean relative error and its standard deviation are below 0.004 and 0.006, respectively, on images with and without noise. Moreover, completely positioning a PCB image with a resolution of  takes only 0.55 s on average, better than the 3.97 s of traditional global template matching. These results confirm that the proposed technique achieves sub-pixel accuracy at low computation time and is robust against rotation and scale changes, yielding fast, efficient, and accurate positioning.
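The sub-pixel accuracy quoted above comes from fitting a surface to the correlation scores around the integer-pixel peak. As a minimal, separable stand-in for the thesis's three-dimensional surface fitting (the 3x3 neighborhood layout and the function name `subpixel_peak` are assumptions of this sketch), a per-axis parabola fit gives the fractional offset of the true maximum:

```python
def subpixel_peak(s):
    """Refine an integer-pixel correlation peak to sub-pixel accuracy.
    `s` is a 3x3 neighborhood of match scores centered on the integer
    peak s[1][1]. Each axis is fitted with a parabola through three
    samples; the returned (dx, dy) offsets are relative to the center
    pixel and lie in (-1, 1) when the center is a genuine maximum."""
    def offset(left, mid, right):
        # Vertex of the parabola through (-1, left), (0, mid), (1, right).
        denom = left - 2.0 * mid + right
        return 0.0 if denom == 0 else 0.5 * (left - right) / denom
    dx = offset(s[1][0], s[1][1], s[1][2])  # horizontal refinement
    dy = offset(s[0][1], s[1][1], s[2][1])  # vertical refinement
    return dx, dy
```

For a score surface that is truly quadratic near the peak, this fit is exact: sampling 1 - (x - 0.3)^2 - (y + 0.2)^2 on the 3x3 grid recovers the offsets (0.3, -0.2).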

    Abstract (Chinese)
    Abstract (English)
    Acknowledgments
    Table of Contents
    List of Figures
    List of Tables
    Chapter 1  Introduction
      1.1 Research Background and Motivation
      1.2 Research Objectives
      1.3 Literature Review
        1.3.1 Feature-based matching
        1.3.2 Area-based matching
      1.4 Thesis Organization
        1.4.1 Research framework
    Chapter 2  Digital Image Preprocessing
      2.1 Pyramid Multi-resolution
        2.1.1 Gaussian pyramid
      2.2 Image Threshold Segmentation
        2.2.1 Otsu's method
      2.3 Morphological Image Processing
        2.3.1 Dilation
        2.3.2 Erosion
        2.3.3 Opening
      2.4 Connected-Component Labeling
    Chapter 3  Feature Vector Extraction and Marking-Point Region Image Selection
      3.1 Feature Vector Extraction
        3.1.1 Ring projection transform
        3.1.2 Radial inverse discrete Fourier transform coefficients
        3.1.3 Radial intensity vector
        3.1.4 Radial angle vector
        3.1.5 Ring projection sum feature vector
        3.1.6 Feature vector combination
        3.1.7 Hu invariant moments
        3.1.8 Zernike invariant moments
      3.2 Neural Networks
        3.2.1 Error backpropagation learning rule
    Chapter 4  Robust Template Matching Positioning Procedure
      4.1 Parametric Template Vector Matching
      4.2 Hough Transform
      4.3 Fast Normalized Template Matching
      4.4 Sub-pixel Accuracy Matching
    Chapter 5  Experimental Results and Discussion
      5.1 Experimental Machine Setup
      5.2 Matching Position Accuracy on Noise-Free Images
      5.3 Matching Position Accuracy on Noisy Images
      5.4 Accuracy of Image Rotation Angle Estimation
      5.5 Accuracy of Scale Extraction
    Chapter 6  Conclusions and Future Research Directions
      6.1 Digital Image Preprocessing
      6.2 Feature Vector Extraction and Marking-Point Region Image Selection
      6.3 Robust Template Matching
      6.4 Experimental Verification
      6.5 Future Research Directions
    References


    Full-text release date: 2018/08/01 (campus network)
    Full text not authorized for public release (off-campus network)
    Full text not authorized for public release (National Central Library: Taiwan NDLTD system)