
Graduate Student: Hong-Cyuan Wang (王泓權)
Thesis Title: New Image Fusion Algorithms based on Random Walk and Conditional Random Fields
Advisor: Kai-Lung Hua (花凱龍)
Committee Members: Wen-Nung Lie (賴文能), Wen-Huang Cheng (鄭文皇), Chia-Hung Yeh (葉家宏), Chuan-Yu Chang (張傳育), Shang-Hong Lai (賴尚宏), Jing-Ming Guo (郭景明)
Degree: Doctor
Department: College of Electrical Engineering and Computer Science, Department of Computer Science and Information Engineering
Year of Publication: 2017
Graduation Academic Year: 105
Language: English
Number of Pages: 110
Keywords: Image Fusion, Random Walk, Multi-Focus Image, Joint Gaussian Conditional Random Fields, Background Extraction, Background Initialization, MAP Estimation


Abstract:
    Thanks to technological advances, digital cameras and smartphone cameras have become increasingly common, and people can easily acquire image data through these devices. Applications that combine multiple images have likewise become widespread. High dynamic range imaging (HDRI) is a well-known example: a camera with HDRI capability takes several photos at different exposures and fuses them into a single image that preserves all the details in the scene. Inspired by this concept, we propose two novel image fusion algorithms, one for background extraction and one for multi-focus image fusion. The first part of this dissertation describes how we use a random walk model to extract a clean background image from a video sequence; the second part describes how we employ a joint Gaussian conditional random field model to obtain an all-in-focus image from multi-focus inputs.
    In the first part, we address background extraction, which generally assumes that a clean background shot exists somewhere in the input sequence; in practice, many situations, such as highway traffic videos, violate this assumption. We therefore propose a probabilistic method that formulates the fusion of candidate background patches from the input sequence as a random walk problem and seeks a globally optimal solution based on the patches' temporal and spatial relationships. We further design two quality measures, spatio-temporal coherence and contrast distinctness, as the basis for selecting background pixels. A static background has high temporal coherence across frames, so we improve fusion precision with a temporal contrast filter and an optical-flow-based motionless patch extractor. Experiments demonstrate that our algorithm successfully extracts artifact-free background images at a lower computational cost than state-of-the-art algorithms.
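    To make the random walk formulation concrete, the sketch below shows the combinatorial Dirichlet problem that this family of fusion methods reduces to, in the style of Grady's random walker. It is a minimal illustration under simplifying assumptions, not the dissertation's exact energy: the affinity matrix `weights`, the seed set, and the labels are hypothetical inputs that would, in the actual method, be derived from the two quality measures and the candidate background patches.

```python
import numpy as np

def random_walk_probabilities(weights, seeds, seed_labels, n_labels):
    """Solve the combinatorial Dirichlet problem behind random-walk fusion.

    For each unseeded node, compute the probability that a random walker
    starting there reaches a seed of each label first.  In a background
    extraction setting, nodes would be candidate patches, edge weights the
    quality-measure affinities, and labels the candidate background sources.

    weights:     (n, n) symmetric non-negative affinity matrix
    seeds:       1-D array of indices of nodes with fixed labels
    seed_labels: label index of each seed, in [0, n_labels)
    """
    n = weights.shape[0]
    L = np.diag(weights.sum(axis=1)) - weights   # graph Laplacian D - W
    free = np.setdiff1d(np.arange(n), seeds)     # unseeded nodes
    Lu = L[np.ix_(free, free)]                   # Laplacian block of free nodes
    B = L[np.ix_(free, seeds)]                   # coupling to the seeds
    M = np.eye(n_labels)[seed_labels]            # one-hot seed labels
    X = np.linalg.solve(Lu, -B @ M)              # one solve for all labels
    probs = np.zeros((n, n_labels))
    probs[seeds], probs[free] = M, X
    return probs                                 # each row sums to 1
```

    Each node is then assigned the label with the highest probability, so a globally optimal labeling falls out of a single linear solve rather than any iterative simulation of walkers.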
    In the second part, we address the limited depth of field of optical lenses, which usually makes it impossible to capture an image in which all relevant objects are in focus. Multi-focus image fusion overcomes this problem. Real-world scenes, however, contain moving objects as well as static ones. Most existing algorithms handle only static scenes and produce ghosting artifacts when moving objects are present. To solve this issue, we propose a novel multi-focus image fusion algorithm for dynamic scenes. First, the algorithm detects moving objects through dynamic scene analysis. Second, we design two features, a focus feature and a consistency feature, to evaluate each input image, and formulate the multi-focus fusion problem as a joint Gaussian conditional random field model. Finally, we use maximum a posteriori (MAP) estimation to compute a per-pixel weight for each input image and fuse the inputs into an all-in-focus image. Experimental results show that the proposed method outperforms state-of-the-art methods on both static and dynamic scenes.
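    As a rough illustration of the MAP step: in any Gaussian CRF the posterior is Gaussian, so the MAP estimate coincides with the posterior mean and is obtained by a single linear solve. The sketch below assumes a simplified scalar model per input image; `unary` and `unary_conf` stand in for the focus and consistency features (hypothetical placeholders, not the dissertation's exact definitions), and the pairwise term is an ordinary spatial smoothness Laplacian rather than the full joint model that couples all input images.

```python
import numpy as np

def gcrf_map_weights(unary, unary_conf, laplacian):
    """MAP inference for a Gaussian CRF with quadratic energy

        E(x) = sum_i c_i (x_i - u_i)^2 + sum_{i~j} w_ij (x_i - x_j)^2.

    Setting the gradient to zero gives the normal equations
        (C + L) x = C u,   with C = diag(c) and L the graph Laplacian,
    so the MAP weights come from one linear solve.

    unary:      per-pixel focus score u_i for one input image, flattened
    unary_conf: per-pixel confidence c_i (e.g., from a consistency feature)
    laplacian:  (n, n) Laplacian of the spatial smoothness weights
    """
    C = np.diag(unary_conf)
    x = np.linalg.solve(C + laplacian, unary_conf * unary)
    return np.clip(x, 0.0, 1.0)  # per-pixel fusion weight

# Fusing K inputs: run the solve once per image, normalize the resulting
# weights across images at each pixel, and blend:
#     fused = sum_k w_k * I_k / sum_k w_k
```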

Table of Contents:
    Abstract in Chinese ... iii
    Abstract in English ... iv
    Contents ... vi
    List of Figures ... viii
    List of Tables ... xviii
    List of Algorithms ... xix
    1 Background Extraction Using Random Walk Image Fusion ... 1
        1.1 Introduction ... 1
        1.2 Related Work ... 3
        1.3 Method ... 6
            1.3.1 Problem Formulation ... 9
            1.3.2 Random Walks for Background Extraction ... 9
            1.3.3 Quality Measure Function ... 12
            1.3.4 Algorithm Summarization ... 15
        1.4 Optical Flow Motion-less Acceleration ... 15
        1.5 Experimental Results and Discussion ... 18
            1.5.1 Video Sequences and Ground Truth ... 18
            1.5.2 Analysis of Free Parameters ... 20
            1.5.3 Results and Comparisons ... 21
        1.6 Summary ... 45
    2 Multi-Focus Image Fusion Based on Joint Gaussian Conditional Random Fields ... 46
        2.1 Introduction ... 46
        2.2 Method ... 50
            2.2.1 Problem Formulation ... 50
            2.2.2 Moving Object Detection ... 51
            2.2.3 Focus-Feature ... 53
            2.2.4 Consistency-Feature ... 54
            2.2.5 Joint Gaussian Conditional Random Field for Multi-Focus Image Fusion ... 55
            2.2.6 Weight of Multi-Focus Image ... 58
        2.3 Experimental Result ... 60
            2.3.1 Performance Evaluation for Static Scenes ... 60
            2.3.2 Performance Evaluation for Dynamic Scenes ... 82
            2.3.3 Evaluation of Computational Complexity ... 100
        2.4 Summary ... 102
    3 Conclusion and Future Work ... 103
    References ... 104


Full-Text Release Dates:
    2022/06/05 (campus network)
    2027/06/05 (off-campus network)
    2032/06/05 (National Central Library: Taiwan NDLTD system)