
Student: Ching-Sheng Wang (王靖升)
Thesis title: Fast and Effective Region-based Depth Map Upsampling with Application to Location Map-Free Reversible Data Hiding (快速且有效基於區域的深度圖放大及其在無位置圖之可逆式資料隱藏應用)
Advisors: Kuo-Liang Chung (鍾國亮), Yuan-Shin Hwang (黃元欣)
Oral defense committee: Soo-Chang Pei (貝蘇章), Hong-Yuan Mark Liao (廖弘源), Kuo-Chin Fan (范國清), Kuo-Liang Chung (鍾國亮), Yuan-Shin Hwang (黃元欣)
Degree: Master
Department: Department of Computer Science and Information Engineering, College of Electrical Engineering and Computer Science
Publication year: 2019
Academic year of graduation: 107 (ROC calendar)
Language: English
Number of pages: 51
Keywords: Bicubic interpolation, color plus depth video coding, depth map upsampling, depth no-synthesis-error (D-NOSE), quality, reversible data hiding
  • This thesis proposes a fast and effective region-based depth map upsampling method together with its application to location map-free reversible data hiding. In the proposed upsampling method, all missing depth pixels are first partitioned into three disjoint regions: homogeneous, semi-homogeneous, and non-homogeneous. We then propose depth copying, mean value, and bicubic interpolation approaches to quickly reconstruct the homogeneous, semi-homogeneous, and non-homogeneous missing depth pixels, respectively. Furthermore, based on a special constraint on the neighboring original depth pixels of each missing depth pixel, we propose an effective joint depth map upsampling and location map-free reversible data hiding (JUR) method. Experimental results show that our upsampling method offers execution-time and quality advantages over state-of-the-art depth map upsampling methods, and that our JUR method provides higher embedding capacity and better quality than state-of-the-art data hiding methods.


    In this thesis, we propose a fast and effective novel region-based depth map upsampling method and its application to location map-free reversible data hiding. In the proposed upsampling method, all the missing depth pixels are first partitioned into three disjoint regions: the homogeneous, semi-homogeneous, and non-homogeneous regions. Then, we propose the depth copying, mean value, and bicubic interpolation approaches to quickly reconstruct the homogeneous, semi-homogeneous, and non-homogeneous missing depth pixels, respectively. Furthermore, according to a special constraint on the neighboring true depth pixels of each missing depth pixel, we propose an effective joint depth map upsampling and location map-free reversible data hiding method, called the JUR method. Based on typical test depth maps, comprehensive experiments have been carried out not only to justify the execution-time and quality merits of the depth maps upsampled by our method relative to the state-of-the-art methods, but also to justify the embedding capacity and quality merits of our JUR method when compared with the state-of-the-art methods.
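
    The abstract describes which approach is applied to which region, but not the exact classification rule (the thesis's Condition 1 and its D-NOSE-based constraint are defined only in the full text). The Python sketch below therefore only illustrates the general region-based idea under assumed rules: the four-neighbour spread test, the threshold tau, and the function name upsample_region_based are illustrative placeholders, not the thesis's actual definitions.

# Minimal sketch of region-based depth map upsampling.
# Assumptions: the homogeneity test (spread of the four nearest true depth
# samples against a threshold `tau`) stands in for the thesis's Condition 1,
# and cubic-spline zooming stands in for its bicubic interpolation step.
import numpy as np
from scipy.ndimage import zoom

def upsample_region_based(low_depth, factor, tau=1):
    """Upsample a low-resolution depth map by an integer factor."""
    h, w = low_depth.shape
    H, W = h * factor, w * factor
    high = np.full((H, W), np.nan)
    high[::factor, ::factor] = low_depth                      # keep the true depth samples
    bicubic = zoom(low_depth.astype(float), factor, order=3)  # fallback reconstruction

    for y in range(H):
        for x in range(W):
            if not np.isnan(high[y, x]):
                continue                          # true depth pixel, keep as-is
            # Coordinates of the four nearest true samples (clamped at borders).
            y0, x0 = (y // factor) * factor, (x // factor) * factor
            y1 = min(y0 + factor, (h - 1) * factor)
            x1 = min(x0 + factor, (w - 1) * factor)
            nbrs = np.array([high[y0, x0], high[y0, x1],
                             high[y1, x0], high[y1, x1]])
            spread = nbrs.max() - nbrs.min()
            if spread == 0:                       # homogeneous region
                high[y, x] = nbrs[0]              #   -> depth copying
            elif spread <= tau:                   # semi-homogeneous region
                high[y, x] = nbrs.mean()          #   -> mean value
            else:                                 # non-homogeneous region
                high[y, x] = bicubic[y, x]        #   -> bicubic interpolation
    return high

    The design point the abstract emphasizes carries over to the sketch: the comparatively expensive interpolation is only invoked for non-homogeneous pixels, while homogeneous pixels are filled by simple copying, which is presumably where the reported execution-time advantage comes from.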

    Advisor's Recommendation Letter I
    Thesis Oral Defense Committee Certification II
    Abstract in Chinese III
    Abstract in English IV
    Acknowledgements V
    Contents VI
    List of Figures IX
    List of Tables X
    1 Introduction 1
      1.1 Related Works 2
        1.1.1 Related works for depth map upsampling 2
        1.1.2 Related works for reversible data hiding for depth maps 4
      1.2 Motivations 5
      1.3 Contributions 6
    2 Partition All Missing Depth Pixels into Three Disjoint Regions 9
      2.1 Fast Identify the Homogeneous Missing Depth Pixels 9
        2.1.1 Definition of homogeneous missing depth pixel 10
        2.1.2 Speed up the checking of Condition 1 10
      2.2 Fast Identify the Semi-homogeneous Missing Depth Pixels 13
      2.3 Identify the Non-homogeneous Missing Depth Pixels 16
    3 The Proposed Region-based Depth Map Upsampling Method 17
      3.1 Constructing Homogeneous Missing Depth Pixels by the Depth Copying (DC) Approach 19
      3.2 Constructing Semi-homogeneous Missing Depth Pixels by the Mean Value (MV) Approach 19
      3.3 Constructing Non-homogeneous Missing Depth Pixels by the Bicubic Interpolation (BI) Related Approach 20
    4 The Proposed Joint Upsampling and Location Map-Free Reversible Data Hiding Method: JUR 23
      4.1 Upsampling and Embedding Process 23
      4.2 Extracting and Recovering Process 24
    5 Experimental Results 26
      5.1 Performance Comparison among the Concerned Depth Map Upsampling Methods 27
        5.1.1 PSNR and SSIM comparison 27
        5.1.2 Visual effect comparison 28
        5.1.3 Execution time comparison 28
      5.2 Embedding Capacity and Quality Merits of Our JUR Method 30
        5.2.1 Maximal embedding capacity comparison 31
        5.2.2 PSNR comparison for different values of embedding capacity 32
    6 Conclusions 35


    Full-text release date: 2024/05/28 (campus network)
    Full-text release date: 2024/05/28 (off-campus network)
    Full-text release date: 2024/05/28 (National Central Library: Taiwan thesis and dissertation system)