| Graduate Student | 張家瑜 Chia-Yu Chang |
|---|---|
| Thesis Title | Wiggle Stereoscopy之生成研究 (A Study on Generation of Wiggle Stereoscopy) |
| Advisor | 楊傳凱 Chuan-Kai Yang |
| Oral Defense Committee | 孫沛立 Pei-Li Sun, 花凱龍 Kai-Lung Hua |
| Degree | Master |
| Department | School of Management, Department of Information Management |
| Year of Publication | 2016 |
| Academic Year of Graduation | 104 |
| Language | Chinese |
| Pages | 57 |
| Chinese Keywords | 裸眼三維立體顯示 (autostereoscopic 3D display), 影像內插 (view interpolation), wiggle stereoscopy |
| English Keywords | multi-view autostereoscopic display, view interpolation, wiggle stereoscopy |
Because stereoscopic vision technology offers people diverse experiences, it has become increasingly popular and is widely applied in many areas. Among these, multi-view autostereoscopic display not only gives users an immersive experience but also requires no special wearable equipment, letting users experience stereoscopic vision more easily. On the Internet today, many users present photographs using wiggle stereoscopy, but if the image content itself is not well adjusted, the perceived stereoscopic effect is very limited.
This study provides a smooth autostereoscopic display that simulates the scene a person sees when changing viewing angle from a fixed position, conveying image depth through motion parallax [1]. It appropriately repairs artifacts, such as blur and ghosting, produced by view interpolation over the image sequence, and additionally takes the human visual system into account to locate the fixation point, making the autostereoscopic effect more realistic. The system first analyzes a stereo image pair using the stereo matching technique "Probabilistic Correspondence Matching using Random Walk with Restart" proposed by Changjae Oh et al. [5] to obtain a disparity map. It then repairs the erroneously matched points, uses the repaired disparity map to perform a rendering check that reduces blur and ghosting, and finally uses a saliency map to find the human fixation point and align the images.
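The fixation-based alignment at the end of the pipeline can be sketched roughly as follows. This is a minimal NumPy illustration, not the thesis's actual method (which relies on a learned saliency model such as GBVS [21]); the function name `align_to_fixation` and the peak-of-saliency fixation estimate are illustrative assumptions.

```python
import numpy as np

def align_to_fixation(frames, saliency_maps):
    """Shift every frame so that its saliency peak coincides with the
    first frame's peak: the fixation point then stays still while the
    rest of the scene exhibits the wiggle parallax."""
    peaks = [np.unravel_index(np.argmax(s), s.shape) for s in saliency_maps]
    ref_y, ref_x = peaks[0]  # reference fixation point from the first frame
    aligned = []
    for frame, (py, px) in zip(frames, peaks):
        # np.roll is a crude stand-in for a proper translation with
        # border handling; it wraps pixels around the image edges.
        aligned.append(np.roll(frame, shift=(ref_y - py, ref_x - px), axis=(0, 1)))
    return aligned
```

Aligning on the fixation point matters because any residual motion at the point the viewer is staring at reads as jitter, whereas motion away from it reads as depth.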
As stereo vision techniques can provide users with diverse experiences, they have become increasingly popular and are applied in various fields. One of these techniques is the multi-view autostereoscopic display, which offers users an immersive experience and requires no wearable device to achieve a three-dimensional effect, providing an easier path for amateur users. Many wiggle-stereoscopy creations now circulate online and can be shared easily; however, if the image content is not processed properly, the stereoscopic effect is poorly conveyed.
This paper proposes a flat automultiscopic display that simulates the views a person would see from a fixed position at different viewing angles, with motion parallax providing the depth cue [1]. Given an image sequence, we apply rectification to remove artifacts such as motion blur and stereoscopic ghosting, and we take the human visual system into account to enhance the automultiscopic effect. Our system adopts the "Probabilistic Correspondence Matching using Random Walk with Restart" method of Changjae Oh et al. [5]. Once the disparity map is obtained, we repair the invalid regions containing mismatches and perform a rendering check, reducing image blur and ghosting. Finally, we use a saliency map to locate the fixation point of human vision and align the images.
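To make the view-synthesis step concrete, the sketch below generates wiggle frames from one image of a stereo pair plus its disparity map by forward warping, with naive row-wise hole filling standing in for the thesis's repair and rendering-check steps. It assumes an integer, nonnegative disparity map is already available (the thesis obtains it with probabilistic correspondence matching [5]); `interpolate_view` and `wiggle_sequence` are hypothetical names, not the thesis's implementation.

```python
import numpy as np

def interpolate_view(left, disparity, alpha):
    """Forward-warp `left` toward the right view by a fraction `alpha`
    of the per-pixel disparity (alpha=0 -> left view, alpha=1 -> right
    view). Disoccluded holes are filled from the nearest valid pixel on
    the same row -- a crude stand-in for the repair step described above.
    Assumes nonnegative integer pixel values, so -1 can mark holes."""
    h, w = disparity.shape
    out = np.full_like(left, -1)
    for y in range(h):
        for x in range(w):
            xt = x - int(round(alpha * disparity[y, x]))  # shift toward right view
            if 0 <= xt < w:
                out[y, xt] = left[y, x]
    for y in range(h):  # naive hole filling: propagate last valid value
        last = 0
        for x in range(w):
            if out[y, x] == -1:
                out[y, x] = last
            else:
                last = out[y, x]
    return out

def wiggle_sequence(left, disparity, n_frames=4):
    """Frames that ping-pong between the two viewpoints; played in a
    loop they produce the motion-parallax 'wiggle' effect."""
    alphas = list(np.linspace(0.0, 1.0, n_frames))
    return [interpolate_view(left, disparity, a) for a in alphas + alphas[-2:0:-1]]
```

A real implementation would warp in floating point and blend contributions from both input views, which is where the blur and ghosting artifacts the thesis addresses originate.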
[1]. Martin Rerabek, Lutz Goldmann, Jong-Seok Lee and Touradj Ebrahimi. Motion Parallax Based Restitution of 3D Images on Legacy Consumer Mobile Devices. In IEEE International Workshop on Multimedia Signal Processing, 2011.
[2]. Heung-Yeung Shum and Sing Bing Kang. A Review of Image-based Rendering Techniques. In Visual Communications and Image Processing, 2000.
[3]. Michael Schmeing and Xiaoyi Jiang. Depth Image Based Rendering. In Pattern Recognition, Machine Intelligence and Biometrics, pp. 279-301, 2011.
[4]. D. Scharstein and R. Szeliski. A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms. In International Journal of Computer Vision, Vol. 47, No. 1/2/3, pp. 7-42, 2002.
[5]. Changjae Oh, Bumsub Ham and Kwanghoon Sohn. Probabilistic Correspondence Matching using Random Walk with Restart. In British Machine Vision Conference, 2012.
[6]. Xia Hu, Caiming Zhang, Wei Wang and Xifeng Gao. Disparity Adjustment for Local Stereo Matching. In IEEE International Conference on Computer and Information Technology, 2010.
[7]. Manuel Lang, Alexander Hornung, Oliver Wang, Steven Poulakos, Aljoscha Smolic and Markus Gross. Nonlinear Disparity Mapping for Stereoscopic 3D. In ACM Transactions on Graphics, 2010.
[8]. Bumsub Ham, Dongbo Min, Changjae Oh, Minh N. Do and Kwanghoon Sohn. Probability-Based Rendering for View Synthesis. In IEEE Transactions on Image Processing, Vol. 23, No. 2, Feb. 2014.
[9]. Piotr Didyk, Pitchaya Sitthi-Amorn, William Freeman, Fredo Durand and Wojciech Matusik. Joint View Expansion and Filtering for Automultiscopic 3D Displays. In ACM Transactions on Graphics, Vol. 32, No. 6, 2013.
[10]. Wenjing Geng, Ran Ju, Xiangyang Xu, Tongwei Ren and Gangshan Wu. Flat3D: Browsing Stereo Images on a Conventional Screen. In Multimedia Modeling, Springer International Publishing, 2015.
[11]. Hosik Sohn, Yong Ju Jung, Seong-il Lee and Yong Man Ro. Predicting Visual Discomfort Using Object Size and Disparity Information in Stereoscopic Images. In IEEE Transactions on Broadcasting, Vol. 59, No. 1, 2013.
[12]. Yong Ju Jung, Hosik Sohn, Seong-il Lee, Hyun Wook Park and Yong Man Ro. Predicting Visual Discomfort of Stereoscopic Images Using Human Attention Model. In IEEE Transactions on Circuits and Systems for Video Technology, Vol. 23, No. 12, 2013.
[13]. O. Le Meur, P. Le Callet and D. Barba. Predicting Visual Fixation on Video Based on Low-Level Visual Features. In Vision Research, Vol. 47, No. 19, pp. 2483-2498, 2007.
[14]. J. Choi, D. Kim, S. Choi and K. Sohn. Visual Fatigue Evaluation and Enhancement for 2D-plus-Depth Video. In IEEE ICIP, pp. 2981-2984, 2010.
[15]. M. Lambooij, W. A. IJsselsteijn and I. Heynderickx. Visual Discomfort of 3D TV: Assessment Methods and Modeling. In Displays, Vol. 32, No. 4, pp. 209-218, 2011.
[16]. D. Kim and K. Sohn. Visual Fatigue Prediction for Stereoscopic Image. In IEEE Transactions on Circuits and Systems for Video Technology, Vol. 21, No. 2, pp. 231-236, 2011.
[17]. Chien-Yu Hou and Chuan-Kai Yang. Stereoscopic 3D Stippling. Department of Information Management, NTUST, Taipei.
[18]. Dorin Comaniciu and Peter Meer. Mean Shift: A Robust Approach Toward Feature Space Analysis. In IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 5, 2002.
[19]. EDISON: http://coewww.rutgers.edu/riul/research/code/EDISON/doc/help.html
[20]. Yongjin Kim, Holger Winnemoller and Seungyong Lee. WYSIWYG Stereo Painting with Usability Enhancements. In IEEE Transactions on Visualization and Computer Graphics, 2014.
[21]. Jonathan Harel, Christof Koch and Pietro Perona. Graph-Based Visual Saliency. In Advances in Neural Information Processing Systems, 2006.
[22]. Ran Ju, Ling Ge, Wenjing Geng, Tongwei Ren and Gangshan Wu. Depth Saliency Based on Anisotropic Center-Surround Difference. In IEEE ICIP, 2014.
[23]. D. Scharstein and R. Szeliski. High-Accuracy Stereo Depth Maps Using Structured Light. In IEEE Computer Vision and Pattern Recognition, Vol. 1, pp. 195-202, 2003.
[24]. D. Scharstein and C. Pal. Learning Conditional Random Fields for Stereo. In IEEE Computer Vision and Pattern Recognition, 2007.