
Author: 蕭毓庭 (Yu-Ting Hsiao)
Thesis title: 光學穿透式頭戴顯示器影像品質優化與自動化前景影像分割 (Optical See-through Head-mounted Display Image Quality Optimization and Automatic Foreground Image Segmentation)
Advisor: 孫沛立 (Pei-Li Sun)
Committee members: 溫照華 (Chao-Hua Wen), 孫沛立 (Pei-Li Sun), 陳鴻興 (Hung-Shing Chen), 林宗翰 (Tzung-Han Lin)
Degree: Master
Department: 應用科技學院 - 色彩與照明科技研究所 (Graduate Institute of Color and Illumination Technology)
Year of publication: 2018
Graduation academic year: 106
Language: Chinese
Pages: 90
Keywords (Chinese): 光學透視頭戴顯示器、混合實境、影像優化、影像融合、感興趣物件分割
Keywords (English): optical see-through head-mounted display (OST-HMD), mixed reality, image quality optimization, image fusion, ROI image segmentation
    In recent years, optical see-through head-mounted displays (OST-HMDs) have been widely used for augmented and mixed reality, in which virtual objects are blended with the real environment to create a new experience; current application areas include drone piloting, medicine, education, and commercial entertainment. However, the displayed image content and quality of an OST-HMD determine the visibility of virtual objects. To optimize the rendering of virtual content, this study uses psychophysical experiments to derive image-processing optimization models for virtual objects under different real-environment conditions, and finally proposes a foreground object segmentation technique applicable to low-cost OST-HMDs.
    Based on the imaging characteristics of the OST-HMD, this study designs optimization parameters for overlaying virtual images on the real environment. The OST-HMD used here (an HM-OLED device) suffers from several issues: when the virtual image luminance is too high, the virtual image appears unnaturally bright; when the image content contains dark regions, black or dark areas become indistinct or the black image information cannot be displayed at all; and the virtual image is also affected by ambient light. A series of image post-processing steps was therefore designed, and psychophysical experiments let observers adjust the designed image-processing parameters so that the virtual image and the real scene remain simultaneously clear and a sense of immersion is produced.
    In a real environment, however, when a virtual image is overlaid on a real object, the real object may itself be too bright, in addition to the dark-region visibility problem caused by the display characteristics described above, and this reduces the visibility of the virtual image, for example when the object to be covered has a specular highlight material or is self-luminous. To allow virtual objects displayed on the OST-HMD to compete with such luminance, this study simulates different levels of glossy highlights and uses image-based detection of the highlight edge pixel values to estimate the peak luminance of the highlight. The relationship between highlight luminance and display parameters is then identified to produce an optimized OST-HMD display mode that adapts to different local high-luminance regions.
    Finally, building on the above image-quality investigation, this study proposes a low-cost foreground object segmentation technique. The relative depth of scene objects is estimated from the translational parallax of a single camera, and a graph-based, energy-function image segmentation method (GrabCut) is then applied; combining the foreground segmentation with the GrabCut algorithm yields the final region of interest, allowing the user to separate the foreground object from the scene without manual interaction and place it into the desired real scene. This work can be further applied to fields such as the Internet of Things (IoT) and interactive gaming.


    Nowadays, mixed reality and augmented reality are becoming increasingly popular. These technologies can be used in entertainment, medical applications, military inspection, and other fields. People can experience augmented reality through devices such as mobile phones, optical see-through head-mounted displays (OST-HMDs), transparent displays, and mixed reality headsets. In this study, an optical see-through HMD is chosen as the experimental device. However, the virtual image quality of an OST-HMD affects the visibility and fidelity of the displayed information. With a view to optimizing virtual image quality, a psychophysical experiment is conducted to obtain the optimal parameters of the rendering model under different background characteristics. Finally, the optimal OST-HMD image rendering model is applied to automatic ROI image segmentation with low-cost hardware requirements.
    First, optimized parameters are designed for the problems that arise when a virtual image is overlaid on a real scene. If the white-point luminance of the display is much higher than that of the scene, the virtual objects appear too bright against the scene. Furthermore, the dark areas of the virtual objects become transparent, making the objects behind them visible, and the tone and colorfulness of the virtual objects are changed by luminance mixing with the background. Through an image rendering method and a psychophysical experiment, a proper image luminance ratio is obtained while keeping the details of dark image regions visible.
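    As an illustration of this kind of adjustment (a minimal sketch only, not the rendering model fitted in this thesis), the Python function below applies a black-level lift and a global luminance scale so that dark virtual content is not rendered as fully transparent on an additive OST display; the parameter values and the name render_for_ost are placeholders.

        import numpy as np

        def render_for_ost(img, black_lift=0.15, lum_ratio=0.6, gamma=2.2):
            """Illustrative OST-HMD rendering adjustment (placeholder parameters).

            img is a float RGB image in [0, 1]. black_lift raises the shadows so
            dark content still emits some light on an additive display, and
            lum_ratio scales the virtual image relative to the scene luminance.
            """
            linear = np.clip(img, 0.0, 1.0) ** gamma           # approximate linear light
            lifted = black_lift + (1.0 - black_lift) * linear  # raise the black level
            scaled = lum_ratio * lifted                        # avoid an over-bright overlay
            return np.clip(scaled, 0.0, 1.0) ** (1.0 / gamma)  # back to display encoding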
    Sometimes there are highlight regions in the real scene, and it is difficult to overlay a virtual image on them. Surfaces with different roughness produce different highlight peaks. In this study, a high-glossiness image thresholding method is used to detect the glossy area and estimate its influence on the optimal image rendering parameters.
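    A rough sketch of such a thresholding step in Python with OpenCV is given below; treating the top percentile of luminance as the highlight core and extrapolating a peak from its edge pixels is an assumption for illustration, not the exact detection procedure of Chapter 4.

        import cv2
        import numpy as np

        def detect_highlight(bgr, percentile=99.0):
            """Locate glossy highlight pixels and estimate a peak value from the
            edge of the detected region (illustrative thresholds only)."""
            gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
            thresh = np.percentile(gray, percentile)    # brightest pixels form the highlight core
            mask = (gray >= thresh).astype(np.uint8) * 255
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_NONE)
            if not contours:
                return mask, None
            edge = np.vstack([c.reshape(-1, 2) for c in contours])
            edge_vals = gray[edge[:, 1], edge[:, 0]].astype(np.float32)
            # Extrapolate a peak estimate from the highlight-edge statistics.
            peak_estimate = float(edge_vals.mean() + 2.0 * edge_vals.std())
            return mask, peak_estimate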
    Finally, an automatic foreground (region of interest, ROI) segmentation method is proposed. By segmenting the foreground region of a disparity map, a rough ROI is detected; this result is then used to initialize the GrabCut algorithm, producing a more accurate segmentation without manual interaction. The ROI image is also rendered for display on the OST-HMD.
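    The pipeline can be sketched with OpenCV as follows, assuming the two frames captured by the translated single camera are already rectified; the StereoSGBM settings and the disparity threshold are placeholder values rather than the thesis configuration.

        import cv2
        import numpy as np

        def segment_foreground(left_bgr, right_bgr, disp_thresh=32, iters=5):
            """Disparity-based rough foreground mask refined by GrabCut
            (mask-initialised, no user strokes); parameters are illustrative."""
            gray_l = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
            gray_r = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)
            sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
            disp = sgbm.compute(gray_l, gray_r).astype(np.float32) / 16.0  # fixed-point to pixels
            # Large disparity means a near object: use it as the probable foreground.
            mask = np.full(gray_l.shape, cv2.GC_PR_BGD, dtype=np.uint8)
            mask[disp > disp_thresh] = cv2.GC_PR_FGD
            bgd_model = np.zeros((1, 65), np.float64)
            fgd_model = np.zeros((1, 65), np.float64)
            cv2.grabCut(left_bgr, mask, None, bgd_model, fgd_model,
                        iters, cv2.GC_INIT_WITH_MASK)
            fg = np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)
            return fg * 255  # binary foreground (ROI) mask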

    Abstract (Chinese)
    Abstract (English)
    Acknowledgements
    Table of Contents
    List of Figures
    List of Tables
    Chapter 1 Introduction
        1.1 Research Background
        1.2 Research Motivation and Objectives
        1.3 Research Scope and Limitations
        1.4 Thesis Structure
        1.5 Published Papers
    Chapter 2 Literature Review
        2.1 Classification and Principles of Head-Mounted Displays
            2.1.1 Flat-Panel Display Technologies
            2.1.2 Types of Head-Mounted Displays
            2.1.3 Optical See-Through HMD Display Technologies
        2.2 HMD Design and Evaluation Parameters
        2.3 Display Image Quality Evaluation and Measurement
            2.3.1 Physical Measurement Standards for Displays
            2.3.2 HMD Measurement
            2.3.3 Tone Reproduction of See-Through Displays
        2.4 Image Segmentation Methods and Principles
            2.4.1 Classification of Image Segmentation Methods
            2.4.2 Threshold-Based Image Segmentation
            2.4.3 Region-Based Image Segmentation
            2.4.4 Graph- and Energy-Based Image Segmentation
            2.4.5 Depth-Based Image Segmentation
    Chapter 3 OST-HMD Image Rendering Model
        3.1 Research Objectives
        3.2 Related Work
        3.3 Research Design
            3.3.1 Image Rendering Parameter Design
            3.3.2 Virtual Image Display Position Calibration
            3.3.3 Side-by-Side Stereo Mode Adjustment Design
            3.3.4 Experimental Equipment and Participants
            3.3.5 Experimental Images
        3.4 Experimental Procedure
            3.4.1 Experimental Flow and Environmental Parameters
            3.4.2 Experimental Interface
        3.5 Experimental Results
            3.5.1 Image Rendering Parameter Adjustment Results
            3.5.2 Image Statistical Analysis and Rendering Parameter Modeling
        3.6 Summary
    Chapter 4 OST-HMD Image Optimization for Highlight Backgrounds
        4.1 Research Objectives
        4.2 Research Content
            4.2.1 Simulation of Objects with Different Glossiness
            4.2.2 High-Gloss Image Detection
            4.2.3 Dynamic Range Tone Adjustment
            4.2.4 Image Rendering Parameter Design
        4.3 Verification
            4.3.1 Physical Model Samples
            4.3.2 Image-Based High-Gloss Detection
            4.3.3 Relationship Between Detected Highlight Contours and Physical Parameters
        4.4 Research Design
            4.4.1 Experimental Equipment and Participants
            4.4.2 Experimental Images
            4.4.3 Experimental Flow and Environmental Parameters
        4.5 Experimental Results
            4.5.1 Image Rendering Parameter Adjustment Results
            4.5.2 Modeling of Rendering Parameters from Highlight Contours and Image Characteristics
        4.6 Summary
    Chapter 5 Image Segmentation for Head-Mounted Displays
        5.1 Research Objectives
        5.2 Research Content
            5.2.1 Single-Camera Image Capture and Stereo Alignment
            5.2.2 Depth-Map-Based Foreground Segmentation
            5.2.3 GrabCut Interactive Image Segmentation
        5.3 Experimental Procedure
            5.3.1 Experimental Images
            5.3.2 Experimental Flow
            5.3.3 Experimental Results
            5.3.4 OST-HMD Image Rendering Model
        5.4 Summary
    Chapter 6 Conclusions and Suggestions
        6.1 Conclusions
        6.2 Future Work
    References
    Appendix 1: Epson OST-HMD Specifications
    Appendix 2: 3ds Max Parameter Settings
    Appendix 3: OST-HMD Tone Measurement Data
    Appendix 4: OST-HMD Color Gamut Size

