
Author: Tzu-Han Pan (潘子涵)
Title: A Cruising Holographic Image Method Using Deep Learning GS Algorithm with Data Coordinate Augmentation (基於座標擴增優化DL-GSA實現可巡弋全像影像)
Advisor: Chien-Yu Chen (陳建宇)
Committee members: Chien-Yu Chen (陳建宇), Kuo-Jui Hu (胡國瑞), Hoang-Yan Lin (林晃巖), Hsuan-Ting Chang (張軒庭)
Degree: Master
Department: College of Applied Sciences, Graduate Institute of Color and Illumination Technology
Year of publication: 2023
Graduation academic year: 111 (2022–2023)
Language: Chinese
Pages: 56
Keywords: Computer-generated holography, Deep learning, Data augmentation, Holographic AR-HUD, Modified Gerchberg-Saxton algorithm, Holographic projection

The holographic augmented reality head-up display (AR-HUD) offers a smaller system volume, a longer imaging distance, and a simple optical structure, and it does not cause eye fatigue or discomfort, making holography one of the candidate solutions for AR-HUDs. However, applying computer-generated holography to AR-HUDs raises three problems: computational speed, image quality, and dynamic image shift. This thesis proposes a cruising holographic image method based on a deep-learning Gerchberg–Saxton (GS) algorithm with data coordinate augmentation: machine learning accelerates the generation of computer-generated holograms, and the target image can be manipulated through its spatial coordinates. With this approach, hologram generation time is reduced to a real-time level of 21 ms, and target images, together with their holograms, can be shifted dynamically. We also optimize the training dataset used for model training, reducing overlap in the data distribution, which lowers noise and improves image quality. In simulated reconstruction, the peak signal-to-noise ratio (PSNR) reaches 31 dB and the structural similarity index (SSIM) reaches 0.83. In optical experiments, the speckle contrast (SC) of the holographic images is below 9% in all cases.
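The thesis builds on the classic Gerchberg–Saxton (GS) algorithm [19], which DL-GSA accelerates with a neural network. For orientation, here is a minimal NumPy sketch of baseline GS phase retrieval for a phase-only Fourier hologram; function and parameter names are ours, not from the thesis, and the thesis's modified GS variant [20] differs in detail.

```python
import numpy as np

def gs_phase_hologram(target, iterations=50, seed=0):
    """Classic Gerchberg-Saxton phase retrieval for a Fourier hologram.

    Iterates between the hologram (SLM) plane and the image plane,
    enforcing unit amplitude at the SLM plane and the target amplitude
    at the image plane, keeping only the phase from each transform.
    """
    rng = np.random.default_rng(seed)
    amp = np.sqrt(np.asarray(target, dtype=float))   # target amplitude
    phase = rng.uniform(0.0, 2.0 * np.pi, amp.shape) # random initial phase
    field = amp * np.exp(1j * phase)                 # start in the image plane
    for _ in range(iterations):
        slm = np.fft.ifft2(field)                    # back to hologram plane
        slm = np.exp(1j * np.angle(slm))             # phase-only constraint
        img = np.fft.fft2(slm)                       # forward to image plane
        field = amp * np.exp(1j * np.angle(img))     # impose target amplitude
    return np.angle(slm)                             # phase-only hologram
```

The reconstruction is then `np.abs(np.fft.fft2(np.exp(1j * phi)))**2`; because this loop runs tens of FFT pairs per frame, replacing it with a single network inference is what brings the generation time down to the real-time level reported above.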

Abstract (Chinese) I
Abstract (English) II
Acknowledgments III
Table of Contents IV
List of Tables V
List of Figures VI
Chapter 1: Introduction 1
 1.1 Research Motivation and Objectives 1
 1.2 Review of the Development of Computer-Generated Holography 2
 1.3 Applications of Computer-Generated Holography in Automotive Head-Up Displays 3
 1.4 Thesis Organization 7
Chapter 2: Coordinate-Augmentation-Optimized DL-GSA 9
 2.1 DL-GSA 10
 2.2 2D Coordinate Positioning of Target Images 12
 2.3 Data Augmentation and Training Dataset Optimization 13
 2.4 Multi-Depth Training Procedure 17
Chapter 3: Verification and Implementation of the Holographic AR-HUD 19
 3.1 Optical System Architecture 19
 3.2 Computer-Simulated Reconstruction Results 21
 3.3 Optical Reconstruction Results 23
 3.4 Image Quality Evaluation 27
Chapter 4: Discussion 30
Chapter 5: Conclusions and Future Work 40
 5.1 Conclusions 40
 5.2 Future Work 40
References 42
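Chapter 2 covers 2D coordinate positioning of target images and data augmentation. The full text is under embargo, so the following is only a hypothetical illustration of what coordinate-augmented training data could look like: each sample pairs a translated target image with its normalized 2D coordinate label. All names, the zero-padded shift, and the labeling scheme are our assumptions, not the thesis's actual pipeline.

```python
import numpy as np

def shift_target(image, dx, dy):
    """Translate an image by (dx, dy) pixels with zero padding, so a
    hologram network can be trained on position-conditioned targets."""
    shifted = np.zeros_like(image)
    h, w = image.shape
    # source/destination offsets that keep the copy inside the frame
    ys, yd = max(0, -dy), max(0, dy)
    xs, xd = max(0, -dx), max(0, dx)
    hh, ww = h - abs(dy), w - abs(dx)
    shifted[yd:yd + hh, xd:xd + ww] = image[ys:ys + hh, xs:xs + ww]
    return shifted

def make_coordinate_augmented_set(image, shifts):
    """Build (shifted image, normalized 2D coordinate label) training pairs."""
    h, w = image.shape
    samples = []
    for dx, dy in shifts:
        label = np.array([dx / w, dy / h])  # normalized coordinate label
        samples.append((shift_target(image, dx, dy), label))
    return samples
```

Conditioning the network on such coordinate labels is one plausible way to obtain the "cruising" behavior described in the abstract, where the reconstructed image and its hologram follow a commanded position.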

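The abstract reports PSNR, SSIM, and speckle contrast (Section 3.4, Image Quality Evaluation). These metrics have standard definitions [41], [42]; the sketch below uses those standard formulas. Note that `ssim_global` evaluates SSIM over a single global window, whereas the full index of Wang et al. averages it over local windows, so the two can differ.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(ref, test, peak=255.0):
    """SSIM over one global window (the standard index averages local windows)."""
    x, y = np.asarray(ref, float), np.asarray(test, float)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2  # stabilizing constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def speckle_contrast(intensity):
    """Speckle contrast SC = sigma_I / mean_I over a nominally uniform region."""
    i = np.asarray(intensity, float)
    return i.std() / i.mean()
```

Under these definitions, the thesis's figures of merit are PSNR = 31 dB and SSIM = 0.83 in simulation, and SC < 9% (measured over uniform regions of the optical reconstruction).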
[1] S. Chan-Edmiston, S. Fischer, S. Sloan, and M. Wong, “Intelligent Transportation Systems (ITS) Joint Program Office: Strategic Plan 2020–2025,” Volpe National Transportation Systems Center, U.S. Department of Transportation, 2020, 49 pp. [Online]. Available: https://rosap.ntl.bts.gov/view/dot/63263 and https://trid.trb.org/view/2010098
[2] Y. J. Kim and H. S. Yoo, “Analysis of User Preference of AR Head-Up Display Using AttrakDiff,” in Intelligent Human Computer Interaction, M. Singh, D. K. Kang, J. H. Lee, U. S. Tiwary, D. Singh, and W. Y. Chung, Eds., Cham: Springer International Publishing, 2021, pp. 335–345.
[3] K. Aoyama, K. Yokoyama, T. Yano, and Y. Nakahata, “Eye-sensing light field display for spatial reality reproduction,” SID Symp. Dig. Tech. Pap., vol. 52, no. 1, pp. 669–672, 2021, doi: 10.1002/sdtp.14771.
[4] T. Balogh, P. T. Kovács, and Z. Megyesi, “HoloVizio 3D Display System,” in Proc. 1st Int. Conf. Immersive Telecommun. (ImmersCom 2007), 2007, pp. 3–6, doi: 10.4108/ICST.IMMERSCOM2007.2145.
[5] L. Yang, H. Dong, A. Alelaiwi, and A. El Saddik, “See in 3D: state of the art of 3D display technologies,” Multimed. Tools Appl., vol. 75, no. 24, pp. 17121–17155, 2016, doi: 10.1007/s11042-015-2981-y.
[6] J. Geng, “Volumetric 3D display for radiation therapy planning,” J. Disp. Technol., vol. 4, no. 4, pp. 437–450, 2008, doi: 10.1109/JDT.2008.922413.
[7] G. E. Favalora, “Volumetric 3D displays and application infrastructure,” Computer, vol. 38, no. 8, pp. 37–44, 2005, doi: 10.1109/MC.2005.276.
[8] D. M. Hoffman, A. R. Girshick, and M. S. Banks, “Vergence–accommodation conflicts hinder visual performance and cause visual fatigue,” J. Vis., vol. 8, no. 3, pp. 1–30, 2008, doi: 10.1167/8.3.33.
[9] T. Shibata, J. Kim, D. M. Hoffman, and M. S. Banks, “Visual discomfort with stereo displays: effects of viewing distance and direction of vergence-accommodation conflict,” Proc. SPIE 7863, Stereoscopic Displays and Applications XXII, p. 78630P, 2011, doi: 10.1117/12.872347.
[10] N. Takai and Y. Mifune, “Digital watermarking by a holographic technique,” Appl. Opt., vol. 41, no. 5, p. 865, 2002, doi: 10.1364/ao.41.000865.
[11] Y. Ding, D. D. Nolte, M. R. Melloch, and A. M. Weiner, “Time-domain image processing using dynamic holography,” IEEE J. Sel. Top. Quantum Electron., vol. 4, no. 2, pp. 332–340, 1998, doi: 10.1109/2944.686739.
[12] A. Alfalou and C. Brosseau, “Optical image compression and encryption methods,” Adv. Opt. Photonics, vol. 1, no. 3, p. 589, 2009, doi: 10.1364/aop.1.000589.
[13] B. Javidi, “Compression of encrypted three-dimensional objects using digital holography,” Opt. Eng., vol. 43, no. 10, p. 2233, 2004, doi: 10.1117/1.1783280.
[14] M. Lucente, “Interactive three-dimensional holographic displays: seeing the future in depth,” ACM SIGGRAPH Comput. Graph., May 1997, doi: 10.1145/271283.271312.
[15] C. W. Christenson et al., “Materials for an updatable holographic 3D display,” J. Disp. Technol., vol. 6, no. 10, pp. 510–516, 2010, doi: 10.1109/JDT.2010.2046620.
[16] A. Elmorshidy, “Holographic Projection Technology: The World is Changing,” vol. 2, no. 2, 2010. [Online]. Available: http://arxiv.org/abs/1006.0846
[17] E. Bruckheimer et al., “Computer-generated real-time digital holography: First time use in clinical medical imaging,” Eur. Heart J. Cardiovasc. Imaging, vol. 17, no. 8, pp. 845–849, 2016, doi: 10.1093/ehjci/jew087.
[18] M. Paturzo et al., “Digital holography, a metrological tool for quantitative analysis: Trends and future applications,” Opt. Lasers Eng., vol. 104, pp. 32–47, 2018, doi: 10.1016/j.optlaseng.2017.11.013.
[19] R. W. Gerchberg and W. O. Saxton, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik, vol. 35, no. 2, pp. 237–246, 1972.
[20] H.-E. Hwang, H. T. Chang, and W.-N. Lie, “Fast double-phase retrieval in Fresnel domain using modified Gerchberg-Saxton algorithm for lensless optical security systems,” Opt. Express, vol. 17, no. 16, pp. 13700–13710, 2009, doi: 10.1364/OE.17.013700.
[21] K. Matsushima, H. Schimmel, and F. Wyrowski, “Fast calculation method for optical diffraction on tilted planes by use of the angular spectrum of plane waves,” J. Opt. Soc. Am. A, vol. 20, no. 9, p. 1755, 2003, doi: 10.1364/josaa.20.001755.
[22] K. Matsushima and T. Shimobaba, “Band-limited angular spectrum method for numerical simulation of free-space propagation in far and near fields,” Opt. Express, vol. 17, no. 22, p. 19662, 2009, doi: 10.1364/oe.17.019662.
[23] R. Horisaki, R. Takagi, and J. Tanida, “Deep-learning-generated holography,” Appl. Opt., vol. 57, no. 14, pp. 3859–3863, 2018, doi: 10.1364/AO.57.003859.
[24] I. Moon, K. Jaferzadeh, Y. Kim, and B. Javidi, “Noise-free quantitative phase imaging in Gabor holography with conditional generative adversarial network,” Opt. Express, vol. 28, no. 18, p. 26284, 2020, doi: 10.1364/oe.398528.
[25] J. W. Kang, B. S. Park, J. K. Kim, D. W. Kim, and Y. H. Seo, “Deep-learning-based hologram generation using a generative model,” Appl. Opt., vol. 60, no. 24, pp. 7391–7399, 2021, doi: 10.1364/AO.427262.
[26] A. Khan, Z. Zhijiang, Y. Yu, M. A. Khan, K. Yan, and K. Aziz, “GAN-Holo: Generative Adversarial Networks-Based Generated Holography Using Deep Learning,” Complexity, vol. 2021, 2021, doi: 10.1155/2021/6662161.
[27] Y. Ishii, T. Shimobaba, D. Blinder, T. Birnbaum, and P. Schelkens, “Optimization of phase-only holograms calculated with scaled diffraction calculation through deep neural networks,” Appl. Phys. B, vol. 128, no. 2, pp. 1–11, 2022, doi: 10.1007/s00340-022-07753-7.
[28] X. Sun et al., “Dual-task convolutional neural network based on the combination of the U-Net and a diffraction propagation model for phase hologram design with suppressed speckle noise,” Opt. Express, vol. 30, no. 2, pp. 2646–2658, 2022, doi: 10.1364/OE.440956.
[29] A. Yolalmaz and E. Yüce, “Comprehensive deep learning model for 3D color holography,” Sci. Rep., vol. 12, pp. 1–9, 2022, doi: 10.1038/s41598-022-06190-y.
[30] D. Beck, J. Jung, J. Park, and W. Park, “A Study on User Experience of Automotive HUD Systems: Contexts of Information Use and User-Perceived Design Improvement Points,” Int. J. Hum.-Comput. Interact., vol. 35, no. 20, pp. 1936–1946, 2019, doi: 10.1080/10447318.2019.1587857.
[31] J. Pullukat, S. Tanaka, and J. Jiang, “P-25: Effects of Image Distance on Cognitive Tunneling with Augmented Reality Head Up Displays,” SID Symp. Dig. Tech. Pap., vol. 51, no. 1, pp. 1427–1430, Aug. 2020, doi: 10.1002/sdtp.14155.
[32] Y. Shin, Y. Jiang, Q. Wang, Z. Zhou, G. Qin, and D. K. Yang, “Flexoelectric-effect-based light waveguide liquid crystal display for transparent display,” Photonics Res., vol. 10, no. 2, p. 407, 2022, doi: 10.1364/prj.426780.
[33] N. Ledentsov, V. A. Shchukin, I. E. Titkov, N. N. Ledentsov, and U. D. Zeitner, “Hyperchromatic multifocal 3D display for augmented reality applications,” Proc. SPIE, p. 1193104, 2022, doi: 10.1117/12.2612340.
[34] R. Fan et al., “Automated design of freeform imaging systems for automotive heads-up display applications,” Opt. Express, vol. 31, no. 6, p. 10758, 2023, doi: 10.1364/oe.484777.
[35] W. Wang, X. Zhu, K. Chan, and P. Tsang, “Digital Holographic System for Automotive Augmented Reality Head-Up-Display,” in Proc. IEEE Int. Symp. Ind. Electron. (ISIE), 2018, pp. 1327–1330, doi: 10.1109/ISIE.2018.8433601.
[36] Z. Lv, Y. Xu, Y. Yang, and J. Liu, “Multiplane holographic augmented reality head-up display with a real–virtual dual mode and large eyebox,” Appl. Opt., vol. 61, no. 33, pp. 9962–9971, 2022.
[37] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
[38] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention (MICCAI 2015), Part III, Springer, 2015, pp. 234–241.
[39] C. H. Chuang, C. Y. Chen, S. T. Li, H. T. Chang, and H. Y. Lin, “Miniaturization and image optimization of a full-color holographic display system using a vibrating light guide,” Opt. Express, vol. 30, no. 23, p. 42129, 2022, doi: 10.1364/oe.473150.
[40] S. T. Welstead, Fractal and Wavelet Image Compression Techniques, vol. 40. SPIE Press, 1999.
[41] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE Trans. Image Process., vol. 13, no. 4, pp. 600–612, 2004, doi: 10.1109/TIP.2003.819861.
[42] J. W. Goodman, “Statistical properties of laser speckle patterns,” in Laser Speckle and Related Phenomena, J. C. Dainty, Ed., Berlin, Heidelberg: Springer, 1975, pp. 9–75, doi: 10.1007/978-3-662-43205-1_2.
[43] F. Riechert, G. Bastian, and U. Lemmer, “Laser speckle reduction via colloidal-dispersion-filled projection screens,” Appl. Opt., vol. 48, no. 19, pp. 3742–3749, 2009, doi: 10.1364/AO.48.003742.
[44] A. Ziebinski, R. Cupek, H. Erdogan, and S. Waechter, “A Survey of ADAS Technologies for the Future Perspective of Sensor Fusion,” in Computational Collective Intelligence, N. T. Nguyen, L. Iliadis, Y. Manolopoulos, and B. Trawiński, Eds., Cham: Springer International Publishing, 2016, pp. 135–146.
[45] L. Li, D. Wen, N. N. Zheng, and L. C. Shen, “Cognitive cars: A new frontier for ADAS research,” IEEE Trans. Intell. Transp. Syst., vol. 13, no. 1, pp. 395–407, 2012, doi: 10.1109/TITS.2011.2159493.
[46] A. Ziebinski, R. Cupek, D. Grzechca, and L. Chruszczyk, “Review of advanced driver assistance systems (ADAS),” AIP Conf. Proc., vol. 1906, 2017, doi: 10.1063/1.5012394.
[47] 鄭晴文, “A Study on Deep Learning Applied to Holographic Projection” (in Chinese), Master's thesis, Graduate Institute of Color and Illumination Technology, National Taiwan University of Science and Technology, 2022. [Online]. Available: https://hdl.handle.net/11296/2ebc2m
[48] C. W. Cheng, T. A. Chou, C. H. Chuang, T. H. Pan, and C. Y. Chen, “High Speed Computer Generated Holography Using Convolutional Neural Networks,” poster presented at Optics & Photonics Taiwan International Conference (OPTIC), Zhongli, Taiwan, 2022.

Full-text release date: 2033/08/18 (campus network)
Full-text release date: 2073/08/18 (off-campus network)
Full-text release date: 2073/08/18 (National Central Library: Taiwan NDLTD system)