
Author: Hsin-Yuan Chang (張欣媛)
Thesis Title: Portrait Image Relighting Based on Simplified Stereo Photometry (應用簡化立體光度法於肖像光影重建系統)
Advisor: Tzung-Han Lin (林宗翰)
Committee Members: Mei-Chun Lo (羅梅君), Hung-Shing Chen (陳鴻興), Pei-Li Sun (孫沛立)
Degree: Master
Department: College of Applied Sciences - Graduate Institute of Color and Illumination Technology
Publication Year: 2021
Graduation Academic Year: 109
Language: Chinese
Pages: 91
Chinese Keywords: 肖像照, 光影重建, 渲染, 法向量貼圖, 立體光度法
English Keywords: Portrait, Relighting, Render, Normal Map, Photometric Stereo

With advances in technology and the improvement and spread of smartphone cameras, hardware, and software, photography has become part of everyday life. However, the lighting problems involved in photo retouching and image compositing remain important topics in academia, the film and television industry, and related fields. Considerable effort from both academia and industry has gone into these problems, with significant improvements, but the cost is so high that such solutions remain limited to research and professional use and are neither accessible nor affordable for the general public.
To this end, this study proposes a portrait relighting system based on real physical reflection properties. A simplified hardware setup of one digital single-lens reflex (DSLR) camera and five lights captures and processes the images to obtain a diffuse map and a normal map, which are then rendered by relighting software. This architecture is suitable for simple photo studios such as photo sticker machines and photo booths.
This study develops a relighting system for portrait photography, investigating the hardware and the software separately. For the hardware, an ideal sphere serves as the calibration target, and a simplified photometric stereo method computes the sphere's normal vectors. Different combinations of exposure conditions were then tested to find the capture settings that minimize the normal map error, bringing the overall mean normal error during shooting below 8.04 degrees. For the software, the portrait photo and normal map captured by the hardware are rendered under a single simulated light source placed at various angles to relight the portrait. To evaluate performance, a human factors experiment with 24 adults examined conditions including facial expression, makeup, and light source angle, and compared the results one-to-one against the DPR technique of Adobe Inc. in terms of perceived depth. The results show that facial expression, makeup, and light source angle all affect subjects' choices. Across all experimental samples, the proposed method was preferred over Adobe's DPR technique in 48.9% of cases: 66.25% in the laboratory questionnaire and 40.25% in the remote online questionnaire.
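The abstract describes recovering a normal map from images taken under several known lights. As a reference point, the classic Lambertian photometric stereo formulation can be sketched as below; this is a minimal illustration of the general technique, not the thesis's exact five-light pipeline, and the array shapes and function name are assumptions:

```python
import numpy as np

def estimate_normals(images, light_dirs):
    """Classic Lambertian photometric stereo: given K grayscale images of the
    same scene lit from K known directions, recover per-pixel surface normals.
    images:     (K, H, W) array of pixel intensities
    light_dirs: (K, 3) array of unit light-direction vectors
    Returns an (H, W, 3) array of unit normals and an (H, W) albedo map."""
    K, H, W = images.shape
    I = images.reshape(K, -1)                  # (K, H*W)
    # Lambert's law: I = L @ (albedo * n); solve least squares per pixel.
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # (3, H*W)
    albedo = np.linalg.norm(G, axis=0)                  # |albedo * n| = albedo
    n = G / np.maximum(albedo, 1e-8)                    # unit normals
    return n.T.reshape(H, W, 3), albedo.reshape(H, W)
```

With at least three non-coplanar lights the per-pixel system is determined; extra lights (such as the five used here) over-determine it, which is what makes the least-squares solve meaningful.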


With the development of technology, smartphones equipped with high-performance cameras and processors have made photography a significant part of everyday life. However, the lighting issues involved in beautifying photos and in photo fusion remain important topics in academic research and industry. Scholars and engineers have spent considerable resources overcoming these defects, and the improvements are significant, but the price of the professional equipment involved is not affordable for the general public.
To this end, this thesis proposes a portrait relighting system based on physical reflection properties. We use a controllable DSLR camera and five dedicated light sources to capture portrait images and obtain diffuse and normal maps. A software algorithm is then proposed for reconstructing environmental lighting and shadow. This architecture is suitable for photo studios such as photo sticker vending machines and photo booths.
Our portrait relighting system involves both hardware design and software algorithms. In the hardware design, to improve the accuracy of normal map estimation, we calibrated the system with a white-coated sphere and a simplified photometric stereo method. We collected and compared the estimated normal maps under different exposure conditions and obtained shooting parameters that minimize the normal map error; the overall error was as small as 8.04 degrees. For the relighting software, we imported the diffuse and normal maps acquired by the proposed device into a virtual graphics environment and simulated a target under a single light source at different positions. To evaluate performance, we rendered different conditions, including facial expressions, makeup, and several lighting directions. In total, 24 adult subjects were recruited for human factors experiments comparing the proposed method with the DPR method of Adobe Inc. The results show that facial expression and makeup significantly affect subjects' preferences. In the overall evaluation of preferred images, the proposed method outperformed the Adobe DPR method on 48.9% of images. The subjective questionnaires were divided into two groups: one-third were conducted in a controlled laboratory environment and two-thirds via online pre-rendered content. In the laboratory environment, the proposed method was preferred on 66.25% of images; in the online questionnaires, on 40.25%.
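The relighting step renders the captured diffuse and normal maps under one directional light at an arbitrary position. A minimal sketch of single-light Lambertian shading conveys the idea; the `ambient` term, function signature, and clamping are illustrative assumptions, not the thesis's actual renderer:

```python
import numpy as np

def relight(diffuse, normals, light_dir, ambient=0.1):
    """Re-render a portrait under one directional light using its diffuse
    (albedo) map and normal map, with simple Lambertian shading.
    diffuse:   (H, W, 3) albedo values in [0, 1]
    normals:   (H, W, 3) unit surface normals
    light_dir: 3-vector pointing from the surface toward the light
    Returns an (H, W, 3) relit image in [0, 1]."""
    l = np.asarray(light_dir, float)
    l = l / np.linalg.norm(l)
    # n . l, clamped to zero for surfaces facing away from the light
    shade = np.clip(normals @ l, 0.0, None)[..., None]
    return np.clip(diffuse * (ambient + (1.0 - ambient) * shade), 0.0, 1.0)
```

Sweeping `light_dir` over a set of angles reproduces the kind of single-light configurations evaluated in the experiments, with the normal map supplying the per-pixel geometry that a flat photograph lacks.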

Abstract (in Chinese) I
Abstract II
Acknowledgments IV
Table of Contents V
List of Figures VIII
List of Tables XII
Chapter 1 Introduction 1
  1.1 Research Background 1
  1.2 Motivation and Objectives 3
  1.3 Thesis Organization 6
Chapter 2 Literature Review 7
  2.1 Physically Based Rendering Maps 7
    2.1.1 Diffuse Maps 8
    2.1.2 Normal Maps 9
    2.1.3 Bump Maps 12
    2.1.4 Displacement Maps 13
    2.1.5 Physical Material Properties 13
  2.2 Relighting 14
    2.2.1 Image Processing Methods 14
    2.2.2 Photometric Stereo 16
    2.2.3 Artificial Intelligence Methods 19
  2.3 Portrait Photography Essentials 23
    2.3.1 Lighting 23
    2.3.2 Lens Filters 25
    2.3.3 Capture Geometry 27
Chapter 3 Relighting System 28
  3.1 Image Capture System Design (Hardware) 28
    3.1.1 Hardware Architecture 28
    3.1.2 Capture Workflow 30
    3.1.3 Normal Map Generation Workflow 31
  3.2 Relighting Rendering Software Design 35
    3.2.1 Relighting Rendering Workflow 35
    3.2.2 Relighting Rendering Parameter Settings 37
Chapter 4 Experimental Design 40
  4.1 Normal Map Experiment 40
    4.1.1 Purpose 40
    4.1.2 Equipment 40
    4.1.3 Parameters 41
  4.2 Relighting Experiment 42
    4.2.1 Purpose 42
    4.2.2 Comparison Target 42
    4.2.3 Questionnaire Preparation 43
    4.2.4 Experimental Design 45
    4.2.5 Equipment 46
    4.2.6 Subjects 47
    4.2.7 Procedure 48
Chapter 5 Results and Discussion 49
  5.1 Normal Map Results 49
  5.2 Relighting Results 52
  5.3 Human Factors Experiment Results 55
Chapter 6 Conclusions and Future Work 60
References 61
Appendix 65

[1] T. Pereira, E. V. Brazil, I. MacÊdo, M. C. Sousa, L. H. de Figueiredo, and L. Velho, “Sketch-based warping of RGBN images,” Graphical Models, vol. 73, no. 4, 2011, doi: 10.1016/j.gmod.2010.11.001.
[2] R. O'Sullivan, L. M. Tom, V. Y. Bunya, W. C. Nyberg, M. Massaro-Giordano, E. Daniel, E. Smith, D. H. Brainard, J. Gee, M. G. Maguire, R. A. Stone, “Use of crossed polarizers to enhance images of the eyelids,” Cornea, vol. 36, no. 5, pp. 631–635, 2017, doi: 10.1097/ICO.0000000000001157.
[3] J. Sun and P. Perona, “Where is the sun?,” Nature Neuroscience, vol. 1, no. 3, 1998, doi: 10.1038/630.
[4] I. C. McManus, J. Buckman, and E. Woolley, “Is light in pictures presumed to come from the left side?,” Perception, vol. 33, no. 12, 2004, doi: 10.1068/p5289.
[5] J. F. Blinn, “Simulation of wrinkled surfaces,” in Comput Graph (ACM), 1978, vol. 12, no. 3, doi: 10.1145/965139.507101.
[6] M. Okabe, G. Zeng, Y. Matsushita, T. Igarashi, L. Quan, and H. Y. Shum, “Single-view relighting with normal map painting,” Computer Graphics Forum (Proc. Pacific Graphics), pp. 27–34, 2006.
[7] T.-P. Wu, C.-K. Tang, M. S. Brown, and H.-Y. Shum, “ShapePalettes: Interactive normal transfer via sketching,” ACM Trans. Graph., vol. 26, no. 3, p. 44, 2007.
[8] D. Sýkora, L. Kavan, M. Čadík, O. Jamriška, A. Jacobson, B. Whited, M. Simmons, O. Sorkine-Hornung, “Ink-and-ray: Bas-relief meshes for adding global illumination effects to hand-drawn characters,” ACM Trans. Graph., vol. 33, no. 2, pp. 1–15, 2014.
[9] F. Solomon and K. Ikeuchi, “Extracting the shape and roughness of specular lobe objects using four light photometric stereo,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1992, vol. 1992-June, pp. 466–471. doi: 10.1109/CVPR.1992.223149.
[10] A. el Gendy and A. Shalaby, “Mean profile depth of pavement surface macrotexture using photometric stereo techniques,” Journal of Transportation Engineering, vol. 133, no. 7, 2007, doi: 10.1061/(ASCE)0733-947X(2007)133:7(433).
[11] J. Sun, M. Smith, L. Smith, S. Midha, and J. Bamber, “Object surface recovery using a multi-light photometric stereo technique for non-Lambertian surfaces subject to shadows and specularities,” Image and Vision Computing, vol. 25, no. 7, 2007, doi: 10.1016/j.imavis.2006.04.025.
[12] G. A. Atkinson, M. F. Hansen, M. L. Smith, and L. N. Smith, “An efficient and practical 3D face scanner using near infrared and visible photometric stereo,” in Procedia Computer Science, 2010, vol. 2. doi: 10.1016/j.procs.2010.11.003.
[13] A. Jones, G. Fyffe, X. Yu, W.-C. Ma, J. Busch, R. Ichikari, M. Bolas, and P. Debevec, “Head-mounted photometric stereo for performance capture,” 2011. doi: 10.1109/CVMP.2011.24.
[14] A. Ghosh, G. Fyffe, B. Tunwattanapong, J. Busch, X. Yu, and P. Debevec, “Multiview Face Capture using Polarized Spherical Gradient Illumination,” ACM Transactions on Graphics, vol. 30, no. 6, 2011, doi: 10.1145/2070781.2024163.
[15] J. Riviere, P. Gotardo, D. Bradley, A. Ghosh, and T. Beeler, “Single-shot high-quality facial geometry and skin appearance capture,” ACM Transactions on Graphics, vol. 39, no. 4, 2020, doi: 10.1145/3386569.3392464.
[16] P. Debevec, T. Hawkins, C. Tchou, H.-P. Duiker, W. Sarokin, and M. Sagar, “Acquiring the reflectance field of a human face,” in Proceedings of the 27th annual conference on Computer graphics and interactive techniques - SIGGRAPH ’00, 2000, pp. 145–156. doi: 10.1145/344779.344855.
[17] P. Debevec, A. Wenger, C. Tchou, A. Gardner, J. Waese, and T. Hawkins, “A lighting reproduction approach to live-action compositing,” ACM Transactions on Graphics, vol. 21, no. 3, pp. 547–556, Jul. 2002, doi: 10.1145/566654.566614.
[18] D. Vlasic, P. Peers, I. Baran, P. Debevec, J. Popović, S. Rusinkiewicz, and W. Matusik, “Dynamic shape capture using multi-view photometric stereo,” in ACM SIGGRAPH Asia 2009 papers on - SIGGRAPH Asia ’09, 2009, vol. 28, no. 5, p. 1. doi: 10.1145/1661412.1618520.
[19] K. Guo, P. Lincoln, P. Davidson, J. Busch, X. Yu, M. Whalen, G. Harvey, S. Orts-Escolano, R. Pandey, J. Dourgarian, D. Tang, A. Tkach, A. Kowdle, E. Cooper, M. Dou, S. Fanello, G. Fyffe, C. Rhemann, J. Taylor, P. Debevec, and S. Izadi, “The relightables: volumetric performance capture of humans with realistic relighting,” ACM Transactions on Graphics, vol. 38, no. 6, Nov. 2019, doi: 10.1145/3355089.3356571.
[20] H. Zhou, S. Hadap, K. Sunkavalli, and D. Jacobs, “Deep single-image portrait relighting,” in Proceedings of the IEEE International Conference on Computer Vision, Oct. 2019, vol. 2019-October, pp. 7193–7201. doi: 10.1109/ICCV.2019.00729.
[21] A. Shashua and T. Riklin-Raviv, “The quotient image: class-based re-rendering and recognition with varying illuminations,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 2, 2001, doi: 10.1109/34.908964.
[22] A. Meka, C. Häne, R. Pandey, M. Zollhöfer, S. Fanello, G. Fyffe, A. Kowdle, X. Yu, J. Busch, J. Dourgarian, P. Denny, S. Bouaziz, P. Lincoln, M. Whalen, G. Harvey, J. Taylor, S. Izadi, A. Tagliasacchi, P. Debevec, C. Theobalt, J. Valentin, and C. Rhemann, “Deep reflectance fields: High-quality facial reflectance field inference from color gradient illumination,” ACM Transactions on Graphics, vol. 38, no. 4, 2019, doi: 10.1145/3306346.3323027.
[23] T. Sun, J. T. Barron, Y.-T. Tsai, Z. Xu, X. Yu, G. Fyffe, C. Rhemann, J. Busch, P. Debevec, and R. Ramamoorthi, “Single image portrait relighting,” ACM Transactions on Graphics, vol. 38, no. 4, Jul. 2019, doi: 10.1145/3306346.3323008.
[24] C. LeGendre, W.-C. Ma, G. Fyffe, J. Flynn, L. Charbonnel, J. Busch, and P. Debevec, “Deeplight: Learning illumination for unconstrained mobile mixed reality,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2019, vol. 2019-June. doi: 10.1109/CVPR.2019.00607.
[25] G. Chen, M. Waechter, B. Shi, K. Y. K. Wong, and Y. Matsushita, “What Is Learned in Deep Uncalibrated Photometric Stereo?,” in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2020, vol. 12359 LNCS. doi: 10.1007/978-3-030-58568-6_44.
[26] A. Meka, R. Pandey, C. Häne, S. Orts-Escolano, P. Barnum, P. David-Son, D. Erickson, Y. Zhang, J. Taylor, S. Bouaziz, C. Legendre, W.-C. Ma, R. Overbeck, T. Beeler, P. Debevec, S. Izadi, C. Theobalt, C. Rhemann, and S. Fanello, “Deep relightable textures: volumetric performance capture with neural rendering,” ACM Transactions on Graphics, vol. 39, no. 6, Nov. 2020, doi: 10.1145/3414685.3417814.
[27] C. LeGendre, W.-C. Ma, R. Pandey, S. Fanello, C. Rhemann, J. Dourgarian, J. Busch, and P. Debevec, “Learning Illumination from Diverse Portraits,” Dec. 2020. doi: 10.1145/3410700.3425432.
[28] N. Fancher, “Introduction: Why Hard Light?,” in Studio Anywhere 2: Hard Light: A Photographer's Guide to Shaping Hard Light. isbn: 9781681982267.
[29] S. T. McHugh, “Introduction to portrait lighting,” in Understanding Photography: Master Your Digital Camera and Capture That Perfect Photo. isbn: 9781593278953.
[30] S. T. McHugh, “Lens filter,” in Understanding Photography: Master Your Digital Camera and Capture That Perfect Photo. isbn: 9781593278953.
[31] H. Joe, K. C. Seo, and S. W. Lee, “Interactive Rembrandt lighting design,” in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2005, vol. 3767 LNCS. doi: 10.1007/11581772_30.
[32] R. Ball, C. Shu, P. Xi, M. Rioux, Y. Luximon, and J. Molenbroek, “A comparison between Chinese and Caucasian head shapes,” Applied Ergonomics, vol. 41, no. 6, 2010, doi: 10.1016/j.apergo.2010.02.002.
