
Author: Chih-Wei Chang (張智惟)
Title: Appearance-Driven Synthesis of Dense Foliage (外觀驅動的茂密樹冠合成)
Advisor: Yu-Chi Lai (賴祐吉)
Committee: Yu-Chi Lai (賴祐吉), Yi-Ling Chen (陳怡伶), Shih-Syun Lin (林士勛)
Degree: Master
Department: Department of Computer Science and Information Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2022
Academic Year of Graduation: 110
Language: Chinese
Pages: 102
Keywords: Appearance-Driven, Image-Based, Proxy, Synthesis, Visual Effect



In landscape design, aerial photography is commonly used to reconstruct the actual scene so that the visual effect of the real environment can be inspected directly during design. The advantage of this choice is that edited objects correspond more accurately to the real scene, and the result provides a clearer basis for subsequent construction. However, real scenes often call for added greenery or already contain many trees, and the dense canopies formed by their foliage usually serve only as a visual reference for the overall landscape design: precise detail is unnecessary, and heavy resource use is undesirable. Yet reproducing the visual effect of the intricate leaf detail of a real canopy typically requires reconstruction from a very dense set of photographs, which in turn consumes a large amount of memory. To synthesize the visual effect of a real canopy effectively within a small memory budget, this thesis proposes a tree-synthesis method driven by the tree's appearance from all viewpoints. By observing which visual effects matter to human viewers, the method synthesizes trees while preserving those important effects, and represents the effects to which viewers are relatively insensitive with given rules or even randomness. The canopy proxy is composed of multiple lobes, each represented as a semi-ellipsoid, and each lobe is fitted to silhouettes from different viewpoints so that the proxy effectively covers the canopy. A user-supplied leaf exemplar serves as the instance, and a large number of instances are distributed at render time by given rules, saving the memory that would otherwise record per-instance geometry.
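The memory saving from rule-based, render-time instancing can be illustrated with a minimal sketch: instead of storing per-leaf positions, a fixed seed regenerates the same layout inside a semi-ellipsoidal lobe every frame. The function name, parameters, and rejection-sampling rule are illustrative assumptions, not the thesis's actual implementation.

```python
import random

def lobe_instances(center, radii, count, seed):
    """Deterministically scatter leaf instances inside a semi-ellipsoidal
    lobe (flat side facing down). Regenerating from the seed at render
    time avoids storing per-instance geometry."""
    rng = random.Random(seed)  # fixed seed -> identical layout every frame
    cx, cy, cz = center
    rx, ry, rz = radii
    points = []
    while len(points) < count:
        # rejection-sample the unit upper half-ball, then scale to the lobe
        x, y, z = rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(0, 1)
        if x * x + y * y + z * z <= 1.0:
            points.append((cx + rx * x, cy + ry * y, cz + rz * z))
    return points
```

Because the generator is deterministic, two calls with the same seed produce the same instance set, so only the lobe parameters and the seed need to be stored.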
In addition, by mapping the photographs from all viewpoints onto the instances, each instance's mapping result is reduced to a single representative color, which serves as that instance's illumination. The instances are also grouped into lobes that record the reflected light seen from every captured viewpoint, and the reflected light for an arbitrary viewpoint is approximated by a weighted average of a lobe's reflected light at the captured viewpoints. During rendering, the shading of each instance combines its synthesized illumination with the corresponding lobe's reflected light computed in real time; together these two approximations express the overall lighting of the canopy with relatively little memory. Finally, in a user study spanning six scenes with large, medium, and small trees, subjects compared this method against two methods that use more memory: branch-based realistic tree modeling and surface-based reconstruction from aerial photographs. Statistical analysis confirmed that the proposed method achieves the best visual quality.
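The view interpolation step can be sketched as follows: a lobe's reflected light for an arbitrary view direction is a weighted average of the colors recorded from the captured viewpoints, with weights that fall off as the query direction moves away from a captured one. The weighting kernel (clamped cosine raised to a power) and all names are illustrative assumptions, not the thesis's exact scheme.

```python
def blended_reflection(view_dir, captured):
    """Approximate a lobe's reflected light for an arbitrary view direction.
    `captured` is a list of (unit_view_dir, (r, g, b)) pairs recorded from
    the photographed viewpoints; nearer directions receive larger weights."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    # clamped cosine similarity, sharpened so close views dominate
    weights = [max(dot(view_dir, d), 0.0) ** 4 for d, _ in captured]
    total = sum(weights) or 1.0  # avoid division by zero for back-facing views
    color = [0.0, 0.0, 0.0]
    for w, (_, c) in zip(weights, captured):
        for i in range(3):
            color[i] += w * c[i]
    return tuple(ch / total for ch in color)
```

Querying exactly at a captured viewpoint returns that viewpoint's recorded color, while intermediate directions blend smoothly between neighbors, which is the behavior the per-lobe weighted average needs.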

Table of Contents
Abstract (Chinese)
Abstract
Acknowledgments
Table of Contents
List of Figures
List of Tables
1 Introduction
  1.1 Observations
  1.2 Problem Definition
  1.3 Main Contributions
  1.4 Thesis Organization
2 Related Work
  2.1 Branch-Based Modeling
    2.1.1 Procedural Reconstruction
    2.1.2 Geometry-Based Extraction
    2.1.3 Image-Based Modeling
  2.2 Surface-Based Reconstruction
  2.3 Image-Based Rendering
3 Method Overview
4 Optimizing the Lobe-Based Proxy
  4.1 The Lobe-Based Proxy
    4.1.1 Representing a Lobe as a Semi-Ellipsoid
  4.2 Genetic Algorithm
    4.2.1 Chromosome Definition
    4.2.2 Population Initialization
    4.2.3 Fitness Function
    4.2.4 Crossover
    4.2.5 Mutation
5 Generating Leaf Instances
  5.1 Instancing
  5.2 Pruning
6 Computing Leaf and Lobe Colors
  6.1 Image-Based Texture Mapping
  6.2 Leaf Diffuse Reflection
  6.3 Lobe Lighting Effects
7 Real-Time Rendering of the Synthesized Canopy
  7.1 Real-Time Instance Generation
  7.2 Real-Time Reflected Light for Arbitrary Viewpoints
  7.3 Real-Time Ambient Occlusion
8 Experimental Results and Analysis
  8.1 Experimental Results
    8.1.1 Analysis of Results
    8.1.2 Comparison of Results
  8.2 User Study
    8.2.1 Questionnaire Design
    8.2.2 Statistical Analysis
  8.3 Evaluation
    8.3.1 Color Distribution
    8.3.2 Sharpness
    8.3.3 Deformation
    8.3.4 Shape
    8.3.5 Silhouette
    8.3.6 Analysis of Results
9 Conclusion and Future Work
References


Full text available from 2025/09/29 (campus network)
Full text available from 2025/09/29 (off-campus network)
Full text available from 2025/09/29 (National Central Library: Taiwan NDLTD system)