
Author: 宋政哲 (Cheng-Che Sung)
Title: 基於深度學習的場景點雲結構化策略
Deep Learning-based Point Cloud Structurization
Advisor: 莊子毅 (Tzu-Yi Chuang)
Committee: 趙鍵哲 (Jen-Jer Jaw), 謝佑明 (Yo-Ming Hsieh), 陳鴻銘 (Hung-Ming Chen)
Degree: Master
Department: College of Engineering - Department of Civil and Construction Engineering
Year of Publication: 2020
Academic Year of Graduation: 108
Language: Chinese
Pages: 109
Keywords (Chinese): 點雲結構化、三維空間深度學習、建築資訊模型、點雲角點偵測、點雲角點向量化
Keywords (English): 3D deep learning, point cloud reconstruction, building information modeling, point cloud corner detection, point cloud corner vectorization
    With the advance of 3D sensing technology, 3D point cloud analysis has become a major trend in scene understanding, object classification, and recognition. However, the random and uneven nature of point clouds, together with their sheer data volume, typically calls for structurization and other pre-processing before spatial-information applications. Conventional point cloud structurization is tedious and requires manual intervention to extract reliable point, line, and surface features and to reconstruct the relationships among them. Moreover, existing methods rarely provide the semantic class of point cloud objects directly, and most reconstruct only to level of detail 2 (LOD2). This study therefore proposes a deep-learning-based point cloud structurization method: the point clouds used to train the network are generated from existing building information modeling (BIM) models, and basic building structures such as slabs, beams, columns, and walls serve as the structurization targets.
    The structurization procedure is divided by purpose into two deep learning stages: a point cloud corner extraction model and a point cloud corner vectorization model. Training point clouds are generated automatically from BIM models, alleviating the scarcity of point cloud training data in current deep learning practice. Tests on real data verify the feasibility of the proposed method. In the classification metrics, the average precision (AP) of every class exceeds 50%, with the wall class reaching 76.8%; in the corner-position regression metrics, the error of predicted corner positions never exceeds 25 cm, and the corner error of the beam class is below 10 cm; the link-prediction accuracy of the corner vectorization model exceeds 70%. The proposed learning-based strategy can thus produce geometric models of target objects automatically, serve as a base model for subsequent applications, substantially raise the level of automation, and lower the professional barrier to geometric modeling.


    With the progress of 3D sensing technology, 3D point cloud analysis is playing an important role in scene understanding, object classification, and identification. However, the random and uneven characteristics of point clouds and their huge data volume often require reconstruction and other pre-processing before subsequent spatial-information applications. Current point cloud reconstruction procedures are quite complicated and demand manual intervention to acquire reliable geometric features of points, lines, and planes, and the relationships among these features must additionally be established to achieve LoD 2 building models. Moreover, few existing methods provide semantic information about the objects during the reconstruction process.

    In light of this, this study proposes a novel learning-based point cloud structurization method, especially for building components such as plates, beams, columns, and walls. The proposed network learns from point clouds generated from existing building information modeling (BIM) components and then predicts the geometric model of a newly given point cloud. It is worth noting that this BIM-to-point-cloud approach overcomes the difficulty of acquiring 3D training data that confronts most deep learning applications.
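The BIM-to-point-cloud generation described above can be sketched as area-weighted uniform sampling of component surface meshes. This is an illustrative assumption: the abstract does not detail the exact generation procedure, and the function name and toy mesh below are hypothetical.

```python
import numpy as np

def sample_point_cloud(vertices, faces, n_points, seed=0):
    """Area-weighted uniform sampling of a triangle mesh surface.

    vertices: (V, 3) float array; faces: (F, 3) int array of vertex indices.
    Returns an (n_points, 3) point cloud lying on the mesh.
    """
    rng = np.random.default_rng(seed)
    tri = vertices[faces]                                   # (F, 3, 3)
    # Triangle areas from the cross product of two edge vectors.
    cross = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    idx = rng.choice(len(faces), size=n_points, p=areas / areas.sum())
    # Uniform barycentric coordinates via the square-root trick.
    u, v = rng.random(n_points), rng.random(n_points)
    su = np.sqrt(u)
    b0, b1, b2 = 1.0 - su, su * (1.0 - v), su * v
    t = tri[idx]
    return b0[:, None] * t[:, 0] + b1[:, None] * t[:, 1] + b2[:, None] * t[:, 2]

# Toy "slab" component: the top face of a unit cube as two triangles.
verts = np.array([[0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], float)
faces = np.array([[0, 1, 2], [0, 2, 3]])
cloud = sample_point_cloud(verts, faces, 1024)
```

In practice each sampled cloud would carry the component's class label (slab, beam, column, wall) and its ground-truth corner coordinates taken from the BIM geometry.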

    The proposed network consists of two parts: a point cloud corner detection model and a point cloud corner vectorization model. The quantitative and qualitative indexes derived from large-scale real-data verification show promising results and prove the effectiveness of the proposed method. In the classification task, the average precision (AP) of each category exceeded 50%, with the wall category reaching 77%. The predicted object corner positions were accurate to better than 25 cm, with the beam category accurate to within 10 cm. Moreover, the vector-linking prediction, which determines the quality of the geometric model, reached a precision of 70%, suggesting that the proposed learning-based method can indeed reconstruct the geometric model of a given point cloud automatically. Finally, the resultant models can serve as a foundation for further processing or applications, so the automation level of building model reconstruction can be increased and the professional threshold reduced for related technicians.
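The corner-position error and link-prediction accuracy reported above can be computed roughly as follows. This is a sketch under stated assumptions: the thesis does not publish its exact matching procedure, so nearest-ground-truth matching and an upper-triangle adjacency comparison are assumptions, and all names are illustrative.

```python
import numpy as np

def corner_position_error(pred, gt):
    """Mean Euclidean error (metres) after matching each predicted
    corner to its nearest ground-truth corner."""
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=2)
    return d.min(axis=1).mean()

def link_accuracy(pred_adj, gt_adj):
    """Fraction of correctly predicted corner-to-corner links,
    counted once per pair (upper triangle of the adjacency matrix)."""
    iu = np.triu_indices(len(gt_adj), k=1)
    return (pred_adj[iu] == gt_adj[iu]).mean()

# Four ground-truth corners of a 1 m x 1 m face, linked into a rectangle.
gt = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], float)
pred = gt + 0.05                          # 5 cm offset on every axis
err = corner_position_error(pred, gt)     # 0.05 * sqrt(3) ~ 8.7 cm

gt_adj = np.array([[0, 1, 0, 1],
                   [1, 0, 1, 0],
                   [0, 1, 0, 1],
                   [1, 0, 1, 0]])
pred_adj = gt_adj.copy()
pred_adj[0, 2] = pred_adj[2, 0] = 1       # one spurious diagonal link
acc = link_accuracy(pred_adj, gt_adj)     # 5 of 6 pairs correct
```

Under these definitions, the thesis's reported figures correspond to `err` below 0.25 m per category and `acc` above 0.7.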

    Abstract (Chinese)
    Abstract
    Acknowledgements
    Table of Contents
    List of Figures
    List of Tables
    1 Introduction
      1.1 Background and Objectives
      1.2 Research Methods and Workflow
      1.3 Thesis Organization
    2 Literature Review
      2.1 Building Information Modeling (BIM)
        2.1.1 Current State of BIM Adoption in Domestic and International Construction Industries
        2.1.2 Survey of Research Combining BIM Models with Point Cloud Data
      2.2 3D Shape Representation
        2.2.1 Point Cloud
        2.2.2 Mesh
        2.2.3 Voxel
        2.2.4 Summary
      2.3 Deep Learning Applications in 3D Spatial Information
        2.3.1 Multi-view 3D Feature Representations
        2.3.2 Voxel-based 3D Feature Representations
        2.3.3 Point-cloud-based 3D Feature Representations
      2.4 Building Point Cloud Structurization
        2.4.1 Digital Building Models
        2.4.2 Scene Point Cloud Structurization Steps
        2.4.3 Scene Point Cloud Structurization
      2.5 Summary
    3 Methodology
      3.1 Tools and Software
        3.1.1 BIM Modeling Software
        3.1.2 CloudCompare Point Cloud Processing Software
      3.2 Model Training Data
        3.2.1 Scene Point Cloud Partitioning
        3.2.2 Local Feature Extraction of Point Cloud Corners
      3.3 Scene Object Corner Extraction Model
        3.3.1 Point Cloud Candidate Sampling
        3.3.2 Point Cloud Center Offset Regression
        3.3.3 Point Cloud Object Detection
      3.4 Point Cloud Corner Vectorization and Reconstruction
        3.4.1 Graph Neural Networks
        3.4.2 3D Spatial Data Conversion
        3.4.3 Sequential Graph Construction
        3.4.4 The GRAN Graph Neural Network
      3.5 Evaluation Metrics for Deep Learning Models
        3.5.1 Definitions of Classification Metrics
        3.5.2 Evaluation Metrics for the Scene Object Corner Extraction Model
        3.5.3 Evaluation Metrics for the Point Cloud Corner Vectorization Model
    4 Experimental Design
      4.1 Data Pre-processing
        4.1.1 Scene Data Segmentation and Labeling
        4.1.2 Data Augmentation
        4.1.3 Training and Test Datasets
      4.2 Deep Learning Model Architecture
        4.2.1 Model Loss Functions
        4.2.2 Model Training Parameters
    5 Experimental Results and Analysis
      5.1 Test Results
        5.1.1 Test Results of the Point Cloud Corner Extraction Model
        5.1.2 Prediction Results of the Point Cloud Corner Vectorization Model
      5.2 Impact of Added Noise Points
        5.2.1 Effect of Noise Point Count on Classification Results
        5.2.2 Effect of Noise Point Count on Corner Regression Results
      5.3 Impact of Added Random Point Cloud Errors
        5.3.1 Effect of Random Error Magnitude on Classification Results
        5.3.2 Effect of Random Error Magnitude on Corner Regression Results
      5.4 Impact of Training Data Partition Extent
        5.4.1 Effect of Scene Partition Extent on Corner Extraction Classification Results
        5.4.2 Effect of Scene Partition Extent on Corner Extraction Regression Results
      5.5 Impact of Model Input Point Count
        5.5.1 Effect of Input Point Count on Corner Extraction Classification Results
        5.5.2 Effect of Input Point Count on Corner Extraction Regression Results
    6 Conclusions and Future Work
      6.1 Conclusions
        6.1.1 Point Cloud Corner Feature Extraction Model
        6.1.2 Point Cloud Corner Vectorization Model
        6.1.3 Summary
      6.2 Future Work
    References
    Authorization


    Full text available from 2025/08/24 (campus network, off-campus network, and the National Central Library Taiwan thesis system).