
Graduate Student: 蔡奇霖 (QI-LIN CAI)
Thesis Title: 環繞式掃描下的自動點雲註冊與三維模型重建系統
Automatic Local Point Cloud Registration Algorithm and Point Cloud Reconstruction System
Advisors: 姚智原 (Chih-Yuan Yao), 余能豪 (Neng-Hao Yu)
Committee Members: 姚智原 (Chih-Yuan Yao), 余能豪 (Neng-Hao Yu), 朱宏國 (Hung-Kuo Chu), 胡敏君 (Min-Chun Hu)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Computer Science and Information Engineering
Year of Publication: 2021
Graduation Academic Year: 109
Language: Chinese
Number of Pages: 75
Chinese Keywords: 點雲 (point cloud)
English Keywords: Point cloud
    Meshes are widely used in applications such as games and film animation production. To achieve realism, many meshes are built through 3D scanning and reconstruction. 3D scanning reconstruction uses professional equipment to capture and analyze the appearance of real-world objects, such as shape, color, and surface albedo, and converts this data into a point cloud, a discrete representation of the object's surface. The connectivity between vertices is then inferred from per-vertex normals and other related information, and the point cloud is reconstructed into a 3D mesh. After the 3D mesh is obtained, we still need to establish a projection between the 3D surface and the 2D plane, use the color information captured during scanning to build a 2D texture for the mesh, and optimize that texture; only then is a complete 3D scanning reconstruction finished. However, the depth data captured during scanning do not all lie in a single global coordinate space, so the point clouds derived from them must be merged into one common space through alignment. This step is called point cloud registration.
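
    The following is a minimal sketch of the point-cloud-to-mesh stage described above: normal estimation followed by Screened Poisson surface reconstruction [3]. It assumes the Open3D library, and the input file name and parameter values (search radius, octree depth) are illustrative rather than taken from this thesis.

        # Minimal point-cloud-to-mesh sketch (assumes Open3D; parameters are illustrative).
        import open3d as o3d

        # Load one already-registered point cloud (hypothetical file name).
        pcd = o3d.io.read_point_cloud("registered_scan.ply")

        # Estimate per-point normals, which encode local surface orientation,
        # and orient them consistently so the reconstruction sees a coherent surface.
        pcd.estimate_normals(
            search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
        pcd.orient_normals_consistent_tangent_plane(k=15)

        # Screened Poisson reconstruction [3] turns the oriented points into a triangle mesh.
        mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
            pcd, depth=9)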

    Point cloud registration is a classic problem in 3D scanning reconstruction, and registration algorithms are dominated by the Iterative Closest Point (ICP) algorithm \cite{ICPObjective}. However, ICP requires manual assistance to avoid converging to a local minimum. To address this problem in the surround-scanning setting, this thesis proposes Group ICP, an automatic point cloud registration algorithm that reduces error amplification, together with a metric for evaluating registration results. We also present a 3D mesh reconstruction system based on Group ICP that covers the full pipeline from scanning to a textured mesh, and it achieves better results than other scanning and reconstruction systems on the market in our comparison.
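
    As a rough illustration (notation ours, not the thesis's) of why errors amplify in surround scanning: when consecutive scans are registered pairwise, the pose of scan k in the global frame is the composition of all preceding pairwise estimates, so every pairwise registration error propagates into all later poses.

        \[
        T_{0\leftarrow k} \;=\; T_{0\leftarrow 1}\, T_{1\leftarrow 2} \cdots T_{k-1\leftarrow k},
        \qquad
        T_{i-1\leftarrow i} \;=\; \hat{T}_{i-1\leftarrow i}\, \Delta_i ,
        \]

    where \hat{T}_{i-1\leftarrow i} is the true relative pose and \Delta_i is the residual error of the i-th pairwise registration. For a closed surround loop of n scans, the chained poses should return to the identity, so the accumulated drift appears as a loop-closure residual:

        \[
        E_{\mathrm{loop}} \;=\; T_{0\leftarrow 1}\, T_{1\leftarrow 2} \cdots T_{n-1\leftarrow n}\, T_{n\leftarrow 0} \;\neq\; I .
        \]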


    Meshes are widely used in applications such as game and film animation production.
    To achieve realism, many meshes are created through 3D scanning and reconstruction.
    3D scanning reconstruction requires professional equipment to capture and analyze appearance data of real-world objects or environments, such as shape, color, and surface albedo.
    The scan yields a point cloud, a discrete representation of the object surface; the connectivity between vertices is then computed from per-vertex normals and other related information, and the point cloud is finally reconstructed into a 3D mesh.
    After obtaining the 3D mesh, we also need to establish a 3D-to-2D projection for the mesh, use the color information obtained during scanning to create a 2D texture for it, and then optimize that texture.
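
    As an illustration of the 3D-to-2D projection mentioned above, the sketch below projects mesh vertices into one scan's camera image with a standard pinhole model so that per-vertex colors can be looked up for texturing. The intrinsic matrix K and the world-to-camera pose are assumed to come from the scanner's calibration; the function and variable names are ours, not an interface defined in this thesis.

        # Hedged sketch: pinhole projection of mesh vertices into a scan image for color lookup.
        import numpy as np

        def project_vertices(vertices, K, T_world_to_cam):
            """Project (N, 3) world-space vertices to (N, 2) pixel coordinates."""
            # Move the vertices into the camera frame with a 4x4 rigid transform.
            v_h = np.hstack([vertices, np.ones((len(vertices), 1))])
            cam = (T_world_to_cam @ v_h.T).T[:, :3]
            # Pinhole projection with the 3x3 intrinsic matrix K, then perspective divide.
            uv = (K @ cam.T).T
            return uv[:, :2] / uv[:, 2:3]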

    Point cloud registration is a fundamental problem in 3D model reconstruction, and it is mostly based on the Iterative Closest Point (ICP) algorithm \cite{ICPObjective}, which requires manual assistance to avoid falling into a local minimum.
    We propose Group ICP, an automatic point cloud registration algorithm that reduces error accumulation in the surround-scanning case, together with a metric for evaluating registration quality.
    Our system uses this automatic registration to reconstruct a mesh from the point clouds and to build its texture from the different views captured during scanning.
    We also compare our system with commercial scanners and applications, and our results outperform theirs.
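
    For reference, the sketch below is a minimal point-to-point ICP loop in the spirit of the classic algorithm \cite{ICPObjective}, not the Group ICP method proposed here; it uses only NumPy and SciPy. Without a good initial alignment, this iteration can converge to a local minimum, which is exactly the failure mode that motivates manual assistance or our automatic approach.

        # Minimal point-to-point ICP sketch (classic algorithm, not the thesis's Group ICP).
        import numpy as np
        from scipy.spatial import cKDTree

        def icp_point_to_point(source, target, iters=50):
            """Align source (N, 3) to target (M, 3); returns rotation R and translation t."""
            tree = cKDTree(target)
            src = source.copy()
            R, t = np.eye(3), np.zeros(3)
            for _ in range(iters):
                # 1. Correspondences: nearest target point for every source point.
                _, idx = tree.query(src)
                tgt = target[idx]
                # 2. Closed-form best rigid motion for the matched pairs (SVD / Kabsch).
                sc, tc = src.mean(axis=0), tgt.mean(axis=0)
                H = (src - sc).T @ (tgt - tc)
                U, _, Vt = np.linalg.svd(H)
                Ri = Vt.T @ U.T
                if np.linalg.det(Ri) < 0:      # guard against reflections
                    Vt[-1] *= -1
                    Ri = Vt.T @ U.T
                ti = tc - Ri @ sc
                # 3. Apply the increment and accumulate the total transform.
                src = src @ Ri.T + ti
                R, t = Ri @ R, Ri @ t + ti
            return R, t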

    Table of Contents
    Abstract (Chinese)
    Abstract
    Acknowledgements
    Table of Contents
    List of Figures
    List of Tables
    1 Introduction
    1.1 Point Cloud Registration
    1.2 Point Cloud Reconstruction
    1.3 Parameterization
    1.4 Mesh Simplification
    2 Related Work
    2.1 Point Cloud Registration
    2.1.1 Global Point Cloud Registration
    2.1.2 Local Point Cloud Registration
    2.1.3 Neural-Network-Based Point Cloud Registration
    2.2 Mesh Reconstruction
    2.2.1 Triangulation Methods
    2.2.2 Surface Fitting Methods
    2.3 Parameterization
    3 Method Overview
    3.1 Scanning
    3.2 Point Cloud Preprocessing
    3.2.1 Mesh Reconstruction
    4 Method
    4.1 Scanning
    4.2 Group ICP
    4.3 Mesh Reconstruction
    4.4 Mesh Parameterization
    4.5 Texture Processing
    4.5.1 Texture Processing for General Features
    4.5.2 Texture Processing for Sharp Features
    4.6 Mesh Simplification
    5 Experimental Results
    5.1 Evaluation Method
    5.2 Implementation Details
    6 Conclusion and Future Work
    7 Appendix
    References
    Authorization Letter

    [1] S. Rusinkiewicz, “A symmetric objective function for ICP,” ACM Transactions on Graphics (Proc. SIGGRAPH), vol. 38, July 2019.
    [2] P. J. Besl and N. D. McKay, “A method for registration of 3-D shapes,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239–256, 1992.
    [3] M. M. Kazhdan and H. Hoppe, “Screened Poisson surface reconstruction,” ACM Trans. Graph., vol. 32, no. 3, pp. 29:1–29:13, 2013.
    [4] N. Mellado, D. Aiger, and N. J. Mitra, “Super 4PCS: Fast global point-cloud registration via smart indexing,” Computer Graphics Forum, vol. 33, no. 5, pp. 205–215, 2014.
    [5] A. V. Segal, D. Haehnel, and S. Thrun, “Generalized-ICP.”
    [6] M. Krogius, A. Haggenmiller, and E. Olson, “Flexible layouts for fiducial tags,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019.
    [7] J. Wang and E. Olson, “AprilTag 2: Efficient and robust fiducial detection,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2016.
    [8] E. Olson, “AprilTag: A robust and flexible visual fiducial system,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 3400–3407, IEEE, May 2011.
    [9] N. Mellado et al., “OpenGR: A C++ library for 3D global registration.” https://storm-irit.github.io/OpenGR/, 2017.
    [10] D. Aiger, N. J. Mitra, and D. Cohen-Or, “4-points congruent sets for robust surface registration,” ACM Transactions on Graphics, vol. 27, no. 3, pp. 85:1–85:10, 2008.
    [11] M. A. Fischler and R. C. Bolles, “Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography,” Commun. ACM, vol. 24, pp. 381–395, June 1981.
    [12] K. Pulli, “Multiview registration for large data sets,” pp. 160–168, 1999.
    [13] S. Rusinkiewicz and M. Levoy, “Efficient variants of the ICP algorithm,” in Proceedings of the Third International Conference on 3-D Digital Imaging and Modeling, pp. 145–152, 2001.
    [14] N. Gelfand, L. Ikemoto, S. Rusinkiewicz, and M. Levoy, “Geometrically stable sampling for the ICP algorithm,” in Fourth International Conference on 3D Digital Imaging and Modeling (3DIM), Oct. 2003.
    [15] M. Levoy, K. Pulli, B. Curless, S. Rusinkiewicz, D. Koller, L. Pereira, M. Ginzton, S. Anderson, J. Davis, J. Ginsberg, J. Shade, and D. Fulk, “The Digital Michelangelo Project: 3D scanning of large statues,” in Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ’00, (USA), pp. 131–144, ACM Press/Addison-Wesley Publishing Co., 2000.
    [16] T. Pajdla and L. Van Gool, “Matching of 3-D curves using semi-differential invariants,” in Proceedings of the Fifth International Conference on Computer Vision, ICCV ’95, (USA), p. 390, IEEE Computer Society, 1995.
    [17] Z. Zhang, “Iterative point matching for registration of free-form curves and surfaces,” Int. J. Comput. Vision, vol. 13, pp. 119–152, Oct. 1994.
    [18] T. Jost and H. Hügli, “Fast ICP algorithms for shape registration,” in Pattern Recognition (L. Van Gool, ed.), (Berlin, Heidelberg), pp. 91–99, Springer Berlin Heidelberg, 2002.
    [19] C. Dorai, G. Wang, A. K. Jain, and C. Mercer, “Registration and integration of multiple object views for 3D model construction,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 1, pp. 83–89, 1998.
    [20] S. Weik, “Registration of 3-D partial surface models using luminance and depth information,” in Proceedings of the International Conference on Recent Advances in 3-D Digital Imaging and Modeling, pp. 93–100, 1997.
    [21] Y. Chen and G. Medioni, “Object modelling by registration of multiple range images,” Image Vision Comput., vol. 10, pp. 145–155, Apr. 1992.
    [22] A. Censi, “An ICP variant using a point-to-line metric,” in 2008 IEEE International Conference on Robotics and Automation, pp. 19–25, 2008.
    [23] Y. Wang and J. M. Solomon, “Deep closest point: Learning representations for point cloud registration,” in The IEEE International Conference on Computer Vision (ICCV), October 2019.
    [24] G. Elbaz, T. Avraham, and A. Fischer, “3D point cloud registration for localization using a deep neural network auto-encoder,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2472–2481, 2017.
    [25] Z. J. Yew and G. H. Lee, “RPM-Net: Robust point matching using learned features,” in Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
    [26] C. R. Qi, H. Su, K. Mo, and L. J. Guibas, “PointNet: Deep learning on point sets for 3D classification and segmentation,” arXiv preprint arXiv:1612.00593, 2016.
    [27] C. R. Qi, L. Yi, H. Su, and L. J. Guibas, “PointNet++: Deep hierarchical feature learning on point sets in a metric space,” arXiv preprint arXiv:1706.02413, 2017.
    [28] Y. Wang, Y. Sun, Z. Liu, S. E. Sarma, M. M. Bronstein, and J. M. Solomon, “Dynamic graph CNN for learning on point clouds,” ACM Transactions on Graphics (TOG), 2019.
    [29] B. Delaunay, “Sur la sphère vide. A la mémoire de Georges Voronoï,” Bulletin de l’Académie des Sciences de l’URSS, Classe des sciences mathématiques et naturelles, pp. 793–800, 1934.
    [30] W. E. Lorensen and H. E. Cline, “Marching cubes: A high resolution 3D surface construction algorithm,” in Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ’87, (New York, NY, USA), pp. 163–169, Association for Computing Machinery, 1987.
    [31] W. T. Tutte, “How to draw a graph,” Proceedings of the London Mathematical Society, vol. s3-13, no. 1, pp. 743–767, 1963.
    [32] A. Telea, “An image inpainting technique based on the fast marching method,” Journal of Graphics Tools, vol. 9, no. 1, pp. 23–34, 2004.
    [33] M. Li, D. M. Kaufman, V. G. Kim, J. Solomon, and A. Sheffer, “OptCuts: Joint optimization of surface cuts and parameterization,” ACM Transactions on Graphics, vol. 37, no. 6, 2018.
    [34] P. Heckbert and M. Garland, “Survey of polygonal surface simplification algorithms,” p. 4, Nov. 1997.
    [35] A. Telea, “An image inpainting technique based on the fast marching method,” Journal of Graphics Tools, vol. 9, Jan. 2004.
    [36] S. Liu, Z. Ferguson, A. Jacobson, and Y. Gingold, “Seamless: Seam erasure and seam-aware decoupling of shape from mesh resolution,” ACM Transactions on Graphics (TOG), vol. 36, pp. 216:1–216:15, Nov. 2017.
    [37] M. Garland and P. S. Heckbert, “Simplifying surfaces with color and texture using quadric error metrics,” in Proceedings of the Conference on Visualization ’98, VIS ’98, (Washington, DC, USA), pp. 263–269, IEEE Computer Society Press, 1998.
    [38] M. Tatarchenko, S. R. Richter, R. Ranftl, Z. Li, V. Koltun, and T. Brox, “What do single-view 3D reconstruction networks learn?,” 2019.
    [39] S. Zhao, J. Cui, Y. Sheng, Y. Dong, X. Liang, E. I. Chang, and Y. Xu, “Large scale image completion via co-modulated generative adversarial networks,” in International Conference on Learning Representations (ICLR), 2021.
    [40] J. Yu, Z. Lin, J. Yang, X. Shen, X. Lu, and T. S. Huang, “Generative image inpainting with contextual attention,” arXiv preprint arXiv:1801.07892, 2018.
    [41] J. Yu, Z. Lin, J. Yang, X. Shen, X. Lu, and T. S. Huang, “Free-form image inpainting with gated convolution,” arXiv preprint arXiv:1806.03589, 2018.
    [42] F. Poiesi and D. Boscaini, “Distinctive 3D local deep descriptors,” in IEEE Proc. of Int’l Conference on Pattern Recognition, (Milan, IT), Jan. 2021.
    [43] S. Ao, Q. Hu, B. Yang, A. Markham, and Y. Guo, “SpinNet: Learning a general surface descriptor for 3D point cloud registration,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021.

    Full-Text Release Date: 2031/08/10 (campus network)
    Full text not authorized for public release (off-campus network)
    Full text not authorized for public release (National Central Library: Taiwan NDLTD system)