
Author: 游淳閔 (Chun-Min Yu)
Thesis Title: 應用自動空拍機的建築物三維影像建模 (Image-based 3D Model Construction of Buildings using Autonomous Quadcopter)
Advisor: 項天瑞 (Tien-Ruey Hsiang)
Committee Members: 鄧惟中 (Wei-Chung Teng), 邱舉明 (Ge-Ming Chiu), 郭重顯 (Chung-Hsien Kuo)
Degree: Master
Department: Department of Computer Science and Information Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2016
Graduation Academic Year: 104
Language: English
Number of Pages: 49
Chinese Keywords: 空拍機、自動控制、三維建模、影像定位與定址
English Keywords: Quadcopter, Autonomous, 3D Model Reconstruction, Image-based SLAM
The main goal of this thesis is to perform an automated orbital flight around a building (a villa or similar residence) and to use the continuous images captured along the flight path to reconstruct a 3D model of the building. A quadcopter carrying a single camera runs a visual simultaneous localization and mapping (SLAM) system, which produces the information needed for path planning so that the vehicle can circle the building. The resulting point cloud is computed on the fly, its defective regions are identified in real time, and the vehicle returns to fill them in.
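
The orbit-style coverage flight described above can be illustrated with a minimal sketch: the building is approximated by a center point and a fixed stand-off radius, and waypoints are generated on rings at several heights, each with a yaw that keeps the camera facing the building. The function name, parameters, and the numbers in the usage example are illustrative assumptions, not taken from the thesis.

import numpy as np

def orbit_waypoints(center_xy, radius, heights, points_per_ring=16):
    """Generate waypoints that circle a building at several heights.

    center_xy : (x, y) of the building center in the local frame.
    radius    : horizontal stand-off distance from the center, in meters.
    heights   : flight altitudes in meters, one ring per altitude.
    Returns an (N, 4) array of rows (x, y, z, yaw); yaw points toward the
    building so the onboard camera keeps it in view.
    """
    cx, cy = center_xy
    angles = np.linspace(0.0, 2.0 * np.pi, points_per_ring, endpoint=False)
    waypoints = []
    for z in heights:
        for a in angles:
            x = cx + radius * np.cos(a)
            y = cy + radius * np.sin(a)
            yaw = np.arctan2(cy - y, cx - x)  # face the building center
            waypoints.append((x, y, z, yaw))
    return np.array(waypoints)

# Example: a 10 m stand-off orbit at 4 m, 8 m, and 12 m altitude.
wps = orbit_waypoints(center_xy=(0.0, 0.0), radius=10.0, heights=[4.0, 8.0, 12.0])
print(wps.shape)  # (48, 4)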

In practical engineering applications, for example when a high-rise building must be photographed from every angle with an infrared camera, flying with an ordinary remote controller means that whenever the vehicle is behind the building, the control link between the aircraft and the operator is easily interrupted, which can lead to unpredictable hazards. The system developed in this thesis can, in such an emergency, switch to autonomous flight and complete the remaining, unscanned portion of the mission.

This thesis implements an automated, full-coverage building-reconstruction system. It replaces the earlier manual workflow, in which scanning and modeling a building or a large artwork was time-consuming and labor-intensive, and in which regions missed during the shoot had to be handled afterwards, creating an additional burden. The proposed system overcomes these problems and can furthermore determine, during the scan, where the coverage of the target is deficient and re-plan a path to rescan those regions. The implementation uses an IRIS+ quadcopter with a Pixhawk flight controller and a single onboard camera. The vehicle is placed in front of the target, and a single flight command is enough to capture the target from different sides and heights, to determine online whether any region has been scanned incompletely, and to plan an optimized path to revisit and complete it. Finally, the scan obtained from this automated flight is compared with that of a manually piloted flight, using the cloud-to-cloud distance and the fine surface detail of the resulting models to assess the quality of the system, achieving a safe, large-scale engineering application with less time, less labor, and lower cost.
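
The thesis does not reproduce its detection rule here, but the idea of estimating the point distribution per occupancy-map layer to find weakly scanned regions can be sketched as follows: the semi-dense cloud is sliced into horizontal layers, each layer is divided into angular sectors around the building center, and sectors whose point count falls below a threshold are flagged for rescanning. The function name, the sector binning, and the threshold are assumptions made for illustration only.

import numpy as np

def weak_sectors(points, center_xy, layer_height=1.0, n_sectors=36, min_points=50):
    """Flag poorly covered regions of a semi-dense point cloud.

    points : (N, 3) array of (x, y, z) map points from the SLAM system.
    The cloud is cut into horizontal layers of `layer_height` meters and
    each layer into `n_sectors` angular sectors around `center_xy`; a
    sector with fewer than `min_points` points is reported as weak.
    Returns a list of (layer_index, sector_index) pairs to rescan.
    """
    cx, cy = center_xy
    layers = np.floor(points[:, 2] / layer_height).astype(int)
    angles = np.arctan2(points[:, 1] - cy, points[:, 0] - cx)
    sectors = np.floor((angles + np.pi) / (2.0 * np.pi) * n_sectors).astype(int) % n_sectors

    weak = []
    for layer in range(layers.min(), layers.max() + 1):
        counts = np.bincount(sectors[layers == layer], minlength=n_sectors)
        weak.extend((layer, s) for s in np.flatnonzero(counts < min_points))
    return weak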


In this paper, we present a monocular vision-based autonomous exploration system built on an open-source quadcopter platform. The main purpose of the system is automated building reconstruction based on a semi-dense point cloud model. The automated system can substitute for manual operation, which in the past could encounter critical problems: when the quadcopter flies behind a building and the operator cannot take corrective action, the communication signal may become weak. These problems can be overcome by developing a self-aware autonomous system. In our system, we use a vision-based SLAM system and propose a navigation system to explore the un-scanned parts of a building. We develop an incremental motion-planning method and use a point-distribution estimation method to detect, online, the weak parts of the current point cloud model in each occupancy-map layer. According to the detected results, we set up a rescan trajectory to fix the weak parts. In our experiments, we use the cloud-to-cloud distance and the tiled-model resolution to compare the manual and autonomous flight reconstruction results.
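
The cloud-to-cloud distance used in the evaluation can be computed as a nearest-neighbor distance between the two reconstructions. The sketch below uses SciPy's KD-tree and is a generic formulation, not necessarily the exact tool or metric implementation used in the thesis; the synthetic clouds in the example are only for demonstration.

import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_distance(reference, compared):
    """Nearest-neighbor cloud-to-cloud distances.

    For every point of `compared` (M, 3) the distance to its closest
    point in `reference` (N, 3) is computed; the mean and maximum of
    these distances summarize how well the two reconstructions agree.
    """
    tree = cKDTree(reference)
    dists, _ = tree.query(compared, k=1)
    return dists.mean(), dists.max()

# Example with two synthetic clouds differing by small Gaussian noise.
ref = np.random.rand(1000, 3)
noisy = ref + np.random.normal(scale=0.01, size=ref.shape)
mean_d, max_d = cloud_to_cloud_distance(ref, noisy)
print(f"mean = {mean_d:.4f}, max = {max_d:.4f}")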

Table of Contents:
Recommendation Letter from Thesis Advisor
Approval of Oral Examination Committee
Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Tables
List of Figures
1 Introduction
2 Related Work
  2.1 Visual SLAM
  2.2 Visual SLAM using Quadcopter
  2.3 3D Model Evaluations
3 System Design
  3.1 Quadcopter Platform
  3.2 Camera Usage
4 Method
  4.1 Starting Position
  4.2 LSD-SLAM System and Occupancy Map
  4.3 Initial Scan
    4.3.1 Trajectory Generation
    4.3.2 Stop Condition
  4.4 Weak Point Selection
  4.5 Rescan Process
5 Experiment
  5.1 Defect Area Enhance Result
  5.2 Cloud-to-cloud Distance Estimation
  5.3 Model Result Comparison
6 Conclusions & Future Works
Bibliography


Full-text release date: 2021/08/23 (campus network)
Full-text release date: not authorized for public release (off-campus network)
Full-text release date: not authorized for public release (National Central Library: Taiwan NDLTD system)