
Graduate Student: Chung-Yi Liang (梁中彝)
Thesis Title: Innovative Drone Selfie System and Implementation (創新無人機自拍系統和技術實現)
Advisor: Chyi-Yeu Lin (林其禹)
Oral Examination Committee: Chyi-Yeu Lin (林其禹), Chung-Hsien Kuo (郭重顯), Shih-Hsuan Chiu (邱士軒), Chin-Sheng Chen (陳金聖), Kai-Tai Song (宋開泰)
Degree: Master
Department: College of Engineering, Department of Mechanical Engineering
Year of Publication: 2017
Graduation Academic Year: 105
Language: English
Number of Pages: 43
Chinese Keywords: 無人機自拍 (drone selfie), 影像伺服 (visual servoing), 照片範本 (photo template), 全自主控制 (fully autonomous operation)
English Keywords: drone selfie, visual servoing, photo template, autonomous operation
    This thesis implements an innovative, fully autonomous selfie drone system. The selfie function begins with selecting a selfie-photo template in the program; the drone then flies to the specific position and angle required to take a photo with the same effect as the template. The effect parameters include the subject's position and size in the photo, the subject's pan angle relative to the camera, and the camera's tilt angle. This research developed a set of image processing techniques to detect these effect parameters in the live images streamed back from the drone, together with a visual servoing control system that guides the drone to a position from which it can take a photo with the same effect. The feasibility of this innovative drone system has been verified through repeated experiments.


    In this thesis, an innovative, first-of-its-kind system that enables a drone to carry out an assigned selfie mission in a fully autonomous manner was implemented. The selfie mission starts with the user selecting selfie-photo templates in the app; the drone then flies to the required positions to shoot selfie photos that match the effects defined in the selected templates. The effect parameters comprise the position and size of the subject's body in the photo, together with its pan and tilt angles relative to the camera. This research developed image processing techniques to detect the values of these parameters in the current image taken by the drone, and a visual servoing-based control system to guide the drone to the positions from which it can shoot the matching photos. The innovative drone selfie system has been proven effective in a number of experiments.
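    The abstract above describes the core loop of the system: extract the subject's effect parameters from the drone's live image and drive the drone until those parameters match the selected template. The sketch below is a minimal illustration of that idea under stated assumptions, not the thesis's implementation: it assumes an OpenCV Haar cascade for face detection (the thesis additionally estimates pan/tilt angles and segments the body with GrabCut), a hypothetical TEMPLATE of desired normalized values, and a simple proportional mapping from parameter errors to velocity commands in the spirit of visual servoing.

```python
# Minimal sketch (not the thesis code): detect the subject with an OpenCV Haar
# cascade, express its position/size as normalized "effect parameters", and map
# the error against a chosen template to proportional velocity commands.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# Hypothetical template: desired normalized face-center position and face height.
TEMPLATE = {"cx": 0.5, "cy": 0.4, "size": 0.2}
GAIN = 0.8  # illustrative proportional gain

def effect_parameters(frame):
    """Return the largest face's normalized center position and height, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    H, W = frame.shape[:2]
    return {"cx": (x + w / 2) / W, "cy": (y + h / 2) / H, "size": h / H}

def control_step(params):
    """Proportional visual-servoing step: drive each parameter error toward zero."""
    vy = GAIN * (TEMPLATE["cx"] - params["cx"])      # sideways: center the subject
    vz = GAIN * (params["cy"] - TEMPLATE["cy"])      # up/down: match vertical placement
    vx = GAIN * (TEMPLATE["size"] - params["size"])  # forward/back: match subject size
    return vx, vy, vz  # to be passed to whatever velocity interface the drone exposes
```

    A full system along the lines of the thesis would also servo on the subject's pan angle (for example by orbiting the drone around the subject, which is where the spherical coordinate system of Section 4.4.1 comes in) and on the camera's tilt, but the idea of driving parameter errors toward the template values stays the same.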

    Abstract
    摘要 (Chinese Abstract)
    Contents
    List of Figures
    List of Tables
    Chapter 1 Introduction
      1.1 Related works
      1.2 Overview of the thesis
    Chapter 2 Background
      2.1 Autonomous Photographer Robot: Luke
      2.2 Automatically Available Photographer Robot
      2.3 Autonomous Quadcopter Videographer
    Chapter 3 System Architecture
    Chapter 4 Proposed Method
      4.1 Face Detection
        4.1.1 Haar Cascade Classifier
      4.2 Body Proportion
      4.3 Body Part Segmentation
        4.3.1 GrabCut algorithm
      4.4 Drone Flying Control
        4.4.1 Spherical coordinate system
    Chapter 5 Experiments and Results
      5.1 Single Person Template
      5.2 Multiple People Template
      5.3 Multi-templates with Single Person
    Chapter 6 Conclusions and Future Works
      6.1 Conclusions
      6.2 Future works
        6.2.1 Face recognition
        6.2.2 Human skeleton detection
    Reference

