
Graduate Student: Dimas Arioputra
Thesis Title: Mobile Augmented Reality as a Chinese Menu Translator
Advisor: Chang Hong Lin (林昌鴻)
Committee Members: Wei Mei Chen (陳維美), Jenq Shiou Leu (呂政修), Jing Shun Lin (林敬舜)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Electronic and Computer Engineering
Year of Publication: 2015
Graduation Academic Year: 103 (2014-2015)
Language: English
Pages: 56
Keywords: Augmented Reality, Mobile

  • Mobile augmented reality has become practical only in recent years, owing to the increasing processing power, more responsive hardware sensors, and improved cameras of modern mobile devices. As a result, mobile app markets have filled with augmented reality applications, ranging from games to navigation assistants. As smartphones grow more popular, mobile Augmented Reality (AR) applications are proliferating, especially those that recognize an object and overlay relevant information on the smartphone's display. In this thesis, we propose a mobile augmented reality system as an application that assists the user.
    As a case study, we propose an AR application for mobile phones that translates a Chinese menu into a 3D model of each dish together with its name in English. The system runs on the Android platform. It uses image recognition to detect and track image markers (in this work, printed Chinese words). The input is then matched against a pre-processed feature database of menu items using Features from Accelerated Segment Test (FAST), as implemented in the Vuforia library. Once a match is found, the system renders a 3D model of the dish with the Unity 3D game engine. It can track and render up to five dish models in real time, and it shows a Graphical User Interface (GUI) when a 3D model is touched.
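The FAST detector named above rests on a simple segment test: a pixel p is a corner when n contiguous pixels on a 16-pixel Bresenham circle around p are all brighter than p + t or all darker than p - t. The sketch below illustrates that test in pure Python; it is a minimal illustration only, not the Vuforia implementation, and the function names, threshold, and synthetic image are our own assumptions.

```python
# Minimal FAST-9 segment-test sketch (illustrative; not the Vuforia code).
# A pixel p is a corner when n contiguous pixels on the 16-pixel Bresenham
# circle of radius 3 around p are all brighter than p + t or all darker
# than p - t.

# The 16 circle offsets (dx, dy), in clockwise order.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2),
          (1, 3), (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1),
          (-2, -2), (-1, -3)]

def is_fast_corner(img, x, y, t=20, n=9):
    """Segment test: n contiguous circle pixels all > p+t or all < p-t."""
    p = img[y][x]
    ring = [img[y + dy][x + dx] for dx, dy in CIRCLE]
    for sign in (1, -1):          # sign=+1 tests brighter, -1 tests darker
        run = 0
        # Walk the circle twice so runs that wrap around are counted.
        for v in ring * 2:
            run = run + 1 if sign * (v - p) > t else 0
            if run >= n:
                return True
    return False

def detect_corners(img, t=20, n=9):
    """Scan all pixels far enough from the border for the radius-3 circle."""
    h, w = len(img), len(img[0])
    return [(x, y) for y in range(3, h - 3) for x in range(3, w - 3)
            if is_fast_corner(img, x, y, t, n)]
```

For example, on a synthetic grayscale image containing a bright square on a dark background, the square's corner pixels pass the test (roughly 11 contiguous darker circle pixels), while pixels in flat regions do not.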

    Table of Contents
    Abstract
    List of Figures
    List of Tables
    Chapter 1 INTRODUCTION
      1.1 Motivation
      1.2 Objective and Contribution
      1.3 Thesis Organization
    Chapter 2 RELATED WORKS
      2.1 Marker-based Augmented Reality
      2.2 Marker-less Augmented Reality
    Chapter 3 PROPOSED METHOD
      3.1 Choosing Image Target
      3.2 Image Detecting and Tracking
      3.3 Pose Estimation
      3.4 3D Object Rendering
      3.5 Improving 3D Rendering
    Chapter 4 EXPERIMENTAL RESULTS
      4.1 Developing Platform
      4.2 Environment Testing
        4.2.1 Scale Differences
        4.2.2 Tilt Testing
        4.2.3 Illumination Testing
      4.3 Resources Usage Testing
        4.3.1 GPU Usage
        4.3.2 CPU Usage
        4.3.3 Memory Usage
      4.4 Graphical User Interface
    Chapter 5 CONCLUSIONS AND FUTURE WORKS
      5.1 Conclusions
      5.2 Future Works
    REFERENCES
    APPENDIX

