
Graduate Student: Hadziq Fabroyir (哈明飛)
Thesis Title: Applying Traveler Models to Spatial Navigation User Interfaces in Spherical-Panoramic Virtual Reality
Advisor: Wei-Chung Teng (鄧惟中)
Committee Members: Gwo-Dong Chen (陳國棟), Yi-Ping Hung (洪一平), Bing-Yu Chen (陳炳宇), Wen-Kai Tai (戴文凱), Yu-Chi Lai (賴祐吉), Chih-Yuan Yao (姚智原), Wei-Chung Teng (鄧惟中)
Degree: Doctor
Department: College of Electrical Engineering and Computer Science - Department of Computer Science and Information Engineering
Publication Year: 2018
Graduation Academic Year: 106
Language: English
Number of Pages: 128
Keywords: VR, Virtual Reality, User Interface, UI, Spatial Behavior
    People face navigation problems in the real world, and the same people
    will face similar problems in virtual reality (VR). As VR gains popularity
    and approaches mass-market adoption, it is important to research ways to
    present spatial navigation user interfaces (UIs) in VR that benefit all
    types of users: not only experts but also novice users across genders.
    The research can begin from a real-world point of view because,
    fundamentally, the way users navigate the real world can be carried over
    into VR navigation.

    In this research, traveler models were proposed as an interaction paradigm
    to enhance the user experience in VR, especially in spherical-panoramic
    touring systems. The models employed the metaphor of travelers on a street,
    navigating their surroundings while holding a paper map in their hands.
    Based on this metaphor, the models emphasized three important characteristics:
    (1) two separate displays (i.e., allocentric and egocentric views),
    (2) immersion in the egocentric view, and (3) interaction techniques based
    on user motions in the real world. Consequently, the models were used
    to generate three different kinds of prototypes or proofs of concept. To
    accommodate separate allocentric and egocentric views, prototype 1 utilized
    dual projector displays and a skeletal tracking sensor, prototype 2
    employed a curved display and a multitouch tablet, and prototype 3 used a
    head-mounted display and one of two handheld controllers: a multitouch
    tablet or a gamepad.
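As an illustration of the first characteristic above (coupled allocentric and egocentric views), the sketch below shows one minimal way a shared traveler state could drive both a map view and a panorama view. This is a hypothetical sketch, not code from the dissertation; the class and field names are invented, and the meters-to-degrees conversion is a flat-Earth approximation.

```python
import math
from dataclasses import dataclass

@dataclass
class TravelerState:
    """Single pose driving both views: the allocentric map marker and the
    egocentric panorama camera read from the same state. (Hypothetical sketch.)"""
    lat: float      # map position, degrees
    lon: float
    heading: float  # facing direction, degrees clockwise from north

    def rotate(self, delta_deg: float) -> None:
        # Egocentric turn: the panorama spins; the map marker's arrow follows.
        self.heading = (self.heading + delta_deg) % 360.0

    def step(self, meters: float) -> None:
        # Egocentric move along the current heading; the map position updates.
        rad = math.radians(self.heading)
        self.lat += meters * math.cos(rad) / 111_320.0  # ~111,320 m per degree
        self.lon += meters * math.sin(rad) / (111_320.0 * math.cos(math.radians(self.lat)))

state = TravelerState(lat=25.013, lon=121.541, heading=90.0)
state.rotate(-90.0)    # turn left: now facing north
state.step(111_320.0)  # walk north; both views move together
```

Because both views render from the same state, an egocentric turn or step is immediately reflected on the allocentric map, which is the coupling the traveler metaphor calls for.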

    Through a series of experiments, the usability of the UIs in all these
    prototypes was then evaluated. The results showed that the proposed
    prototypes provided better spatial cognition and user experience than
    their legacy-system counterparts. User performance and preferences
    were further investigated in prototypes 2 and 3. The investigation of prototype
    2 focused on the comparison of pointing and gestural UIs (e.g.,
    mouse and multitouch device) for spatial navigation in desktop VR systems.
    Moreover, the investigation of prototype 3 concentrated on comparing
    the finger gestures on multitouch and tangible UIs (e.g., multitouch device
    and gamepad thumbsticks) for spatial navigation in head-mounted display
    (HMD) VR systems. In summary, users both preferred gestural UIs for
    spatial navigation and performed better with them, especially when the
    UIs were tangible.
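As a concrete example of the kind of input mapping these comparisons concern, the sketch below turns gamepad thumbstick axes into egocentric motion commands. It is a generic illustration under assumed conventions (axes in [-1, 1], stick-up reported as negative y); the function name, dead zone, and speed constants are invented, not taken from the dissertation.

```python
def thumbstick_to_motion(lx: float, ly: float, rx: float,
                         dead_zone: float = 0.15,
                         move_speed: float = 2.0,
                         turn_speed: float = 90.0):
    """Map thumbstick axes to (forward m/s, strafe m/s, yaw deg/s).
    Left stick translates, right stick x-axis yaws; a dead zone filters drift."""
    def filt(v: float) -> float:
        return 0.0 if abs(v) < dead_zone else v
    forward = -filt(ly) * move_speed   # stick pushed up reads as negative y
    strafe = filt(lx) * move_speed
    yaw = filt(rx) * turn_speed
    return forward, strafe, yaw
```

Pushing the left stick fully up with the right stick half-tilted, `thumbstick_to_motion(0.0, -1.0, 0.5)`, yields full forward speed and a half-rate turn, while sub-dead-zone inputs are ignored entirely.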

    In addition, spatial behaviors were also observed and analyzed, especially
    for prototype 3. Results showed that the users preferred to apply
    egocentric techniques to orient and move within VR. The results also
    demonstrated that users performed tasks faster and made fewer errors
    when using gamepad thumbsticks, which embodied egocentric navigation.
    Results from workload measurements with the NASA-TLX and
    a brain-computer interface showed the gestures on the tangible UI (e.g.,
    gamepad thumbsticks) to be superior to the gestures on the multitouch device.
    The relationships among spatial behaviors, gender, video gaming experience,
    and user interfaces in VR navigation were also examined. Female users
    tended to navigate VR allocentrically, whereas male users tended to
    navigate it egocentrically, especially when using a tangible UI such as
    gamepad thumbsticks.
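For reference, the overall NASA-TLX score mentioned above is conventionally computed as a weighted average of six subscale ratings, with weights obtained from 15 pairwise comparisons. The sketch below implements that standard formula; the example ratings and weights are made up for illustration, not data from the experiments.

```python
def nasa_tlx_weighted(ratings: dict, weights: dict) -> float:
    """Overall workload: sum of rating x weight over the six subscales,
    divided by the 15 pairwise-comparison tallies."""
    assert set(ratings) == set(weights) and len(ratings) == 6
    assert sum(weights.values()) == 15
    return sum(ratings[s] * weights[s] for s in ratings) / 15.0

# Illustrative values only (ratings 0-100; each weight is the number of
# pairwise comparisons that subscale won, so the six weights sum to 15).
ratings = {"mental": 70, "physical": 20, "temporal": 55,
           "performance": 40, "effort": 60, "frustration": 35}
weights = {"mental": 5, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 3, "frustration": 1}
score = nasa_tlx_weighted(ratings, weights)
```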



    Recommendation Letter
    Approval Letter
    Abstract
    Acknowledgements
    Contents
    List of Figures
    List of Tables
    List of Algorithms
    1 Introduction
      1.1 Motivation
      1.2 Research Questions
      1.3 Approach
      1.4 Contributions
      1.5 Dissertation Organization
    2 Literature Review
      2.1 Overview of Virtual Reality
      2.2 Navigation in Virtual Reality
        2.2.1 Navigation and Spatial Representation
        2.2.2 Spatial Cognition in Virtual Reality
        2.2.3 The Problem of Mental Rotation
        2.2.4 The Importance of Maps
      2.3 User Interfaces in Virtual Reality
        2.3.1 Natural User Interfaces
        2.3.2 Interface Fidelity
        2.3.3 The Factors of Display
        2.3.4 Touring Applications
    3 Design and Implementation
      3.1 Traveler Models
        3.1.1 Addressing Mental Rotation Issues
        3.1.2 Addressing Immersion
        3.1.3 Addressing Interaction
      3.2 Prototype 1
        3.2.1 Metaphor
        3.2.2 Implementation
      3.3 Prototype 2
        3.3.1 Metaphor
        3.3.2 Implementation
      3.4 Prototype 3
        3.4.1 Metaphor
        3.4.2 Implementation
    4 Evaluation of User Interface
      4.1 Participants
      4.2 Task and Apparatus
      4.3 Measurements
      4.4 Experiment Design and Procedure
      4.5 Experimental Results
        4.5.1 Preliminary: Pointing vs. Skeletal Gesture UIs
        4.5.2 Evaluation 1: Pointing vs. Hand Gesture UIs
        4.5.3 Evaluation 2: Tangible vs. Multitouch UIs
      4.6 Discussion
        4.6.1 Sacrificing Performance for Natural Interaction
        4.6.2 Larger Display for Better Cognition and Performance
        4.6.3 Gender Issues on Spatial Navigation User Interfaces
    5 Evaluation of Spatial Behavior
      5.1 Participants
      5.2 Task
      5.3 Measurements
      5.4 Experiment Design and Procedure
      5.5 Experimental Results
        5.5.1 Motion Preferences
        5.5.2 Correlation on Navigation Behaviors
        5.5.3 Task Performance
      5.6 Discussion
        5.6.1 Performance Across Behavior and Gender
        5.6.2 Effects of Viewpoint Design
        5.6.3 Factors Influencing Navigation Behaviors
        5.6.4 Switching Between Spatial Behaviors
    6 Conclusions
      6.1 Future Work
    References
    Appendix 1: Question Sheet for Evaluation 1
    Appendix 2: Pre-experiment Questionnaire for Evaluation 2
    Appendix 3: Post-experiment Questionnaire for Evaluation 2
    Letter of Authority

