
Graduate Student: Rayi Yanu Tara
Thesis Title: Spatial Cue Augmentation on Tethered Viewpoint Displays with 3D Map View
Advisor: Wei-Chung Teng (鄧惟中)
Committee Members: Chin-Sheng Chen (陳金聖), Hung-Kuo Chu (朱宏國), Shih-Hsuan Chiu (邱士軒), Chung-Hsien Kuo (郭重顯), Chih-Yuan Yao (姚智原)
Degree: Doctoral
Department: College of Electrical Engineering and Computer Science, Department of Computer Science and Information Engineering
Publication Year: 2017
Graduating Academic Year: 105
Language: English
Pages: 128
Keywords: Operator Interface, Visual Momentum, Tethered View, Spatial Cues, Telerobot
Access Count: 133 views / 9 downloads
    Telerobot navigation requires human operators to have sufficient knowledge of the remote situation, known as remote perception. Insufficient remote perception increases the risk of telerobot failures. Enhancing remote perception often comes down to the delivery of visual feedback in operator interfaces. Most recent work on operator interfaces has adopted viewpoint tethering to improve situational awareness, owing to its exocentricity: a tethered viewpoint provides both global awareness and local guidance while visualizing remote situations. However, because the tethered view adheres to a line-of-sight requirement, it potentially delivers lower visual momentum when visualizing dense remote environments, such as during indoor teleoperation tasks. The problem arises because the tethered view, unlike a bird's-eye view, presents incomplete spatial information due to line-of-sight exclusion. Operators thus struggle with this limited information while developing their spatial mental models, especially due to the visual discontinuity caused by view transitions between the robot's movements.

    This dissertation presents an approach to improve the visual momentum of a tethered viewpoint display by complementing the omitted spatial information. The approach works by augmenting simplified spatial cues on the line-of-sight-excluded areas of a tethered view. The cues illuminate the basic spatial structure and thus help preserve visual continuity across views. The spatial cues further assist operators in building their spatial mental models effectively by lowering their dependency on naturally limited working memory. Tethered viewpoint displays augmented this way are thus hypothesized to possess higher visual momentum.
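
    Chapter 4 of the dissertation details the cue definition and back-face removal steps. As a minimal sketch only (not the author's implementation, which is built on OpenGL and CUDA), the core geometric test behind back-face removal can be illustrated as follows: a triangle whose outward normal points away from the tethered camera is a candidate to be replaced by a simplified spatial cue rather than drawn as an opaque, occluding surface.

```python
import numpy as np

def backfacing_mask(vertices, faces, eye):
    """Classify each triangle as facing toward or away from a tethered camera.

    vertices: (V, 3) array of mesh vertex positions.
    faces:    (F, 3) array of vertex indices (counter-clockwise winding
              assumed, so cross products give outward-pointing normals).
    eye:      (3,) position of the tethered viewpoint.

    Returns a boolean array of length F; True marks faces pointing away
    from the eye, i.e. candidates for simplified cue rendering.
    """
    tri = vertices[faces]                          # (F, 3, 3)
    normals = np.cross(tri[:, 1] - tri[:, 0],
                       tri[:, 2] - tri[:, 0])      # unnormalized face normals
    view = tri.mean(axis=1) - np.asarray(eye)      # eye -> face centroid
    return np.einsum('ij,ij->i', normals, view) > 0.0

# Toy example: two triangles in the z = 0 plane with opposite windings,
# seen from a camera above at z = +2. One faces the eye, the other away.
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
faces = np.array([[0, 1, 2],    # normal +z, faces the eye
                  [0, 2, 1]])   # normal -z, faces away
print(backfacing_mask(verts, faces, eye=[0.2, 0.2, 2.0]))  # -> [False  True]
```

    Faces flagged True would either occlude the robot or be culled in a conventional render; keeping a lightweight cue for them is what preserves the basic spatial structure described in the abstract.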

    Evaluation of the presented approach was performed using the V-REP simulator to simulate a telerobot environment that streams RGBD images, which are then reconstructed into 3D surface models and visualized in the tethered viewpoint displays. Eighteen volunteer participants performed remote navigation tasks under different view configurations. The results suggest that applying cue augmentation in tethered viewpoint displays yields a higher state of visual momentum. The heightened visual momentum was exhibited by improvements in three common measures of operator performance: lower average mental workload, enhanced spatial perception, and enhanced situational awareness. In particular, reduced effort, frustration, and temporal demand contributed most to the lower average mental workload.
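
    The dissertation reports its results using estimation statistics (Section 3.11 in the table of contents below). As a hypothetical illustration of that reporting style, with invented data rather than the study's actual measurements, a percentile bootstrap confidence interval on paired NASA-TLX workload differences from a within-subjects crossover design could be computed like this:

```python
import numpy as np

def paired_bootstrap_ci(diffs, n_boot=10_000, conf=0.95, seed=0):
    """Percentile bootstrap CI for the mean of paired differences.

    diffs: per-participant score differences (e.g., NASA-TLX overall
           workload without cues minus with cues) from a within-subjects
           crossover design. Illustrative only; the dissertation's actual
           analysis may use a different interval (e.g., BCa).
    """
    rng = np.random.default_rng(seed)
    diffs = np.asarray(diffs, dtype=float)
    # Resample participants with replacement, n_boot times.
    boots = rng.choice(diffs, size=(n_boot, diffs.size)).mean(axis=1)
    lo, hi = np.percentile(boots, [(1 - conf) / 2 * 100,
                                   (1 + conf) / 2 * 100])
    return diffs.mean(), (lo, hi)

# Hypothetical workload differences for 18 participants
# (positive = lower workload with spatial cues).
rng = np.random.default_rng(1)
diffs = rng.normal(8.0, 6.0, size=18)
mean, (lo, hi) = paired_bootstrap_ci(diffs)
print(f"mean difference = {mean:.1f}, 95% CI [{lo:.1f}, {hi:.1f}]")
```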



    Recommendation Letter i
    Approval Letter ii
    Abstract iii
    Acknowledgements v
    Contents vi
    List of Figures x
    List of Tables xiv
    List of Algorithms xv
    1 Introduction 1
    1.1 Background and Motivation 1
    1.2 Contribution 3
    1.3 Organization 4
    2 Related Work 5
    2.1 Operator Interface 5
    2.2 Benchmarking Operator Interface 7
    3 Preliminaries 10
    3.1 Telerobotics 10
    3.2 Viewpoint Tethering 13
    3.3 Visual Momentum 14
    3.4 Spatial Mental Model 16
    3.5 Pioneer 3DX Robot 17
    3.6 V-REP Simulator 19
    3.6.1 Functionalities 21
    3.6.2 Usage in Telerobot Simulation 24
    3.7 Simulated Omnidirectional RGBD Camera 27
    3.8 OpenGL 33
    3.9 NVIDIA CUDA 35
    3.10 Latin-square Crossover Design 38
    3.11 Estimation Reporting 39
    3.12 NASA Task Load Index 41
    3.13 Situational Awareness Rating Technique 44
    3.14 Spatial Situation Model Assessment 46
    4 Tethered View with Spatial Cue Augmentation 48
    4.1 Conceptual Overview 48
    4.2 Architectural Overview 49
    4.3 Model Alignment 50
    4.4 Cue Definition 51
    4.5 Back-face Removal 54
    5 Experimental Platform 61
    5.1 Simulated Telerobot Environment 61
    5.2 Operator Interface 64
    5.2.1 Display and Input Device 64
    5.2.2 Surface Reconstruction from RGBD Images 67
    6 Experiments and Results 74
    6.1 Experiment Design 74
    6.2 Experiment Procedure 75
    6.3 Assessment Metrics 76
    6.4 Results 82
    6.4.1 Improved Spatial Perception and Awareness 82
    6.4.2 Lower Operator Workloads 83
    6.4.3 User Performance 85
    6.4.4 Necessity of Viewpoint Adjustment 86
    6.4.5 Necessity of Video Feedback 86
    6.4.6 Better Spatial Comprehension 88
    6.5 Discussion 90
    6.5.1 Enhancement on Visual Momentum 90
    6.5.2 LOS Ambiguity in Viewpoint Tethering 91
    6.5.3 Outlier Responses 91
    7 Conclusions 93
    References 95
    Appendix 1: Preliminary Response 102
    Appendix 2: NASA-TLX Data 103
    Appendix 3: SART Data 106
    Appendix 4: SSM Data 107
    Appendix 5: Attention Data 108
    Appendix 6: Log File Data 109
    Appendix 7: Final Response 110

