
Graduate Student: Che-Hao Hsu (許哲豪)
Thesis Title: A study of low-cost portable multi-viewpoint multimedia system (低成本可攜式多視點多媒體系統之研究)
Advisor: Kai-lung Hua (花凱龍)
Committee Members: Wen-Kai Tai (戴文凱), Chuan-Kai Yang (楊傳凱), Sheng-Luen Chung (鍾聖倫), Wen-Huang Cheng (鄭文皇), Po-Chyi Su (蘇柏齊), Jia-Ching Wang (王家慶)
Degree: Doctor
Department: Department of Computer Science and Information Engineering, College of Electrical Engineering and Computer Science
Publication Year: 2016
Graduation Academic Year: 104
Language: English
Pages: 109
Keywords: multi-viewpoint, 360-degree displays, mid-air interaction, camera array arrangement
    With the rapid development of electronic technology, researchers have developed many portable methods and systems for image acquisition, display, and interaction, progressing from laptops and tablets to today's smartphones. Although these technologies already perform remarkably well, the human pursuit of ever more immersive experiences has never ceased. This work presents a complete low-cost portable multi-viewpoint multimedia system comprising three sub-systems: multi-viewpoint image acquisition, display, and interaction. We are confident that this work will make an impact on the related research community and change the way people record and relive their lives.
    For the acquisition sub-system, this work proposes a handheld multi-viewpoint camera array and a corresponding user interface. The designed handheld array can be quickly reconfigured into various arrangements, including divergent, parallel, and convergent types. Because the proposed algorithm can efficiently align images from multiple viewpoints, the system is well suited to applications such as panoramic video stitching, autostereoscopic display, bullet-time visual effects, and 3D scene reconstruction.
    For the display sub-system, this work proposes a portable cylindrical 360-degree autostereoscopic display. The system consists of three parts: an optical architecture (so that back-projected images appear correctly on the cylindrical screen), a projection image transformation workflow (to rectify image distortion and generate multi-viewpoint images), and a 360-degree motion detection module (to locate users and provide the corresponding views). With this design, only one commercial micro projector is needed to project onto the cylindrical screen; users need no special glasses and instead view the screen through a specially designed thick barrier sheet attached to it, which markedly enhances depth perception. A user study confirmed that the proposed method provides satisfactory depth cues (binocular parallax, shading distribution, and linear perspective) without noticeable discomfort across different viewing distances and angles.
    For the interaction sub-system, this work proposes a tabletop interactive system that produces holographic-like images through anamorphic illusion. The system identifies the user's viewpoint with an efficient facial feature matching algorithm and synchronously renders anamorphic images on a horizontally placed screen, so that virtual objects appear to stand on its surface. Users perceive a strong stereo effect without wearing any extra device. Furthermore, an infrared camera recognizes hand gestures, allowing users to interact with the virtual objects directly.
    All of the proposed multimedia sub-systems share the advantages of ease of use, low production cost, and high portability and mobility, making them well suited to applications such as virtual museum exhibitions, remote conferencing, and multi-user online games. In the future, we hope to further improve the system so that the three sub-systems are integrated more tightly.


    Along with the rapid development of electronic technology, the methods and tools that researchers and scientists have developed for portable image acquisition, display, and interaction have progressed from laptops and tablets to today's smartphones. Although these technologies have improved greatly, people continue to pursue solutions that offer a more fully immersive, convincing experience. In this work, a complete low-cost portable multi-viewpoint multimedia system is presented, comprising three sub-systems: multi-viewpoint image acquisition, display, and interaction. We are confident that this work will make an impact on the related research community, helping people both to document their lives better and to come closer to fully convincing immersive experiences.
    For the acquisition sub-system, a novel handheld multi-viewpoint camera array and its corresponding user interface are proposed. The handheld camera array is configurable in various arrangements, including divergent, parallel, and convergent types. Since the proposed algorithm efficiently aligns images from multiple viewpoints, the configurable camera array system is suitable for many applications, such as panoramic video stitching, autostereoscopic 3D displays, bullet-time visual effects, and 3D scene reconstruction.
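    To make the alignment step concrete, the sketch below estimates a homography between two viewpoint images from matched local features and warps one image into the other's frame. It is a minimal Python illustration assuming OpenCV (AKAZE features, ratio-test matching, RANSAC fitting); the function names and thresholds are our own choices, not taken from the thesis.

        # Minimal pairwise viewpoint-alignment sketch (assumes OpenCV and NumPy).
        import cv2
        import numpy as np

        def align_to_reference(ref_img, src_img, min_matches=10):
            """Warp src_img into ref_img's frame via a feature-based homography."""
            gray = lambda im: cv2.cvtColor(im, cv2.COLOR_BGR2GRAY) if im.ndim == 3 else im
            detector = cv2.AKAZE_create()  # binary local features
            kp_ref, des_ref = detector.detectAndCompute(gray(ref_img), None)
            kp_src, des_src = detector.detectAndCompute(gray(src_img), None)

            matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
            matches = matcher.knnMatch(des_src, des_ref, k=2)
            good = [m for m, n in matches if m.distance < 0.7 * n.distance]  # ratio test
            if len(good) < min_matches:
                raise ValueError("too few matches to estimate a homography")

            src_pts = np.float32([kp_src[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
            ref_pts = np.float32([kp_ref[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src_pts, ref_pts, cv2.RANSAC, 5.0)  # outlier-robust

            h, w = ref_img.shape[:2]
            return cv2.warpPerspective(src_img, H, (w, h))

    Aligning every camera in the array to a shared reference frame in this pairwise fashion is one plausible starting point for the stitching, interpolation, and reconstruction applications listed above.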
    For the display sub-system, a portable 360-degree cylindrical autostereoscopic 3D display is presented. The proposed system consists of three parts: the optical architecture (for back-projecting images correctly onto the cylindrical screen), the projection image transformation workflow (for rectifying images and generating multi-viewpoint images), and the 360-degree motion detection module (for identifying viewers' locations and providing the corresponding views). Based on the proposed design, only one commercial micro projector is required for the cylindrical screen. The display offers strong depth perception (stereoacuity) through a specially designed thick barrier sheet attached to the screen, so viewers are not required to wear special glasses. A user study verified that the display offers satisfactory depth perception (binocular parallax, shading distribution, and linear perspective) at various viewing distances and angles without noticeable discomfort.
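    As a rough illustration of why a projection image transformation is needed at all, the following sketch pre-warps a flat image for a cylindrical screen under a deliberately simplified model: a pinhole projector sitting on the cylinder axis, so that columns must be resampled uniformly in angle and rows compressed by cos(theta) toward the edges. The thesis's actual workflow is calibrated against its real optical path and also generates interleaved multi-viewpoint images; every parameter and name here is an illustrative assumption.

        # Toy cylindrical pre-warp sketch (assumes OpenCV and NumPy; pinhole
        # projector on the cylinder axis with image plane at focal_px pixels).
        import cv2
        import numpy as np

        def cylindrical_prewarp(ideal_img, focal_px):
            """Pre-distort ideal_img so it appears undistorted on the cylinder wall."""
            h, w = ideal_img.shape[:2]
            cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
            xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
            theta = np.arctan((xs - cx) / focal_px)  # ray angle of each projector column
            theta_max = np.arctan(cx / focal_px)
            # Sample the ideal image uniformly in angle (uniform arc length on the wall).
            map_x = cx + (theta / theta_max) * cx
            # A ray leaving at angle theta hits the wall with its vertical extent
            # stretched by 1/cos(theta); compensate by sampling closer to the center.
            map_y = cy + (ys - cy) * np.cos(theta)
            return cv2.remap(ideal_img, map_x, map_y, cv2.INTER_LINEAR)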
    For the interaction sub-system, an interactive holographic-like tabletop system based on anamorphic illusion is proposed. The system synthesizes anamorphic images in real time on a horizontally placed monitor according to the user's viewpoint, which is identified via the developed efficient facial feature matching algorithm. Users therefore view images with a strong stereo sense without wearing any extra device. In addition, by further exploiting infrared cameras to recognize hand gestures, users are allowed to interact with the virtual objects directly.
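    At its core, anamorphic rendering reduces to one geometric rule: a virtual 3D point standing above the tabletop is drawn where the line from the viewer's eye through that point pierces the screen plane. The sketch below implements only this rule; the coordinates, names, and example values are illustrative assumptions rather than the thesis's implementation, which additionally tracks the viewpoint and renders full models.

        # Anamorphic projection sketch (assumes NumPy; the screen is the z = 0
        # plane, virtual points have z > 0, and the eye sits above them).
        import numpy as np

        def anamorphic_project(points, eye):
            """Map Nx3 virtual points to 2D tabletop coordinates seen from `eye`."""
            points = np.asarray(points, dtype=np.float64)
            eye = np.asarray(eye, dtype=np.float64)
            # Solve eye_z + t * (p_z - eye_z) = 0 for each ray eye -> point.
            t = eye[2] / (eye[2] - points[:, 2])
            hits = eye + t[:, None] * (points - eye)
            return hits[:, :2]  # x, y on the screen surface

        # Example: a point 10 cm above the screen, eye 40 cm up and off to one
        # side; the drawn point lands stretched away from the viewer, which is
        # exactly what creates the standing-object illusion from that eye point.
        print(anamorphic_project([[0.0, 0.0, 0.1]], eye=[0.3, -0.4, 0.4]))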
    The overall proposed multimedia system has the advantages of ease of use, low production cost, and high portability and mobility. It is suitable for various applications, such as virtual museum exhibitions, remote meetings, and multi-user online games. In the future, we would like to further improve the proposed system by integrating the three sub-systems more tightly.

    Table of Contents
    List of Tables
    List of Figures
    1 Introduction
    2 Handheld Multi-viewpoint Camera Array
    2.1 Introduction
    2.2 Related Work
    2.3 Implementation
    2.3.1 System Architecture
    2.3.2 Camera Arrangement
    2.3.3 Camera Calibration
    2.3.4 Views Interpolation
    2.4 Results and Discussions
    2.4.1 Panorama Image (Divergence Arrangement)
    2.4.2 Autostereoscopic 3D Display (Parallel Arrangement)
    2.4.3 Bullet-time Effect (Convergence Arrangement)
    2.5 Summary
    3 Desktop 360-degree Display
    3.1 Introduction
    3.2 Related Work
    3.2.1 Direct Image Type
    3.2.2 Integral Image Type
    3.2.3 Volumetric Type
    3.2.4 Holographic Type
    3.2.5 System Comparison
    3.3 System Implementation
    3.3.1 Optical Architecture
    3.3.2 Projection Image Transformation Workflow
    3.4 Experiment Results and Discussions
    3.4.1 Prototype Cost Analysis
    3.4.2 Experiment I: Display Surface Shape
    3.4.3 Experiment II: Simple (Black) Background Image and Thick Barrier Sheet
    3.4.4 Experiment III: Complex Background Image and Thick Barrier Sheet
    3.4.5 Experiment IV: Displayed and Real Object
    3.4.6 Experiment V: Discomfort
    3.5 Summary
    4 Interactive Tabletop System
    4.1 Introduction
    4.2 Related Work
    4.2.1 Holographic-like Display
    4.2.2 Mid-Air Interaction
    4.2.3 Tabletop System
    4.3 System Implementation
    4.3.1 Hardware Installation and Calibration
    4.3.2 Software Workflow
    4.3.3 Application Scenario
    4.4 Experiment Results and Discussions
    4.4.1 Static Visual Perception
    4.4.2 Motion Parallax Effect
    4.4.3 Mid-Air Intuitive Interaction
    4.5 Summary
    5 Conclusion
    References
    Appendix
    Authorization Letter


    Full text available from 2021/02/03 (campus network).
    Full text not authorized for public release (off-campus network).
    Full text not authorized for public release (National Central Library: Taiwan NDLTD system).