
Graduate Student: Yu-Hsiang Hsu (許育翔)
Thesis Title: Distinct Motion Key Feature Transform Based on Physical Pose Transition (植基於姿勢轉換之動作關鍵特徵變換)
Advisors: Yu-Chi Lai (賴祐吉), Hsing-Kuo Pao (鮑興國)
Committee Members: Chih-Yuan Yao (姚智原), Chao-Hung Lin (林昭宏)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Computer Science and Information Engineering
Year of Publication: 2013
Academic Year of Graduation: 102
Language: Chinese
Number of Pages: 41
Keywords: motion capture, key-pose extraction, motion feature, motion matching
Motion recognition is one of the most widely studied topics in motion-capture processing. As motion-capture devices have become more common in recent years, related applications have grown accordingly. Most current applications still rely on manually defined static poses, or specific joint positions, to represent a whole continuous motion. When the motion varies greatly, too few manually defined static poses, or too low a sampling rate on the capture device, easily lead to false or missed detections. On the other hand, using the complete motion data itself for matching and related applications is computationally expensive, and differing sampling rates and motion lengths prevent frame-by-frame comparison.
To address these problems, we introduce the concept of automatic key-pose extraction: the motion information around automatically detected pose-transition points in the motion data is analyzed and used as the feature representing the whole continuous motion. This removes the need to define key static poses by hand, allows a small amount of automatically extracted motion information to stand in for the full motion, greatly reduces computation time, and sidesteps the mismatch between the lengths of the query and target motions. Applied to similar-motion retrieval, our method greatly reduces both the time needed to define key poses manually and the uncertainty in their quality, paving the way for subsequent applications such as identity recognition in access-control and other security systems.


Motion recognition is one of the most studied topics in motion-capture data processing. As motion-capture devices have become popular in recent years, the number of related applications has also grown. Most applications rely on manually defined static poses or specific joint positions to represent a whole continuous motion sequence. This approach suffers from insufficient key poses in highly dynamic motions and from low sampling rates of motion-capture devices. On the other hand, using the whole motion sequence for matching is time-consuming, and varying motion lengths and differing sampling rates degrade the quality of the result.
    To overcome these difficulties, we introduce key-pose extraction into this problem. We detect motion-transition positions automatically and take the nearby information as the feature representing the whole motion sequence. This saves not only the manual work of defining static poses but also computation time, since only a small amount of local data is needed to represent the whole sequence, and it avoids the problem of varying motion lengths. Our method can also search for motions that are similar only at specific joints, listing all variant motions that agree on a set of user-defined fixed joints. We also tested our method on varying skeleton parameters, showing that the key-feature transform is robust enough to index across different skeletons.
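As a rough illustration of the transition-detection step described above, a common proxy for pose transitions is to mark frames where the aggregate joint speed is locally minimal; the thesis's actual detector and feature encoding are not specified in this abstract, so the function below is a hypothetical sketch, not the authors' method:

```python
import numpy as np

def extract_key_pose_indices(frames, smooth=3):
    """Pick candidate key-pose frames as local minima of aggregate joint speed.

    frames: (T, J, 3) array of J joint positions over T frames.
    Returns sorted frame indices, always including both endpoints.
    """
    # Per-frame aggregate joint speed (sum of per-joint displacement norms).
    vel = np.linalg.norm(np.diff(frames, axis=0), axis=2).sum(axis=1)
    # Moving-average smoothing to suppress capture noise.
    kernel = np.ones(smooth) / smooth
    vel = np.convolve(vel, kernel, mode="same")
    # Local minima of the speed curve mark transition candidates.
    idx = [i for i in range(1, len(vel) - 1)
           if vel[i] <= vel[i - 1] and vel[i] <= vel[i + 1]]
    return sorted({0, len(frames) - 1, *idx})

# Example: one joint that moves, pauses, then moves again; the pause
# should be detected as a transition region.
x = np.concatenate([np.linspace(0, 1, 11), np.full(9, 1.0), np.linspace(1, 2, 11)])
motion = np.repeat(x.reshape(-1, 1, 1), 3, axis=2)  # shape (31, 1, 3)
key_frames = extract_key_pose_indices(motion)
```

Only these few indices (plus the motion data in a small window around each) would then be encoded as the feature, which is how the method avoids comparing full-length sequences of different durations.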

Table of Contents:
    Abstract (Chinese)
    Abstract (English)
    Table of Contents
    List of Figures
    Chapter 1: Introduction
        1.1 Motivation
        1.2 Research Content and Process
        1.3 Main Contributions
        1.4 Thesis Organization
    Chapter 2: Related Work
        2.1 Motion Extraction and Processing
        2.2 Dimensionality Reduction and Data Compression
        2.3 Motion Segmentation and Classification
        2.4 Distance Metrics
    Chapter 3: Motion Key Feature Transform
        3.1 Motion Feature Detection
        3.2 Motion Index Encoding
            3.2.1 Spherical Harmonics
            3.2.2 Motion Geometric Features
            3.2.3 Motion Index Generation
        3.3 Motion Feature Comparison
    Chapter 4: Similar-Motion Extraction
        4.1 Input Motion Data
        4.2 Application Method
    Chapter 5: Experimental Results and Discussion
        5.1 Similar-Feature Linking
    Chapter 6: Conclusions and Future Work
    References


    Full-Text Availability: 2018/11/25 (campus network)
    Full-Text Availability: not authorized for public release (off-campus network)
    Full-Text Availability: not authorized for public release (National Central Library: Taiwan NDLTD system)