
Student: Chao-kuei Wang (王兆葵)
Thesis title: 應用類神經網路之分類及建模於人形機器人三維運動之視覺學習
Neural Network Classification and Modeling for Visual Learning of Humanoid Robot
Advisor: Chih-Lyang Hwang (黃志良)
Committee members: Ching-Long Shih (施慶隆), Min-Hsiung Hung (洪敏雄)
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Year of publication: 2013
Graduating academic year: 101 (ROC calendar)
Language: Chinese
Number of pages: 50
Chinese keywords: visual learning (imitation), motion detection via background registration, posture recognition, humanoid robot, stereo vision system, multilayer neural network
English keywords: Visual Learning (Imitation), Multilayer Neural Network
  • (Abstract, translated from the Chinese) This thesis proposes accomplishing the task of learning (or imitating) the three-dimensional motion of a humanoid robot with a stereo vision system. First, two robots are placed face to face; a designated robot (the "Performer") executes a sequence of 3-D actions, while the other robot (the "Learner") captures images with its onboard stereo vision system and applies the related image processing, e.g., motion detection via background registration, morphology filtering to remove high-frequency noise, and posture recognition, to obtain seven feature points (namely the head, two hand tips, two elbows, and two foot tips). The stereo vision system then provides the 3-D world coordinates of these feature points, which are recorded as a sequence of trajectories. Next, the data of the two foot tips and the head are analyzed to design a suitable feature vector, and a pre-trained multilayer neural network classifies the lower-body action. The hand-tip and elbow coordinates of the representative action (RA) are converted by another pre-trained multilayer neural network into the drive-motor commands of the upper-body arms from the 3-D coordinates of the two hand tips (i.e., the inverse kinematics). Combining the lower-body and upper-body actions accomplishes the learning (or imitation) of the humanoid robot's 3-D motion. Finally, the corresponding experiments confirm the effectiveness and feasibility of the proposed method.
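As a rough illustration of the image-processing pipeline described above (not the thesis's actual implementation), the background-registration motion detector and the morphology filter might be sketched as follows; the frame-count threshold `stable_frames` and the difference thresholds are assumed values:

```python
import numpy as np
from scipy import ndimage


def register_background(bg, frame, prev_frame, stable_count,
                        stable_frames=30, diff_thresh=10):
    """Background registration: a pixel whose intensity stays nearly
    constant over `stable_frames` consecutive frames is copied into
    the background model."""
    stable = np.abs(frame.astype(int) - prev_frame.astype(int)) < diff_thresh
    stable_count = np.where(stable, stable_count + 1, 0)
    bg = np.where(stable_count >= stable_frames, frame, bg)
    return bg, stable_count


def motion_mask(bg, frame, motion_thresh=25):
    """Motion detection: pixels far from the registered background are
    marked foreground; morphological opening then suppresses the
    high-frequency (isolated-pixel) noise."""
    raw = np.abs(frame.astype(int) - bg.astype(int)) > motion_thresh
    return ndimage.binary_opening(raw, structure=np.ones((3, 3), bool))
```

On each new frame one would first call `register_background` to refresh the static background, then `motion_mask` to extract the moving silhouette from which the seven feature points are estimated.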


    This paper proposes visual learning (imitation) for a humanoid robot by means of a stereo vision system. At the beginning, the sequence of 3-D motions of "the Performer", which stands face to face with "the Learner", is captured by the stereo vision system (SVS) installed on "the Learner". The proposed image processing for each sampled image includes motion detection via background registration, morphology filtering of high-frequency noise, and estimation of seven feature points (i.e., the head, two elbows, and the four tips of the two arms and two legs). The data of the head and leg tips are analyzed to design an appropriate feature vector, and the lower-body action is classified by a pre-trained multilayer neural network (MLNN). The inverse kinematics mapping the two arm tips and elbows of the representative action (RA) to the upper-body motion is also approximated by a pre-trained MLNN. Combining the RA of the lower body with the two arm tips of the upper body achieves the visual learning (or imitation) of the 3-D motion of a humanoid robot. Finally, the corresponding experiments confirm the effectiveness and feasibility of the proposed methodology.
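A minimal sketch of how a pre-trained MLNN could be applied at run time, for both the lower-body classification and the learned inverse-kinematics regression described above; the one-hidden-layer structure, tanh activation, and weight shapes are assumptions, not the thesis's reported architecture:

```python
import numpy as np


def mlnn_forward(x, W1, b1, W2, b2):
    """One-hidden-layer network: feature vector -> tanh hidden layer
    -> linear output (class scores or joint-angle estimates)."""
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2


def classify_lower_body(feature_vec, weights):
    """Classification head: the index of the largest output score is
    taken as the lower-body action class."""
    scores = mlnn_forward(feature_vec, *weights)
    return int(np.argmax(scores))
```

The same forward pass, trained with joint angles as targets instead of class labels, would serve as the MLNN approximation of the arm inverse kinematics, mapping 3-D hand-tip and elbow coordinates to motor commands.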

    Abstract (Chinese) I
    Abstract (English) II
    Contents III
    List of Figures V
    List of Tables VIII
    Chapter 1  Introduction 1
      1.1 Motivation 1
      1.2 Experimental Platform and Scene Planning 2
      1.3 Experimental System Architecture 6
      1.4 Thesis Organization 10
    Chapter 2  Motion Detection and Posture Recognition 11
      2.1 Background Registration and Dynamic Threshold 11
      2.2 Feature-Point Recognition 17
      2.3 Elbow-Position Recognition 21
      2.4 Coordinate Transformation 22
    Chapter 3  Neural-Network Classification of Lower-Body Actions and Key-Posture Decision 23
      3.1 Lower-Body Action Classification 23
      3.2 Classification by Neural Networks 26
      3.3 Representative-Action Decision 32
    Chapter 4  MLNN-Based Inverse Kinematics of Upper-Body Actions 34
      4.1 Neural Modeling of the Right Arm 38
      4.2 Neural Modeling of the Left Arm 40
    Chapter 5  Experimental Results 41
    Chapter 6  Conclusions and Future Work 47
    References 48

    [1] Schaal, S. and Atkeson, C. G., “Learning control in robotics,” IEEE Robotics & Automation Magazine, vol. 17, no. 2, pp. 20-29, 2010.
    [2] Kober, J. and Peters, J., “Imitation and reinforcement learning,” IEEE Robotics & Automation Magazine, vol. 17, no. 2, pp. 55-62, 2010.
    [3] Kruger, V., Herzog, D., Baby, S., Ude, A., and Kragic, D., “Learning actions from observations,” IEEE Robotics & Automation Magazine, vol. 17, no. 2, pp. 30-43, 2010.
    [4] Chersi, F., “Learning through Imitation: a biological approach to robotics,” IEEE Transactions on Autonomous Mental Development, vol. 4, no. 3, pp. 204-214, 2012.
    [5] Hüser, M. and Zhang, J., “Visual programming by demonstration of grasping skills in the context of a mobile service robot using 1D-topology based self-organizing-maps,” Robotics and Autonomous Systems, vol. 60, no. 3, pp. 463-472, 2012.
    [6] Calinon, S., D'halluin, F., Sauser, E. L., Caldwell, D. G., and Billard, A. G., “Learning and reproduction of gestures by imitation,” IEEE Robotics & Automation Magazine, vol. 17, no. 2, pp. 44-54, 2010.
    [7] Khansari-Zadeh, S. M. and Billard, A., “Learning stable nonlinear dynamical systems with Gaussian mixture models,” IEEE Transactions on Robotics, vol. 27, no. 5, pp. 943-957, 2011.
    [8] Ude, A., Gams, A., Asfour, T., and Morimoto, J., “Task-specific generalization of discrete and periodic dynamic movement primitives,” IEEE Transactions on Robotics, vol. 26, no. 5, pp. 800-815, 2010.
    [9] Kim, S., Kim, C. H., You, B. J., and Oh, S. R., “Stable whole-body motion generation for humanoid robots to imitate human motions,” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), St. Louis, MO, USA, pp. 2518-2524, 2009.
    [10] Chen, G., Xie, M., Xia, Z., Sun, L., Ji, J., Du, Z., and Wang, L., “Fast and accurate humanoid robot navigation guided by stereovision,” International Conference on Mechatronics and Automation (ICMA), Jilin, China, pp. 1910-1915, 2009.
    [11] Thobbi, A. and Sheng, W., “Imitation learning of hand gestures and its evaluation for humanoid robots,” IEEE International Conference on Information and Automation (ICIA), Harbin, China, pp. 60-65, 2010.
    [12] Liu, H. Y., Wang, W. J., Wang, R. J., Tung, C. W., Wang, P. J. and Chang, I. P., "Image recognition and force measurement application in the humanoid robot imitation," IEEE Transactions on Instrumentation and Measurement, vol. 61, no. 1, pp. 149-161, Jan. 2012.
    [13] Tarokh, M. and Mikyung, K., “Inverse kinematics of 7-DOF robots and limbs by decomposition and approximation,” IEEE Transactions on Robotics, vol. 23, no. 3, pp. 595-600, 2007.
    [14] Hwang, C. L. and Huang, J. Y., “Neural-network-based 3-D localization and inverse kinematics for target grasping of a humanoid robot by an active stereo vision system,” The 2012 International Joint Conference on Neural Networks (IJCNN), Brisbane, Australia, pp. 1-8, 2012.
    [15] Juang, C. F., Chang, C. M., Wu, J. R., and Lee, D., “Computer vision-based human body segmentation and posture estimation,” IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, vol. 39, no. 1, pp. 119-133, 2009.
    [16] Wu, Q. Z., Cheng, H. Y., and Jeng, B. S., “Motion detection via change-point detection for cumulative histograms of ratio images,” Pattern Recognition Letters, vol. 26, no. 5, pp. 555-563, 2005.
