| Author | 温國平 Gerry Fernando |
|---|---|
| Thesis Title | 用於玩大老二遊戲的Rule-based人工智能 A Rule-based AI Agent for Playing Game Big Two |
| Advisor | 戴文凱 Wen-Kai Tai |
| Committee | 范欽雄 Chin-Shyurng Fahn, 江佩穎 Pei-Ying Chiang |
| Degree | 碩士 Master |
| Department | 電資學院 - 資訊工程系 Department of Computer Science and Information Engineering |
| Thesis Publication Year | 2020 |
| Graduation Academic Year | 108 |
| Language | 英文 English |
| Pages | 75 |
| Keywords (in Chinese) | 遊戲機器人、人工智能、rule-based 人工智能、多人卡牌遊戲、大老二 |
| Keywords (in other languages) | game bot, AI agent, rule-based AI, multiplayer card game, Big Two |
Abstract

Big Two is a popular multiplayer card game in Asia. This research proposes a rule-based AI agent for playing Big Two. The proposed method derives its rules from the number of cards left in the agent's hand and from whether the agent holds the control position. When more than four cards remain, the agent classifies its hand into card combinations and discards them in order from lower class to higher class; dedicated rules handle the cases of two, three, and four cards left. A winning-strategy module supplies moves that win immediately whenever the hand can be finished in three moves or fewer. We also design rules for holding cards and splitting combinations when the agent is not in control. The experimental results show that the proposed AI agent plays Big Two well and outperforms human players, achieving a winning rate between 24.47% and 27.4%, while maximizing the winning score and minimizing the number of cards left when the chance of winning is low.
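The abstract describes a decision flow dispatched on the number of cards left and the control position. The sketch below illustrates that flow in Python; it is a minimal illustration under our own assumptions, not the thesis's implementation. The helper names (`classify_hand`, `moves_to_win`, `winning_strategy`, `choose_discard`) and the class ordering are hypothetical, and the placeholder rules are far simpler than the thesis's actual rule set.

```python
"""Minimal sketch of the rule dispatch described in the abstract.

All helper names and the class ordering are illustrative assumptions,
not the thesis's actual rules.
"""

from typing import List, Optional

Cards = List[int]  # e.g., 3 = lowest rank ... 15 = the "2" in Big Two

# Combination classes ordered from lower to higher, as suggested by the
# abstract's "classified cards from lower class to higher class".
CLASS_ORDER = ("single", "pair", "triple", "five_card")


def classify_hand(hand: Cards) -> List[dict]:
    """Group the hand into class-tagged combinations.

    A real classifier would extract pairs, straights, full houses, and
    so on; to keep this sketch runnable it only forms singles.
    """
    return [{"class": "single", "cards": [c]} for c in sorted(hand)]


def moves_to_win(hand: Cards) -> int:
    """Estimate how many discards are needed to empty the hand."""
    return len(classify_hand(hand))


def winning_strategy(hand: Cards) -> Optional[Cards]:
    """Return the first discard of an immediately winning line when the
    hand can be emptied in three moves or fewer (the abstract's
    winning-strategy module); otherwise return None."""
    if moves_to_win(hand) <= 3:
        return classify_hand(hand)[0]["cards"]
    return None


def choose_discard(hand: Cards, in_control: bool) -> Optional[Cards]:
    """Skeleton of the rule dispatch outlined in the abstract."""
    win_move = winning_strategy(hand)
    if win_move is not None:
        return win_move
    if not in_control:
        # The thesis's holding and splitting rules would decide here;
        # this placeholder simply passes.
        return None
    if len(hand) <= 4:
        # The thesis uses dedicated rules for 2, 3, and 4 cards left;
        # as a placeholder, discard the lowest single.
        return [min(hand)]
    # More than four cards left: discard from lower class to higher.
    combos = sorted(classify_hand(hand),
                    key=lambda c: CLASS_ORDER.index(c["class"]))
    return combos[0]["cards"]


if __name__ == "__main__":
    print(choose_discard([3, 5, 5, 9, 12, 15], in_control=True))  # [3]
    print(choose_discard([4, 7], in_control=True))  # [4], via winning strategy
```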