
Author: Gerry Fernando (温國平)
Thesis Title: A Rule-based AI Agent for Playing Game Big Two (用於玩大老二遊戲的 Rule-based 人工智能)
Advisor: Wen-Kai Tai (戴文凱)
Committee: Chin-Shyurng Fahn (范欽雄), Pei-Ying Chiang
Degree: Master
Department: Department of Computer Science and Information Engineering, College of Electrical Engineering and Computer Science
Thesis Publication Year: 2020
Graduation Academic Year: 108
Language: English
Pages: 75
Keywords (in Chinese): 遊戲機器人, 人工智能, rule-based 人工智能, 多人卡牌遊戲, 大老二
Keywords (in other languages): game bot, AI agent, rule-based AI, multiplayer card game, Big Two
Reference times: Clicks: 403, Downloads: 1

大老二是一個在亞洲盛行的多人卡牌遊戲。本研究提出一個以 rule-based 為基礎的人工智能來玩大老二。此方法以剩餘的手牌張數和控制回合為基礎來發展出規則。當手中的牌多於四張時,會從已經分過層級的手牌中由低層級至高層級的順序來打出。而當人工智能手中的牌剩二到四張時,會另外使用相對應的規則。當致勝的步數小於等於三時,會有致勝策略模組提出能馬上獲勝的方針。我們也為人工智能在沒有控制權時,設計了是否要保留牌和拆牌的規則。實驗結果顯示我們所研發的人工智能可以把大老二玩得很好,甚至超越人類玩家,在實驗中人工智能的勝率在 24.47% 到 27.4% 之間,並且有使得獲勝得分最大化和在局勢不好時使得剩餘卡牌張數最小化的機制。

Big Two is a popular multiplayer card game in Asia. This research proposes a rule-based AI agent for playing Big Two. The proposed method derives rules based on the number of cards left and on whether the agent holds the control position. When two, three, or four cards remain in the agent's hand, dedicated rules for each case select which card combination to discard. When more than four cards remain, the rules prioritize discarding combinations from the classified hand in order from lower class to higher class. A winning-strategy module provides guidelines to win immediately once the number of moves needed to win is less than or equal to three. We also design rules for holding cards and splitting cards when the agent does not have control. The experimental results show that the proposed AI agent plays Big Two well and outperforms human players, achieving a winning rate between 24.47% and 27.4%, with mechanisms to maximize the winning score and to minimize the number of cards left when the chance of winning is low.
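The dispatch described in the abstract — rules keyed to the number of cards left, a winning-strategy module when three or fewer winning moves remain, and hold-back/split-card rules when the agent lacks control — can be sketched as follows. This is a minimal illustration only; all function names, the integer card representation, and the placeholder rule bodies are assumptions, not the thesis's actual implementation:

```python
# Illustrative sketch of the rule dispatch from the abstract.
# Cards are modeled as plain integers (higher = higher class); the
# real agent classifies full card combinations, which is omitted here.

def classify(hand):
    """Placeholder classification: order cards from lower to higher class."""
    return sorted(hand)

def choose_move(hand, has_control, winning_moves=None):
    """Return (rule_name, card) following the priority order in the abstract."""
    # Winning strategy fires first when a win is at most three moves away.
    if winning_moves is not None and winning_moves <= 3:
        return ("winning_strategy", classify(hand)[-1])
    # Without control, hold-back / split-card rules decide whether to pass.
    if not has_control:
        return ("hold_or_split", None)
    n = len(hand)
    if n == 2:
        return ("two_left", classify(hand)[0])
    if n == 3:
        return ("three_left", classify(hand)[0])
    if n == 4:
        return ("four_left", classify(hand)[0])
    # More than four cards left: discard from lower class to higher class.
    return ("more_than_four", classify(hand)[0])
```

A caller would invoke `choose_move` once per turn, e.g. `choose_move([3, 5, 7, 9, 11], has_control=True)` selects the lowest-class card under the more-than-four rule.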

List of Contents
Abstract in Chinese
Abstract
Acknowledgements
List of Contents
List of Figures
List of Tables
List of Algorithms
1 Introduction
  1.1 Background and Motivation
  1.2 Objectives and Hypothesis
  1.3 Proposed Method
  1.4 Contributions
  1.5 Thesis Structure
2 Related Works
  2.1 Perfect and Imperfect Information Games
  2.2 Rule-based AI on Poker
  2.3 Reinforcement Learning on the Big Two Game
  2.4 Rule-based AI on Big Two
  2.5 Model Comparison
3 Proposed Method
  3.1 Card Classification
  3.2 Rules for Two Cards Left
  3.3 Rules for Three Cards Left
  3.4 Rules for Four Cards Left
  3.5 Rules for More Than Four Cards Left
  3.6 Winning Strategy
  3.7 Rules for the Agent without Control
  3.8 Additional Functions for Five-Card Combinations
4 Experimental Results and Analysis
  4.1 Test Cases for Rules
    4.1.1 Two Cards Left
    4.1.2 Three Cards Left
    4.1.3 Four Cards Left
    4.1.4 Winning Strategy
    4.1.5 Hold-Back Function
    4.1.6 Split-Card Function
    4.1.7 Best Five-Card Function
  4.2 Experiment
  4.3 Execution Time
  4.4 User Study and Analysis
  4.5 Loss Analysis
5 Conclusion and Future Work
  5.1 Conclusion
  5.2 Future Work
References
