
Author: Sugiyanto
Thesis Title: Strategic Stage-Based Control (SSBC): A Novel AI Method for Playing Big Two with Distinct Stages and Strategies
Advisor: Wen-Kai Tai (戴文凱)
Committee: Wen-Kai Tai (戴文凱), Yi-Leh Wu (吳怡樂), Yu-Chi Lai (賴祐吉), Der-Lor Way (魏德樂), Ping-Lin Fan (范丙林)
Degree: Doctor (博士)
Department: Department of Computer Science and Information Engineering, College of Electrical Engineering and Computer Science
Thesis Publication Year: 2024
Graduation Academic Year: 112
Language: English
Pages: 65
Keywords (in Chinese): AI代理、大老二、戰略階段控制
Keywords (in other languages): AI agent, Big Two, Strategic Stage-Based Control
Reference times: Clicks: 103, Downloads: 0

Big Two, a popular shedding-type card game from Asia, is typically played by four players. This research introduces a novel Strategic Stage-Based Control (SSBC) method specifically designed for an AI agent playing Big Two. This SSBC method can identify a winning move and guide strategic decision-making across distinct stages, including the opening, middlegame, and endgame. We propose three crucial features for selecting the optimal game plan: the number of remaining moves, the number of remaining cards, and the game plan score. Experimental results indicate that our strategic AI significantly outperforms randomized AI, conventional AI, rule-based AI, and human players. In addition, given its ability to operate with limited resources, the proposed AI presents a feasible solution for small-scale game studios.
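The abstract names three features for choosing among candidate game plans: the number of remaining moves, the number of remaining cards, and the game plan score. A minimal sketch of how such a selection might look is given below; the `GamePlan` structure, the `select_game_plan` helper, and the priority ordering (fewest moves, then fewest cards, then highest score) are illustrative assumptions, not the dissertation's exact rule.

```python
from dataclasses import dataclass

@dataclass
class GamePlan:
    """A candidate way to shed a hand as a sequence of combinations.

    `combinations` and `score` are hypothetical stand-ins for the
    game plan profile features described in the abstract.
    """
    combinations: list[list[str]]   # e.g. [["3D"], ["4C", "4H"]]
    score: float                    # heuristic strength of the plan

    @property
    def remaining_moves(self) -> int:
        # One move per combination still to be played.
        return len(self.combinations)

    @property
    def remaining_cards(self) -> int:
        # Total cards left across all combinations.
        return sum(len(c) for c in self.combinations)

def select_game_plan(plans: list[GamePlan]) -> GamePlan:
    """Pick a plan by fewest moves, then fewest cards, then highest score."""
    return min(plans, key=lambda p: (p.remaining_moves,
                                     p.remaining_cards,
                                     -p.score))
```

For example, a plan that empties the hand in two moves would be preferred over a three-move plan even if the latter has a higher score, because finishing sooner dominates under this assumed ordering.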

Doctoral Dissertation Recommendation Form i
Qualification Form by Doctoral Degree Examination Committee ii
Abstract in Chinese iii
Abstract in English iv
Acknowledgements v
Contents vi
List of Figures ix
List of Tables xi
List of Algorithms xii
Chapter 1. Introduction 1
1.1. Background and Motivation 1
1.2. Research Goals 2
1.3. Overview of Our Method 2
1.4. Contributions 3
1.5. Chapter Structure of This Dissertation 3
Chapter 2. Related Work 4
2.1. Big Two Card Game 4
2.2. AI Agents 7
2.3. Randomized AI 7
2.4. Conventional AI 8
2.5. Rule-based AI 10
2.6. Comparative Analysis of AI Agents 11
Chapter 3. Method 16
3.1. Defining Three Game Stages 16
3.2. Strategic Stage-Based Control (SSBC) Method 19
3.3. Scoring Combination 21
3.4. Opening Stage 23
3.4.1. Generating Opening Game Plan Profiles 23
3.4.2. Identifying a Winning Opening Move 24
3.4.3. Opening Strategy 26
3.5. Transition (Middlegame or Endgame) Stage 27
3.5.1. Generating Transition Game Plan Profiles 27
3.5.2. Identifying a Winning Move in a Control Position 28
3.5.3. Identifying a Winning Move in a Non-Control Position 30
3.6. Middlegame Stage 33
3.6.1. Middlegame Strategy in a Control Position 33
3.6.2. Middlegame Strategy in a Non-Control Position 35
3.6.3. High-Card Pressure Strategy 38
3.7. Endgame Stage 40
3.7.1. Endgame Strategy in a Control Position 40
3.7.2. Endgame Strategy in a Non-Control Position 41
Chapter 4. Experiment 44
4.1. Experiment Setup 44
4.2. Experimental Results 45
4.3. Experiment 1: Comparing the Performance of Strategic AI with Randomized AI 46
4.4. Experiment 2: Comparing the Performance of Strategic AI with Conventional AI 50
4.5. Experiment 3: Comparing the Performance of Strategic AI with Rule-based AI 52
4.6. Experiment 4: Comparing the Performance of Strategic AI with Human Players 54
Chapter 5. Conclusions and Future Work 59
5.1. Conclusions 59
5.2. Future Work 59
References 61


Full text public date 2034/02/16 (Intranet public)
Full text public date 2122/02/16 (Internet public)
Full text public date 2122/02/16 (National library)