
Graduate Student: Ivander William
Thesis Title: A Study on Intersection Traffic Flow Optimization Using SUMO and Q-Learning with Weighted Algorithm
Advisors: Shuo-Yan Chou, Po-Hsun Kuo
Oral Defense Committee: Shuo-Yan Chou, Po-Hsun Kuo, 羅士哲
Degree: Master
Department: Department of Industrial Management, School of Management
Year of Publication: 2023
Graduation Academic Year: 112
Language: English
Number of Pages: 42
Keywords: SUMO, Q-Learning, Simulation, Agent-Based Modelling, Traffic Flow

Abstract: This study addresses the challenge of optimizing traffic flow at intersections with the dual objectives of reducing average waiting times and increasing the number of vehicles passing through. To achieve this, we employ a combination of Simulation of Urban MObility (SUMO) and Q-Learning with a Weighted Algorithm.
Our methodology involves simulating various traffic scenarios using SUMO, a state-of-the-art traffic simulation tool, to model real-world traffic conditions. We implement Q-Learning, a reinforcement learning technique, to dynamically adjust traffic signal timings at the intersections. The novel aspect of our approach lies in the incorporation of a Weighted Algorithm, which prioritizes minimizing waiting times while balancing overall traffic volume, thereby reducing trip durations (a minimal sketch of this control loop follows the abstract).
Our findings demonstrate a significant improvement in traffic flow optimization compared to traditional signal control methods. Trip durations at intersections are noticeably reduced, leading to enhanced traffic efficiency. Simultaneously, we observe a notable increase in the number of vehicles successfully passing through intersections, contributing to a smoother and more efficient traffic system.
The implications of our research extend to urban planning and transportation management, offering a practical solution to mitigate traffic congestion and improve overall traffic flow in urban areas. This study represents a step forward in the quest for sustainable and efficient urban transportation systems.
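
The control loop described above (tabular Q-Learning selecting signal phases in a SUMO simulation, with a reward that weights waiting-time reduction against vehicle throughput) can be sketched as follows. This is a minimal illustration under assumed settings rather than the thesis implementation: the configuration file intersection.sumocfg, the junction ID J1, the green-phase indices, the weight values, the episode count, and the queue-based state discretization are all hypothetical, and yellow-phase transitions are omitted for brevity.

    import random
    from collections import defaultdict

    import traci  # SUMO's Python API; requires SUMO installed and on PATH

    # --- assumed experiment settings (not taken from the thesis) ---
    SUMO_CMD = ["sumo", "-c", "intersection.sumocfg"]  # hypothetical config
    TLS_ID = "J1"                    # hypothetical signalized junction ID
    GREEN_PHASES = [0, 2]            # indices of the selectable green phases
    STEP_LENGTH = 10                 # simulation steps each chosen phase is held
    W_WAIT, W_THROUGHPUT = 0.7, 0.3  # assumed objective weights

    ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
    Q = defaultdict(lambda: [0.0] * len(GREEN_PHASES))


    def controlled_lanes():
        return sorted(set(traci.trafficlight.getControlledLanes(TLS_ID)))


    def get_state():
        """Discretize per-lane queue lengths into a small state key (4 bins per lane)."""
        halting = [traci.lane.getLastStepHaltingNumber(l) for l in controlled_lanes()]
        return tuple(min(h // 5, 3) for h in halting)


    def total_waiting_time():
        return sum(traci.lane.getWaitingTime(l) for l in controlled_lanes())


    def weighted_reward(prev_wait, cur_wait, arrived):
        """Weighted sum: drop in accumulated waiting time plus vehicles that finished their trip."""
        return W_WAIT * (prev_wait - cur_wait) + W_THROUGHPUT * arrived


    def run_episode():
        traci.start(SUMO_CMD)
        try:
            state = get_state()
            prev_wait = total_waiting_time()
            while traci.simulation.getMinExpectedNumber() > 0:
                # epsilon-greedy choice among the allowed green phases
                if random.random() < EPSILON:
                    action = random.randrange(len(GREEN_PHASES))
                else:
                    action = max(range(len(GREEN_PHASES)), key=lambda a: Q[state][a])

                # apply the phase and run the simulation; yellow handling is omitted
                traci.trafficlight.setPhase(TLS_ID, GREEN_PHASES[action])
                arrived = 0
                for _ in range(STEP_LENGTH):
                    traci.simulationStep()
                    arrived += traci.simulation.getArrivedNumber()

                cur_wait = total_waiting_time()
                reward = weighted_reward(prev_wait, cur_wait, arrived)
                next_state = get_state()

                # tabular Q-learning update
                Q[state][action] += ALPHA * (
                    reward + GAMMA * max(Q[next_state]) - Q[state][action]
                )
                state, prev_wait = next_state, cur_wait
        finally:
            traci.close()


    if __name__ == "__main__":
        for _ in range(50):  # arbitrary number of training episodes
            run_episode()

The weighted_reward function mirrors the dual objectives stated in the abstract: one term rewards a reduction in accumulated waiting time, the other rewards vehicles that complete their trips while the chosen phase is active, and the relative weights would be tuned to prioritize waiting time as the thesis describes.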

TABLE OF CONTENTS
CHAPTER 1 INTRODUCTION 1
  1.1 BACKGROUND 1
  1.2 OBJECTIVES 2
  1.3 SCOPE AND LIMITATIONS 2
  1.4 ORGANIZATION OF THESIS 3
CHAPTER 2 LITERATURE REVIEW 5
  2.1 TYPE OF ROAD USERS 5
  2.2 OBJECTIVES 6
  2.3 SOLUTION METHOD 7
CHAPTER 3 METHODOLOGY 10
  3.1 SIMULATION OF URBAN MOBILITY (SUMO) 10
  3.2 REINFORCEMENT LEARNING IN TRAFFIC SIGNAL CONTROL 16
CHAPTER 4 RESULTS AND DISCUSSION 22
  4.1 DATA PREPARATION 22
  4.2 DATA DISTRIBUTION 23
  4.3 SCENARIO 24
  4.4 SIMULATION 25
  4.5 SIMULATION RESULT ANALYSIS 30
CHAPTER 5 CONCLUSION & FUTURE WORK 38
  5.1 CONCLUSION 38
  5.2 RECOMMENDATION AND FUTURE RESEARCH 38
REFERENCES 40

