Learning in Robot Football
Abstract
RoboCup has been running for more than ten years with an ambitious founding goal: "by the year 2050, develop a team of fully autonomous humanoid robots that can win against the human world soccer champion team". Work on individual learning for playing football has produced some positive results, but robot players are still far below human level: soccer robots remain clumsy and inefficient. Improving the efficiency of individual robots through learning therefore still needs to be studied widely, and this is the aim of this project.
Key takeaways
- RoboCup aims for autonomous robots to outperform human soccer champions by 2050.
- Key individual skills to improve include dribbling, passing, and shooting, to be addressed with AI techniques.
- Q-learning, a reinforcement learning method, is used to improve robot performance.
- Proposed rewards prioritize reaching the ball, proximity to the opponent's goal, and shooting angles (see the sketch after this list).
- The research timeline estimates 12 weeks for implementing the improvements before the RoboCup 2009 competition.
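To make the Q-learning approach and the shaped reward concrete, here is a minimal tabular Q-learning sketch in Python. It is not the project's implementation: the 1-D pitch, the cell positions, the action set, and the reward weights are all illustrative assumptions, intended only to show how a reward that favours reaching the ball and approaching the opponent's goal plugs into the standard Q-learning update.

```python
import random
from collections import defaultdict

# Hypothetical simplified pitch: the robot moves along cells 0..N-1,
# the ball sits at BALL, and the opponent's goal is at cell N-1.
# State = (robot_cell, has_ball); actions = move left / move right.
N, BALL, GOAL = 10, 4, 9
ACTIONS = (-1, +1)

ALPHA, GAMMA, EPSILON, EPISODES = 0.1, 0.9, 0.1, 2000


def step(state, action):
    """One environment step with a shaped reward in the spirit of the
    takeaways: reward reaching the ball, then reward goal proximity."""
    cell, has_ball = state
    new_cell = min(max(cell + action, 0), N - 1)
    reward = -0.05                          # small step cost to encourage efficiency
    if not has_ball and new_cell == BALL:
        has_ball = True
        reward += 1.0                       # bonus for reaching the ball
    if has_ball:
        reward += 0.1 * (1 - abs(GOAL - new_cell) / N)  # closer to goal -> more reward
    done = has_ball and new_cell == GOAL
    if done:
        reward += 5.0                       # reached a scoring position with the ball
    return (new_cell, has_ball), reward, done


Q = defaultdict(float)                      # tabular Q-values, default 0.0


def choose(state):
    """Epsilon-greedy action selection over the tabular Q-function."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])


for _ in range(EPISODES):
    state, done = (0, False), False
    while not done:
        action = choose(state)
        nxt, reward, done = step(state, action)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

print("Greedy action from the kick-off cell:", max(ACTIONS, key=lambda a: Q[((0, False), a)]))
```

In a real soccer robot the state would include continuous positions and orientations and the reward would also weigh shooting angle, but the update rule and the role of the shaped reward are the same as in this toy setting.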