T-Rex Run, or the dinosaur game, is the endless runner everyone knows as Chrome's "easter egg" when the connection goes down. It is a simple infinite runner in which you either jump or duck to avoid obstacles. The controls are really simple, but the idea of hiding this kind of "kill time" game in a browser is intriguing. The game is a perfect environment for an AI trained with Deep Reinforcement Learning. Let's dive into my approach.
Environment and AI
I used the game directly in the browser as the environment rather than rebuilding it myself. I use pyscreenshot to capture the game window and pyautogui to send actions to the game. The actions the agent can take are long jump, duck, and do nothing. To implement DQN we have to define rewards and punishments; here is what I do:
- Punish the agent when the "GAME OVER" logo is detected.
- Grant a small reward for each action taken while the run continues.
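The two rules above can be sketched as a per-step reward function. The function name and the exact reward values here are my own illustration, not taken from the original code:

```python
# Hypothetical reward scheme: a small positive reward for every action
# taken while the run continues, and a penalty when "GAME OVER" appears.
def compute_reward(game_over: bool, step_reward: float = 0.1, penalty: float = -1.0) -> float:
    """Return the reward for one environment step."""
    return penalty if game_over else step_reward
```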
Before feeding frames to the network, I preprocess the screen with edge detection to strip out unnecessary features.
Edge detection using OpenCV
I feed the processed image into a CNN whose outputs represent the actions. I also use the experience replay algorithm so the agent can learn the environment better.
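The core of experience replay is a fixed-size buffer of transitions that the agent samples from uniformly, which breaks the correlation between consecutive frames. A minimal sketch (class and parameter names are my own, not from the project's code):

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size store of (state, action, reward, next_state, done)
    transitions for off-policy DQN training."""

    def __init__(self, capacity: int = 10_000):
        # deque with maxlen silently evicts the oldest transition when full
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size: int):
        # uniform sampling decorrelates the training batch
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)
```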
Result
Here is the result after 6 hours of training:
Average number of actions executed per run
Requirement
- opencv 3.4.3.18
- pyautogui 0.9.38
- pyscreenshot 0.4.2
- pytorch 0.4.1
How to run
Step 1 : Open t-rex run game
Step 2 : Change bbox in t_rex_env.py to fit your game window
Step 3 : Run t_rex_env.py
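The bbox passed to pyscreenshot's `grab` is a `(left, top, right, bottom)` tuple of screen-pixel coordinates. The values below are placeholders, not the ones shipped in t_rex_env.py; replace them with your own window's position:

```python
import pyscreenshot

# Hypothetical coordinates -- measure your own game window and adjust.
BBOX = (100, 150, 900, 450)  # (left, top, right, bottom) in screen pixels

frame = pyscreenshot.grab(bbox=BBOX)  # captures only the game region
```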