trex-DQN

What if an AI could learn to play a game based solely on visual input, the way humans do?

Most RL projects rely on internal game variables (like player position or object distance), but in the real world, that data isn't always available. This project started from a simple curiosity:

Can an AI learn just from visual input, like humans do?

🌟 Design Rationale

🎯 Reward Mechanism

  • Penalty on game over: discourages crashing into obstacles.
  • Small positive reward for each frame survived: encourages long-term survival (see the sketch below).

🖼️ Visual Preprocessing

  • Edge detection: strips out background clutter.
  • Removal of unnecessary elements (score counter, clouds, etc.) to further emphasize the essential ones (see the sketch below).

🧠 AI Model & Training

  • CNN: ideal for extracting spatial patterns from image input (model sketch below).
  • Experience Replay: enables stable training by sampling from diverse past experiences (buffer sketch below).

👉 Check out the full code and setup on GitHub

AI in Action