SmartSnake AI Demo

Watch a neural network learn to play Snake in real time. The AI uses Deep Q-Learning with an 11-input, 256-hidden, 3-output architecture, learning good moves through experience replay and temporal difference updates.
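The stated 11 → 256 → 3 architecture can be sketched as a plain forward pass. This is a minimal illustration, not the demo's actual code: the weight initialization, layer names, and use of ReLU are assumptions; only the layer sizes come from the text.

```python
import random

# Sketch of the 11 -> 256 -> 3 Q-network forward pass.
# Layer sizes match the stated architecture; init and activation
# (ReLU) are illustrative assumptions.
random.seed(0)
W1 = [[random.gauss(0, 0.1) for _ in range(11)] for _ in range(256)]
b1 = [0.0] * 256
W2 = [[random.gauss(0, 0.1) for _ in range(256)] for _ in range(3)]
b2 = [0.0] * 3

def q_values(state):
    """Map an 11-dim state to Q-values for [straight, right, left]."""
    hidden = [max(0.0, sum(w * s for w, s in zip(row, state)) + b)
              for row, b in zip(W1, b1)]          # ReLU hidden layer
    return [sum(w * h for w, h in zip(row, hidden)) + b
            for row, b in zip(W2, b2)]

print(len(q_values([0.0] * 11)))  # 3
```

The action with the highest Q-value is the move the agent takes when it is not exploring.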

How it works

State: 11 inputs (3 danger sensors, 4 direction flags, 4 food direction flags)
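A hedged sketch of how those 11 binary features could be packed into a state vector. The helper name, argument names, and flag ordering within each group are assumptions; the three groups (danger, heading, food direction) come from the text.

```python
# Illustrative encoding of the 11-value state; exact conventions
# are assumptions, only the 3 + 4 + 4 grouping is from the text.
def encode_state(danger_straight, danger_right, danger_left,
                 moving_dir, food_left, food_right, food_up, food_down):
    """3 danger sensors, 4 heading flags, 4 food-direction flags."""
    heading = [int(moving_dir == d) for d in ("left", "right", "up", "down")]
    return ([int(danger_straight), int(danger_right), int(danger_left)]
            + heading
            + [int(food_left), int(food_right), int(food_up), int(food_down)])

state = encode_state(False, True, False, "up",
                     food_left=1, food_right=0, food_up=1, food_down=0)
print(state)  # [0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0]
```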

Network: 11 → 256 → 3 (straight, right, left)
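The three outputs are moves relative to the snake's current heading, so they have to be mapped back to an absolute direction. A minimal sketch, assuming a clockwise turn order (the mapping itself is not specified in the text):

```python
# Map relative actions (0 = straight, 1 = right, 2 = left) to an
# absolute heading; the clockwise ordering is an assumption.
CLOCKWISE = ["up", "right", "down", "left"]

def apply_action(heading, action):
    i = CLOCKWISE.index(heading)
    if action == 1:          # turn right
        i = (i + 1) % 4
    elif action == 2:        # turn left
        i = (i - 1) % 4
    return CLOCKWISE[i]      # action 0 keeps the heading

print(apply_action("up", 1))  # right
```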

Learning: Deep Q-Learning with experience replay (100K memory, 1K batch)
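Experience replay with the stated capacities can be sketched as a bounded buffer plus random batch sampling. The transition tuple layout and function names are assumptions; the 100K memory and 1K batch sizes come from the text.

```python
import random
from collections import deque

# Replay buffer sketch: 100_000-transition memory, 1_000-step batches.
MEMORY_SIZE, BATCH_SIZE = 100_000, 1_000
memory = deque(maxlen=MEMORY_SIZE)  # oldest transitions are evicted

def remember(state, action, reward, next_state, done):
    memory.append((state, action, reward, next_state, done))

def sample_batch():
    """Draw a uniform random batch; use all of memory when it is small."""
    k = min(BATCH_SIZE, len(memory))
    return random.sample(memory, k)

for _ in range(5):
    remember([0] * 11, 0, 1.0, [0] * 11, False)
print(len(sample_batch()))  # 5
```

Sampling uniformly from a large buffer breaks the correlation between consecutive frames, which is what makes the Q-learning updates stable.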

Exploration: ε-greedy decay (40% → 0% over 80 games)
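The exploration schedule above can be written as a one-line function. The endpoints (40% at game 0, 0% at game 80) come from the text; the linear shape of the decay is an assumption.

```python
# epsilon-greedy schedule sketch: linear decay from 0.40 to 0.0
# over the first 80 games (linearity is an assumption).
def epsilon(games_played):
    return max(0.0, 0.40 * (1 - games_played / 80))

print(epsilon(0), epsilon(40), epsilon(80))  # 0.4 0.2 0.0
```

With probability ε the agent picks a random move; otherwise it takes the action with the highest Q-value, so exploration fades out as the network's estimates improve.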