Why Andrej Karpathy Is Probably Wrong About AGI
Compute, not intuition, should guide AGI timelines
What blackmail behaviors reveal about the nature of LLMs
We need safety designs that assume full access to model weights
You may be surprised at their stance on the AGI arms race
No fanfare, no papers—just abrupt capability shifts
Evolution is a population-level reinforcement learner
A meditation on the culmination of the computer revolution
Dzmitry Bahdanau explains how he arrived at the idea