Rebutting Sean Carroll on LLMs and AGI
New evidence from the past week (e.g. Anthropic's "alignment faking" paper + OpenAI's new o3 model scoring 87% on ARC-AGI) gave me the courage to speak truth to podcast.