Surely it is worth taking a few moments to reflect on OpenAI’s claim that we’ve reached ‘human-level reasoning’ with their o1 series of AI models?
I scrutinize Altman’s Dev Day comments (picking out 4 highlights) and cover the most recent papers and analyses of o1’s capabilities. Then, what the colossal new valuation means for OpenAI, and the context you might not have realized.
We’ll look further down the ‘Levels of AGI’ chart, cover a NotebookLM update, and end with a powerful question about whether we should, ultimately, be aiming to automate OpenAI.
00:00 – Introduction
00:52 – Human-level Problem Solvers?
03:22 – Very Steep Progress + Huge Gap Coming
04:23 – Scientists React
05:44 – SciCode
06:55 – Benchmarks Harder to Make + Mensa
07:30 – Agents
08:36 – For-profit and Funding Blocker
09:45 – AGI Clause + Microsoft Definition
11:23 – Gates Shift
12:43 – NotebookLM Update + Assembly
14:11 – Automating OpenAI