GPT-4 can self-correct and improve itself. Drawing on exclusive discussions with the lead author of the Reflexion paper, I show how significant this will be across a variety of tasks, and how you can benefit.
I go on to describe an accelerating trend of self-improvement and tool use, outlined by Karpathy, and cover papers such as DERA, Language Models Can Solve Computer Tasks, and TaskMatrix, all released in the last few days.
I also showcase HuggingGPT, a framework that harnesses Hugging Face models and which I argue could be as significant a breakthrough as Reflexion.
I show examples of multi-model use, and even how it might soon be applied to text-to-video and CGI editing (guest-starring Wonder Studio).
I discuss how language models are now generating their own data and feedback, needing far fewer human expert demonstrations.
Ilya Sutskever weighs in, and I end by discussing how AI is even improving its own hardware, and the commercial pressure that has driven Google to upgrade Bard using PaLM.