Every AI lab says AGI is close but they all define AGI differently, so what are they actually claiming?
I have been following AI coverage seriously for about two years. One thing that keeps bothering me is that every time a lab says AGI is near, a critic responds that the lab has redefined AGI to mean something easier to achieve. And honestly, both sides seem to have a point.
OpenAI's definition seems to have shifted over time. DeepMind has its own framing. Anthropic barely uses the term. Independent researchers seem to use it differently again. So when someone says "we are close to AGI," what are they actually claiming? What would AGI need to be able to do that current systems demonstrably cannot? And is there any definition the field broadly agrees on, or is this just a completely contested term?