The same AI companies that say safety is their top priority keep releasing more powerful models. Am I missing something?
I follow the AI industry fairly closely. Something keeps bothering me and I want to see if others find it as contradictory as I do.
Several of the major AI labs have published detailed safety commitments. They talk about responsible development, about not releasing models until they are safe, about the existential risks of getting this wrong. These are not throwaway PR statements; they are long technical documents written by serious researchers.
And then the same labs release increasingly powerful models every few months in direct competition with each other. The safety teams at some of these companies have had very public internal conflicts. Senior researchers have resigned specifically over safety concerns.
How do I reconcile the stated commitment to safety with the apparent inability to slow down? Is this hypocrisy, genuine complexity, competitive pressure they cannot escape, or something else? I am trying to read it charitably, but that is getting harder.