I read an AI safety report last week and I cannot tell if I should be mildly concerned or genuinely terrified
I spent three hours reading through a report published by a fairly credible AI safety organisation last week. Parts of it were measured and specific. Parts of it read like a sci-fi plot summary. I genuinely could not tell which sections were grounded in current technical reality and which were speculative extrapolation.
The problem is that the people calling AI an existential risk are not random bloggers; some of them built these systems. But the people saying the risk is overstated are also credible researchers. I do not have the technical background to evaluate the arguments myself.
What I want to know is: what are the specific, near-term risks that serious researchers broadly agree on today? Not the 50-year scenarios. Not Terminator. What should someone paying attention actually be watching right now?