Serious Wake-Up Call on AI Safety & Ethics: "AI Could End Humanity" – Dr. Roman Yampolskiy on Diary of a CEO
Hi All,
Following up on the important questions we should all be asking about AI, I just watched a powerful (and quite sobering) interview that dives deep into **AI safety and ethics**.
**Video:**\
**The AI Safety Expert: These Are The Only 5 Jobs That Will Remain In 2030!**\
Dr. Roman Yampolskiy on The Diary Of A CEO
**Length:** \~1 hour 25 minutes\
Dr. Roman Yampolskiy is one of the earliest and most respected voices in AI safety — he literally coined the term “AI safety” back in 2010. In this conversation with Steven Bartlett, he doesn’t hold back:
- Why superintelligent AI could be more dangerous than nuclear weapons
- How we currently have **no reliable way** to control or align advanced AI (it’s mostly a black box)
- The very real risk of AI enabling catastrophic events (engineered viruses, loss of control, etc.)
- Why the race for AGI is moving way faster than safety research
- His strong criticism of companies like OpenAI and the current “move fast and break things” approach
- What happens to society if 99% of jobs disappear in the coming years
- Ethical questions around consent, responsibility, and whether we should even be building superintelligence at all
He makes a compelling case that **AI safety and ethics are not side issues** — they are the most important topics right now. One of his strongest points: we’re running an uncontrolled global experiment on 8 billion people without their consent.
This video directly ties into several of the core questions we talked about earlier:
- How do we make sure AI is safe and aligned with human values?
- Who is accountable when things go wrong?
- Should we slow down or pause certain types of AI development?
- What ethical frameworks should guide AI companies and researchers?
It’s definitely on the more pessimistic side, but it’s backed by someone who has studied these risks for over 15 years. Even if you don’t agree with everything, it forces you to think seriously about the downsides.
Have you watched it yet?\
What's your take on AI safety? Is it an urgent problem we're ignoring?\
Do you think we need stronger regulations, international treaties, or even a temporary pause on frontier AI development?\
And how does this change the way you approach using AI tools day-to-day?
I'd love to hear thoughtful opinions; let's keep the discussion respectful and constructive.
Looking forward to your thoughts! ⚠️