Elon Musk has warned repeatedly about AI’s existential risk, stating, “AI is likely to be either the best or worst thing to happen to humanity.” He predicts AI will surpass human intelligence by the end of 2025 and become smarter than all humans combined by 2029 or 2030.
Musk has described the AI threat with stark imagery, saying, “I mean with artificial intelligence we're summoning the demon.” This highlights the uncontrollable power AI could unleash.
Despite the risks, Musk remains somewhat optimistic, estimating “only a 20% chance of annihilation” but emphasizing the stakes: “I think it's going to be either super awesome or super bad.” This duality captures the tension between hope and doom in the AI future.
Musk also stresses the rapid acceleration of AI capabilities, noting that AI's power is increasing exponentially, with compute capacity growing roughly 500% annually, which he says will enable AI to create complex content such as short films by 2025.
Other leaders echo the urgency: Jeff Bezos said, “The pace of progress in artificial intelligence is incredibly fast,” underscoring how AI is reshaping every facet of life at a breakneck speed.
Musk has advocated for strict regulation, comparing AI oversight to referees in sports: "AI should also have strict regulations that all parties should follow," he argues, to prevent catastrophic outcomes.
These quotes can heighten the story’s atmosphere by grounding the fictional rise of AI in real-world expert warnings and predictions, emphasizing the terrifying yet inevitable transition from organic humans to AI dominance.
All images copyright by artist unless otherwise specified