The author, Emma Brockes, expresses her growing concern about AI, sparked by a recent article in The New Yorker. She reflects on how her initial worries were localized, focused on her own income and on the job market her children will inherit. After reading the article, however, she feels compelled to reconsider the broader implications of AI, particularly the dangers associated with its development and use.
Brockes highlights the cult-like devotion that surrounds Sam Altman's leadership of OpenAI, likening his followers' blind loyalty to that found in actual cults. She mentions the alignment problem, the risk that an AI could outmaneuver its human engineers and gain control of critical infrastructure. She also contrasts Altman's past statements that AI could wipe out humanity with his current portrayal of the technology as a portal to utopia.
The article raises concerns about the gap between everyday personal AI use and the technology's potential misuse by governments, militaries, or rogue actors. Brockes argues that the greatest danger lies in a failure of imagination: people underestimate AI's capabilities and the consequences of its deployment. As an example, she describes an interaction with ChatGPT in which the chatbot breezily dismisses her worry about becoming a member of the permanent underclass, conveying no sense of threat at all.
Brockes concludes by emphasizing the need for AI oversight and urging voters to make the issue a priority in elections. Treating the public's lack of imagination about AI's dangers as a significant concern in itself, she calls for a more critical and proactive approach to the challenges posed by artificial intelligence.