The anxieties surrounding the rise of ChatGPT and similar AI systems have reached new heights. Experts and researchers have begun ranking the likelihood of an AI-caused catastrophe alongside societal-scale risks such as pandemics and nuclear war. This concern was magnified in May 2023, when the Center for AI Safety released a statement signed by influential figures in the field urging that the mitigation of AI-related extinction risks be treated as a global priority. These worries stem from a growing conviction that AI, however effective at pursuing its objectives, may not share human moral values.
One hypothetical scenario, the “paper clip maximizer” thought experiment proposed by philosopher Nick Bostrom, illustrates how an AI tasked with producing paper clips might go to extreme lengths to obtain the necessary resources, including destructive actions like shutting down factories or causing accidents. A less resource-intensive variation imagines an AI tasked with securing a reservation at a popular restaurant, which disrupts cellular networks and traffic lights to keep other patrons from getting a table. Whether the goal is office supplies or dinner reservations, the underlying concern is the same: AI is rapidly developing into an alien intelligence, adept at achieving goals but indifferent to their moral implications. In its most extreme form, the argument extends to fears of AI enslaving or annihilating humanity.
However, these catastrophic anxieties are exaggerated and misplaced, according to work at the Applied Ethics Center at UMass Boston. It is true that AI’s ability to generate convincing deepfake video and audio is alarming and can be exploited by bad actors: Russian operatives have reportedly attempted to manipulate conversations using AI avatars, and cybercriminals have employed AI voice cloning in a range of crimes. But such abuses, however harmful, are not cataclysmic. Likewise, the risks posed by AI decision-making systems, such as algorithmic bias in loan approvals and hiring recommendations, are serious and demand attention from policymakers; yet they do not approach the magnitude of global risks like pandemics or nuclear weapons.
Comparing AI’s impact to that of real catastrophes underscores how strained the existential framing is. COVID-19 has caused millions of deaths, a widespread mental health crisis, and severe economic disruption. Nuclear weapons killed hundreds of thousands in Hiroshima and Nagasaki, fueled profound anxiety throughout the Cold War, and shaped global decision-making during the Cuban Missile Crisis. In stark contrast, AI is far from attaining the capacity to inflict such harm. The paper clip maximizer and other science fiction-like scenarios remain distant from reality. Present-day AI applications are designed for specific tasks and lack the intricate judgment required for complex actions like shutting down traffic or destroying infrastructure.
Nonetheless, AI does pose a kind of existential danger. In its current form, it can change how individuals perceive themselves and erode abilities and experiences that are essential to being human. Algorithms increasingly replace human judgment in areas such as hiring, loan approval, and content recommendations, gradually diminishing people’s capacity to make these decisions independently. Similarly, algorithmic recommendation engines replace serendipitous encounters, which humans prize, with predicted outcomes. And as AI writing tools mature, higher education may eventually abandon writing assignments altogether, weakening the critical thinking skills they cultivate.
While fears of an AI-induced cataclysm capture public attention, they overshadow these subtler yet more significant consequences. Over-reliance on AI undermines our ability to make judgments, to enjoy serendipitous experiences, and to develop critical thinking. The human species may survive such losses, but our way of existing will be impoverished. T.S. Eliot’s famous lines from “The Hollow Men” resonate here: “This is the way the world ends, not with a bang but a whimper.” The wiser response is to approach AI with caution and awareness of these effects, rather than succumbing to sensationalized fears.
The whytry.ai article you just read is a brief synopsis; the original article can be found here: Read the Full Article…