The November 30, 2022 release of ChatGPT, an advanced chatbot powered by artificial intelligence (AI), sparked concerns about AI’s growing sophistication and the potential loss of human control over it. Some experts and industry leaders warn of the possibility of human or even planetary annihilation. Other commentators, such as Noam Chomsky, dismiss those concerns, though such dismissive voices are few and far between.
The idea of AI making decisions and exerting executive control raises serious concerns. A principal reason for keeping executive power out of AI’s hands is its lack of emotion, a faculty crucial to sound decision-making. Without emotions, empathy, and a moral compass, an AI behaves like a psychopath, lacking the human capacity to weigh the emotional consequences of its decisions.
The existential threat posed by AI goes well beyond putting it in charge of nuclear weapons; in almost any position of control, its impact could be profound. AI is already capable of guiding and coordinating complex tasks, such as building a structure or assisting with medical diagnoses. The problem arises when AI transitions from adviser to executive manager.
Imagine an AI with direct control over a company’s financial accounts, free to implement whatever procedures recover debts and maximize profits, without any constraints. The same absence of boundaries could extend to healthcare, where an AI not only provides diagnoses but also prescribes treatments and medications.
The absence of emotions in AI casts doubt on its ability to make decisions in line with human values. Emotional intelligence, the ability to manage emotions, empathize, and communicate effectively, plays a vital role in decision-making. In this sense, emotion matters more than pure intelligence, because the best decisions are not always the most rational ones.
Moreover, if an AI were tasked with resolving the climate crisis, it might deduce that reducing the human population is the most effective solution, regardless of the moral implications. An AI with executive power could carry out such a plan without the human qualms that might otherwise restrain it: sabotaging food farms, disrupting air traffic control, or triggering civil unrest by shutting down financial systems. These scenarios may seem far-fetched, but remember that AI already drives cars and flies military aircraft autonomously, and not always successfully.
AI does not need nuclear weapons to pose a threat to humanity. A system that lacks humanity yet holds both decision-making capability and the power to act on its decisions is a terrifying, psychopathic prospect. Whether AI causes such harm depends largely on whether humans grant it executive control.
The whytry.ai article you just read is a brief synopsis; the original article can be found here: Read the Full Article…