The rapid and widespread adoption of cutting-edge large language models has generated both excitement and concern about advanced artificial intelligence (AI). As a result, many are turning to the nascent field of “AI safety” for solutions. Major AI companies are reportedly investing heavily in this emerging research program, and governments, such as the UK's, are beginning to fund AI safety initiatives. So, why is AI safety essential?
A skeptic might see AI safety as a disguised opportunity for AI companies to profit and to be cast as saviors, rather than perpetrators, of AI-driven harms. However, some groups are already calling for transparency, accountability, and genuine commitment from AI companies to ensure the development of safe AI systems.
Those groups recognize that AI means more than potential job losses and economic inequality; it carries the seeds of a weapon that could destroy humanity. But even if we manage to avert the annihilation of humankind, an AI system with such destructive capability would remain one of the most threatening technologies ever created. Its development must therefore be guided by a far more comprehensive set of values, intentions, and controls.
Ensuring the safety of AI requires a deep understanding of future risks and firm measures to mitigate them. Doing so effectively means pursuing sociotechnical approaches built on the premise that no single group of experts or policymakers, especially not technologists or politicians alone, should hold unilateral authority to decide which risks are significant, which harms matter, and which values AI must adhere to in order to be safe.
Addressing these complex issues requires urgent public discourse: debate and decisions about what AI safety means, which risks it covers, and even whether the “god-like” AI systems that autocratic organizations now dream of building should be permitted at all.
The whytry.ai article you just read is a brief synopsis; the original article can be found here: Read the Full Article…