Artificial intelligence (AI) tools like ChatGPT have the potential to greatly amplify consumer harms such as fraud and scams. According to members of the Federal Trade Commission (FTC), the US government already possesses significant power under existing law to combat these AI-driven harms.
During a hearing before House lawmakers, FTC Chair Lina Khan expressed serious concern about the potential for AI tools to amplify fraud and scams.
Recently, there has been a surge in attention towards AI tools capable of generating convincing content, including emails, stories, essays, images, audio, and videos. While these tools offer transformative possibilities for productivity and creativity, there are concerns regarding their misuse for impersonation and deception.
While policymakers continue to debate whether new regulations are needed for AI-specific issues such as algorithmic discrimination and privacy, companies can already face FTC investigations under existing laws. Khan and her fellow commissioners emphasized the agency's responsibility to adapt enforcement to evolving technologies, and not to be deterred by framing AI as something revolutionary and therefore beyond current rules.
FTC Commissioner Alvaro Bedoya emphasized that companies cannot evade accountability by treating their algorithms as impenetrable black boxes. He reiterated that existing laws, including those related to unfair and deceptive practices, civil rights, fair credit, and the Equal Credit Opportunity Act, are applicable, and companies must adhere to them.
The FTC has previously offered comprehensive guidance to AI companies, and recently received a request to investigate OpenAI based on allegations that the creators of ChatGPT misled consumers about its capabilities and limitations.
The whytry.ai article you just read is a brief synopsis of the original article.