In the months to come, the creation of artificial intelligence (AI) laws in the United States may be driven more by the courts than by politicians. A defining feature of American AI policymaking is how little appetite politicians have for enacting AI-specific laws. Between a divided Congress and extensive lobbying by technology companies, such legislation is unlikely to pass anytime soon. Some AI observers believe that the most publicized attempt at creating new AI regulations, Senator Chuck Schumer’s SAFE Innovation framework, lacks specific policy recommendations and will sit on the sidelines for a long time before becoming law.
As an alternative, others suggest that applying existing privacy and copyright laws in court is a more pragmatic approach. Those laws have already given individuals grounds to claim violations by AI companies. Comedian and author Sarah Silverman filed a lawsuit against OpenAI and Meta, alleging that her copyrighted material was unlawfully scraped from the internet to train their AI models. Artists in a class action lawsuit have made similar claims, stating that popular image-generation AI software used their copyrighted images without permission. Microsoft, OpenAI, and GitHub are also facing a class action lawsuit over Copilot, their AI-assisted programming tool, for its alleged use of programming code scraped from websites, which some have described as “software piracy on an unprecedented scale.”
Copyright infringement is not the only concern: the Federal Trade Commission (FTC) is also investigating OpenAI’s data security and privacy practices. The investigation focuses on potential harm to consumers and their reputations arising from the training of AI models, a risk compounded by AI language models’ tendency to produce inaccurate and defamatory content. Marc Rotenberg, president and founder of the Center for AI and Digital Policy, highlights the FTC’s authority to enforce industry standards, introduce improved business practices, and take companies to court.
Other government enforcement agencies are expected to initiate their own investigations into potentially illegal AI practices. The Consumer Financial Protection Bureau is examining AI chatbots in the banking sector. Furthermore, the Federal Election Commission may investigate the role of generative AI in the upcoming 2024 US presidential election.
The approach taken by the United States in regulating new technologies differs from that of other Western countries. While the European Union proactively aims to prevent harmful AI consequences, the US adopts a reactive stance, waiting for harms to occur before implementing regulations. This approach encourages innovation and allows creators and inventors the freedom to explore groundbreaking solutions.
The class action lawsuits related to copyright and privacy issues have the potential to shed light on the inner workings of black-box AI algorithms and to establish mechanisms for compensating artists and authors whose content is used in AI models. Because generative AI models rely heavily on vast datasets scraped from the internet, copyrighted material inevitably becomes part of the training data. Authors, artists, and programmers argue that technology companies should seek consent and provide compensation when using their intellectual property. The outcome of these class action lawsuits will test the validity of these opposing viewpoints.
As AI-related issues continue to arise, the need for legal expertise in addressing these matters is expected to increase significantly. MIT Technology Review experts predict that technology companies may face further litigation over privacy concerns, particularly involving biometric data such as facial images and voice clips. Already, Prisma Labs is confronting a class action lawsuit over the collection of users’ biometric data for its Lensa AI avatar program. Additionally, AI product liability will become a significant issue as companies are sued for malfunctions and misinformation generated by their AI models.
Litigation has proven effective in inducing social change in the past. There is every reason to believe that ongoing legal actions, along with improved understanding and documentation, will prompt technology companies to alter their practices when constructing and utilizing AI models.
The whytry.ai article you just read is a brief synopsis; the original article can be found here: Read the Full Article…