Major media organizations are calling for greater transparency in how generative AI models are trained and used. In an open letter to policymakers, they request government involvement in creating standards for artificial intelligence, particularly around intellectual property rights. The letter warns that irresponsible use of AI technology could undermine the credibility and quality of media content, eroding public trust.
The authors call for a legal framework to guide the responsible development of generative AI, establish protections for data, and preserve trust in the media. The letter also outlines priorities for regulating AI, including transparency about training sets, consent from rights holders, and negotiation between media groups and AI companies.
Media companies and artists have already filed a number of copyright infringement lawsuits against AI developers. There has also been collaboration between the two sides, as seen in the agreement between OpenAI and The Associated Press. The letter further calls for clear labeling of AI-generated products and for effective measures to address bias and misinformation.
Unchecked use of generative AI can have far-reaching negative consequences, including fake reviews, disinformation, surveillance, discrimination, job losses, and broader disruption to governments and societies. Signatories include the European Publishers’ Council and international media organizations such as Agence France-Presse, European Pressphoto Agency, Gannett | USA TODAY Network, Getty Images, National Press Photographers Association, National Writers Union, News Media Alliance, The Associated Press, and the Authors Guild.
This whytry.ai article is a brief synopsis; the original article is available here: Read the Full Article…