Hate speech, political propaganda, and false information are recurring problems online, especially during election years. Automated social media accounts have long made it easier to spread lies, and generative AI is now making those lies more convincing. A new study published in PNAS Nexus predicts that in 2024, bad actors will use AI to spread toxic content across social media platforms on a near-daily basis, potentially affecting election results in more than 50 countries. Extremist groups that post hate speech tend to survive longer on smaller platforms, but their reposted messages can reach a wider audience on larger ones. In addition, generative AI is lowering the cost of producing misinformation, enabling multimedia falsehoods that are ever more convincing to see and hear.
AI-generated images and videos are easier to detect than text, but identifying AI-generated text remains a significant challenge. AI-detection programs cannot adequately identify AI-generated content, and bad actors are likely to abuse publicly available AI tools. AI-generated disinformation is expected to spread at pandemic-like rates, in part because of how loosely social media platforms are regulated. Identifying the people who create false content may be a more practical approach than going after the content itself, since in certain corners of the Internet there are far fewer bad actors than pieces of bad content. Likewise, moderating content within small social media communities may be more effective than sweeping policies that ban entire categories of content. Despite concerns about AI’s impact on elections, the actual causes and effects of AI-driven disinformation, and how to prevent it, need more research. There is also a fear that overestimating AI’s potential harms may erode trust in truthful sources of information elsewhere on the Internet.
The whytry.ai article you just read is a brief synopsis; the original article can be found here: Read the Full Article…