News Corp Australia reportedly uses generative AI to produce 3,000 news articles every week, a practice that is becoming increasingly common among media corporations worldwide. Large language models like GPT-4 predict language: they produce good guesses, not verified facts, which is why ChatGPT has been likened to an “automated mansplaining machine.” Despite promises of human oversight, mistaking AI-generated writing for journalism raises grave concerns about inaccuracy, misinformation, and low-quality content.
News Corp’s use of AI warrants attention given its extensive Australian audience. Although the AI-generated material appears to be limited to “service information” such as fuel prices and traffic updates, it offers a glimpse of where the industry may be heading. In January, the tech news outlet CNET was exposed for publishing AI-generated articles riddled with errors. Strikes and protests by CNET workers and Hollywood writers highlight concerns about AI-generated writing: they are calling for stronger protections and accountability around AI use, which prompts the question of whether Australian journalists should join the push for AI regulation.
The growing use of generative AI reflects mainstream media organizations’ drive to monetize unregulated information, a strategy underscored by their resistance to crucial reforms that would protect individuals’ privacy online. Traditional outlets, facing declining profits in the digital economy, have begun incorporating AI-generated news content, which exacerbates the problem rather than solving it. If the internet becomes dominated by AI-generated content, new models will be trained largely on the outputs of old ones, leaving us inside an accursed digital ouroboros, eternally devouring its own tail. Making the same point, Jathan Sadowski coined the term “Hapsburg AI,” a digital metaphor for the devastating inbreeding caused by that royal dynasty’s obsession with its own bloodline, which produced grotesque features and hereditary illness.
Research has shown that large language models like ChatGPT deteriorate in quality when trained on AI-generated data rather than original human-made content. Without fresh human data, a self-consuming loop sets in, and content quality progressively declines. Some researchers predict that AI will fill the internet with trivial, repetitive content. Unfortunately, media organizations exacerbate this problem by employing AI to generate copious amounts of content, yet this flood of AI-generated material could inadvertently contribute to AI’s own demise.
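To see why the loop degrades quality, consider a minimal toy simulation (an illustration, not from the original article; the Zipf-shaped vocabulary and corpus sizes are assumptions). Each generation estimates word frequencies from the previous corpus, then produces the next corpus purely from those estimates, standing in for models trained on earlier models’ output:

```python
import numpy as np

rng = np.random.default_rng(42)

# Generation 0: "human data", a long-tailed vocabulary of 1,000 word
# types with Zipf-like frequencies (a few common words, many rare ones).
vocab_size = 1_000
probs = 1.0 / np.arange(1, vocab_size + 1)
probs /= probs.sum()
corpus = rng.choice(vocab_size, size=5_000, p=probs)

for generation in range(1, 11):
    # "Train" on the current corpus: estimate word frequencies from it.
    counts = np.bincount(corpus, minlength=vocab_size)
    estimated = counts / counts.sum()
    # The next corpus is sampled from the model's estimates alone,
    # standing in for an internet dominated by AI-generated text.
    corpus = rng.choice(vocab_size, size=5_000, p=estimated)
    survivors = np.count_nonzero(np.bincount(corpus, minlength=vocab_size))
    print(f"generation {generation:2d}: {survivors} word types remain")
```

Any rare word that happens to go unsampled in one generation gets an estimated probability of zero and can never return, so diversity only ratchets downward, which is exactly the mechanism behind the ouroboros worry above.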
Not all is doom and gloom; many AI applications can benefit the public, such as enhancing information accessibility by quickly transcribing audio content, generating image descriptions, and enabling text-to-speech delivery. These applications are genuinely promising. However, tethering a struggling media industry to generative AI serves Australia poorly. Citizens in rural areas deserve authentic, local reporting, and Australian journalists require protection against the encroachment of AI on their job security. Australia needs a robust, sustainable, and diverse media landscape to hold those in power accountable and keep the public informed.
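As a hypothetical sketch of the accessibility uses mentioned above (the library choices, model size, and file name are assumptions, not from the original article), the snippet below transcribes an audio file with the open-source openai-whisper library and reads text aloud with pyttsx3:

```python
# Assumed dependencies: pip install openai-whisper pyttsx3
import whisper   # open-source speech-to-text model (runs locally)
import pyttsx3   # offline text-to-speech engine


def transcribe(audio_path: str) -> str:
    """Transcribe an audio file to text with a small Whisper model."""
    model = whisper.load_model("base")
    return model.transcribe(audio_path)["text"]


def speak(text: str) -> None:
    """Read text aloud through the system speech synthesizer."""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()


if __name__ == "__main__":
    # "interview.mp3" is a placeholder file name for this example.
    transcript = transcribe("interview.mp3")
    print(transcript)
    speak(transcript)
```

Both libraries run entirely on the user’s machine, so the audio never has to leave their device.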
The whytry.ai article you just read is a brief synopsis; the original article can be found here: Read the Full Article…