Artificial intelligence (AI) has advanced rapidly, raising concerns about its implications. In the short term, worries include toxic language and disinformation from chatbots, discriminatory automated decision-making, and a lack of transparency in AI systems. Longer-term concerns include job displacement and the existential risk of creating superintelligent AI systems. Various countries, including the European Union (EU), China, Brazil, Canada, and the United States, are working on AI regulation, with the EU taking a more interventionist approach than the laissez-faire stance of the UK.
Interestingly, some technology companies are themselves advocating for regulation. OpenAI’s CEO, Sam Altman, believes AI regulation is necessary, and Google’s CEO, Sundar Pichai, has emphasized the need for global frameworks. Not everyone agrees, however: organizations like the Center for Data Innovation support the approach taken by the UK and India of applying existing regulations to AI-related issues.
In April 2021, the EU proposed the AI Act, which classifies AI applications by risk level and would ban or strictly oversee high-risk uses. The European Parliament recently passed a draft of the law, and negotiations over amendments are underway. The legislation is expected to take effect two years after a final agreement is reached.
China has already implemented regulations, beginning in March 2022 with rules for recommendation algorithms that require transparency and let citizens opt out. In January 2023, the Chinese government issued initial rules for generative AI, followed by proposed draft rules in April 2023. The focus is on public-facing algorithms, reflecting concerns about their power to shape societal views. The draft rules also require AI companies to verify the data used to train their models, which poses technical challenges and potential costs.
In the United States, progress on AI regulation has been slower. A national law proposed last year made no headway, and in October 2022 the White House issued a nonbinding Blueprint for an AI Bill of Rights, highlighting civil-rights concerns such as algorithmic discrimination and privacy intrusion. Federal agencies are establishing regulations within their own domains, but there is broad consensus that legislative action is needed; Senate Majority Leader Chuck Schumer has circulated a draft framework for AI regulation.
The challenge for AI companies operating globally is complying with a patchwork of local regulations unless international agreements are reached. The G7 nations have begun discussing AI governance, and European officials have proposed a voluntary AI code of conduct. Given the pace of AI development, time is of the essence.
The whytry.ai article you just read is a brief synopsis; the original article can be found here: Read the Full Article…