This week, OpenAI, Anthropic, Google, Inflection, Microsoft, Meta, and Amazon made voluntary commitments to President Biden to pursue AI safety, privacy, and transparency goals. These commitments cover security testing, information sharing, and investments in cybersecurity. OpenAI went further, drafting a policy memo that advocates government licenses for advanced AI systems. That approach, however, could put large labs at odds with startups and open-source developers, who may lack the resources to obtain licenses. It is concerning that the AI industry is appeasing regulators while simultaneously shaping policy to its own advantage. Policymakers must step in to ensure adequate safeguards rather than letting the private sector write the rules.
This week has been a whirlwind of AI news. OpenAI made personnel changes and introduced custom instructions for ChatGPT. Google advanced a news-writing AI, while Apple is reportedly developing a chatbot similar to ChatGPT. Meta released its Llama 2 models, and authors voiced opposition to generative AI trained on their work. Microsoft introduced Bing Chat Enterprise, a business-focused chatbot that prioritizes data privacy. In research news, Fable Studios showcased an AI system capable of writing, directing, acting in, and editing an entire TV episode, prompting concern and debate about AI's role in media production. Policy amendments on detecting AI-generated content were also drafted, and Disney Research presented work on mapping virtual movements to physical robots. Finally, AI has been used to predict the locations of valuable mineral deposits worldwide.
This whytry.ai piece is a brief synopsis; the original article is available here: Read the Full Article…