The European Union has reached a preliminary agreement on its AI regulation, setting the stage for the economic bloc to ban certain uses of AI technology and demand transparency from providers. The AI Act, first proposed in 2021, has not yet been fully approved, and last-minute compromises have softened some of its strictest regulatory threats. Enforcement likely won’t begin for years, and the compromise is not expected to have immediate effects on established AI developers based in the US.
The act classifies its rules based on the level of risk an AI system poses to society. Some member states were concerned that the strictness of the rules could make the EU an unattractive market for AI. France, Germany, and Italy lobbied to water down restrictions on general-purpose AI during negotiations, resulting in a two-tier system and law enforcement exceptions for prohibited uses of AI.
The AI Act does not introduce new laws around data collection; it requires companies to follow existing GDPR guidelines but does not clarify how companies should treat copyrighted material included in model training data. Additionally, the AI Act exempts open-source developers, researchers, and smaller companies working further down the value chain from its stiffest fines.
Experts believe that while the AI Act is a significant step for AI governance, there is still a lot of work ahead. A large majority of EU countries have endorsed this direction, signaling clearly where the EU stands on AI.
The whytry.ai article you just read is a brief synopsis of the original article.