Preliminary legislation, adopted by a wide margin in the European Parliament, offers a glimpse into the future of artificial intelligence (AI) governance. The AI Act (AIA) is a highly ambitious framework for guiding the development and use of AI, aiming to protect fundamental rights and ensure the ethical progress of AI in Europe and beyond. The vote reflects growing demand from researchers across disciplines for regulation of powerful AI. While the final form of the AIA will emerge from negotiations with the Council and the Commission, the decision by Europe’s influential legislative body offers an opportune moment for the AI research community to prepare for an impact that will resonate globally.
With numerous countries seeking to regulate AI, the AIA is hailed as the world’s first comprehensive law governing AI across applications and contexts. Though welcomed by many, it also poses challenges: it is long, its obligations are nuanced and tied to complex risk categories, and it relies on technical standards that are still being developed. Critics argue that this density, and the attendant compliance costs, may impede innovation, particularly for startups; proponents counter that rules promoting trustworthy AI will drive greater innovation.
Undoubtedly, the AIA will have profound effects on applied AI research communities, both within Europe and worldwide. Researchers engaged in AI development will encounter its intricacies directly. Although the AIA does not apply to research and development before a system reaches the market, compliance obligations are triggered once a system is tested in real-world conditions. For AI researchers in fields such as the life sciences, employment, or education, familiarity with AIA requirements is crucial for accessing the European Union (EU) market. High-risk systems, as defined by the AIA, must meet rigorous premarket obligations covering risk management, a range of control measures, and transparency requirements. Providers of foundation models, including generative models, must fulfill separate obligations.
Recognizing the need to balance risk regulation with the promotion of research, the AIA incorporates measures to facilitate socially and environmentally beneficial AI development. These include research funding and priority access to AI regulatory sandboxes, innovation-friendly “test beds” that allow AI experimentation under regulatory flexibility. The regulation also exempts free and open-source components unless they form part of a high-risk AI system placed on the EU market by a provider.
The AIA also affects researchers who use high-risk AI systems for professional purposes within EU organizations such as hospitals and financial institutions. Deployers of such systems are responsible for using input data that is relevant and free of bias. A team of medical researchers deploying a customized AI for personalized medicine, for example, must ensure that the training data is free of racial, social, or religious bias. Requirements for human oversight and record-keeping may likewise reshape the operational practices of researchers using high-risk AI systems.
The AIA also holds implications for scholars investigating the wide-ranging impacts of AI. Transparency requirements, such as the labeling of AI-generated content, will facilitate empirical research and the development of technical and policy tools to address governance challenges such as deepfakes. Regrettably, the European legislature has yet to adopt rules granting researchers tiered access to data, parameters, and models, especially for foundation models. Such an access regime could promote accountability and enable comprehensive research on social impact, as a similar provision in the EU Digital Services Act demonstrates.