The AI Act is a groundbreaking bill that will address potential harm caused by the use of AI in critical areas, such as healthcare, education, border surveillance, and public services, as well as prohibit uses that present an “unacceptable risk.” The vast majority of AI uses will not be affected, but “high risk” AI systems will need to follow strict rules on risk mitigation, high-quality data sets, better documentation, and human oversight.
The AI Act will introduce legally binding rules to ensure transparency and ethics at technology companies, requiring them to notify people when they are interacting with AI systems, label deepfakes and AI-generated content, and conduct impact assessments on how AI systems will affect people’s fundamental rights. However, companies still have some flexibility in determining which rules apply to them, based on the computing power needed to train their AI models.
The EU will establish a European AI Office to enforce the AI Act and impose fines for noncompliance. The regulations are expected to set a global standard, and any company that wants to do business in the EU will have to comply. However, there are exceptions for AI systems used exclusively for military and defense purposes. The final wording of the bill is still pending approval from European countries and the EU Parliament, after which tech companies will have two years to implement the rules.
The whytry.ai article you just read is a brief synopsis; the original article can be found here: Read the Full Article…