Pointers at a Glance
- AI is becoming more common in our daily lives, but no clear regulations guide its design in the US, leaving companies to rely on their own ideas of right and wrong.
- The EU AI Act is expected to become a global standard for AI regulation as the regulatory landscape shifts from general frameworks to permanent laws.
Artificial intelligence (AI) is becoming increasingly ubiquitous in our lives, yet no clear regulations or laws currently guide AI design in the United States. As a result, business organizations have developed their own perceptions of right and wrong regarding AI. But that is about to change: the European Union (EU) is finalizing the EU AI Act, and as generative AI rapidly evolves, the regulatory landscape for AI is set to shift from general, suggested frameworks to more permanent laws.
The EU AI Act is expected to become a global standard for AI regulation, just as the EU’s General Data Protection Regulation (GDPR) became the global standard for data privacy when it took effect in 2018. The European Parliament plans to vote on the draft AI Act by the end of March 2023; if that timeline holds, the final EU AI Act could be adopted by the end of 2023.
Organizations operating worldwide will be required to conform to the legislation. A similar pattern is already emerging with related legislation, such as Canada’s proposed Artificial Intelligence and Data Act and New York City’s regulation of automated employment decision tools.
Organizations’ AI systems will be classified into three risk categories under the EU AI Act:
- Unacceptable risk
- High risk
- Limited and minimal risk
Each category will have its own set of guidelines and consequences.
For organizations with high-risk AI systems, the AI Act has already outlined numerous requirements, including:
- Implementation of a risk-management system
- Data governance and management
- Technical documentation
- Transparency and provision of information to users
- Human oversight
- Conformity assessment
- Registration with the relevant EU member state’s government
- Post-market monitoring system
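The tiered structure above lends itself to a simple compliance-checklist sketch. The risk tiers and the high-risk requirements below follow the draft Act as summarized in this article; the class, function, and field names are illustrative assumptions, not anything defined by the legislation itself.

```python
from enum import Enum


class RiskTier(Enum):
    """Risk categories under the draft EU AI Act, as summarized above."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED_OR_MINIMAL = "limited_or_minimal"


# Requirements the draft Act outlines for high-risk AI systems.
HIGH_RISK_REQUIREMENTS = [
    "Risk-management system",
    "Data governance and management",
    "Technical documentation",
    "Transparency and provision of information to users",
    "Human oversight",
    "Conformity assessment",
    "Registration with the relevant EU member state",
    "Post-market monitoring system",
]


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the compliance checklist for a given risk tier."""
    if tier is RiskTier.UNACCEPTABLE:
        # Unacceptable-risk systems are prohibited rather than regulated.
        raise ValueError("Unacceptable-risk AI systems are prohibited outright.")
    if tier is RiskTier.HIGH:
        return list(HIGH_RISK_REQUIREMENTS)
    # Limited/minimal risk: lighter (e.g., transparency-only) duties apply.
    return []
```

A team could use a structure like this to drive an internal audit, mapping each deployed model to a tier and tracking which obligations remain open.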
We can also expect regular reliability testing of models to become more widespread in the AI industry, much like periodic roadworthiness inspections for cars.
Businesses Need To Prepare For Strict AI Regulations
Business organizations should prepare for stricter AI regulation by:
- Researching and educating their teams on the types of regulation that will exist
- Auditing existing and planned models
- Developing and adopting a framework for designing responsible AI solutions
- Thinking through AI risk mitigation strategies
- Establishing an AI governance and reporting strategy
The EU AI Act is still in draft form, and its global effects are yet to be determined, but one thing is clear: ethical and fair AI design is no longer a “nice to have” but a “must have.” Companies that prioritize trust and risk mitigation when designing and developing AI models will be better positioned for the future.