
EU’s AI Act: What to Expect?

The European Union is about to introduce a law to mitigate the risks associated with AI systems and establish clear guidelines for developers, users, and regulators. This article delves into the details of the proposed AI Act and its potential impact on businesses.

The European Union has long been a step ahead of other nations in data protection. The GDPR and the proposed ePrivacy Regulation are among the most comprehensive data protection rules in the world.

Now, it is about to introduce the world’s first comprehensive law to regulate AI systems. In this article, we will discuss the EU’s AI Act and what we can expect from it.

Let’s get started.

What is EU’s AI Act?

The AI Act is the European Union’s proposed regulatory framework for AI systems. Once approved, this will be the world’s first comprehensive AI regulation.

The law classifies AI systems based on the risks associated with them: the higher the risk, the more obligations there are to comply with.

The law will establish the following:

  • Clear rules for selling or using AI systems in the European Union.
  • Prohibitions on certain practices related to Artificial Intelligence.
  • Specific obligations for AI systems and their operators.
  • Transparency guidelines for AI systems designed to interact with people, emotion recognition systems, biometric categorization systems, and AI systems used to generate or manipulate image, audio, or video content.
  • Rules on market monitoring and surveillance.

Scope and Limitations of AI Act

The proposed AI Act will be applicable to:

  • Businesses selling or using AI systems in the European Union, even if they are established outside the EU.
  • People using AI systems within the EU.
  • Providers and users of AI systems in third countries, if the output produced by the system is used within the EU.
  • High-risk AI systems that are components of products covered by certain sectoral EU safety legislation, to which only Article 84 of the Act applies.

The Act will not apply to AI systems developed or used exclusively for military purposes.

Nor does it apply to public authorities in third countries or international organisations when they use AI systems within the framework of international agreements for law enforcement and judicial cooperation with the Union or one or more Member States.

The Act also leaves unchanged the rules on the liability of intermediary online service providers set out in Directive 2000/31/EC (the e-Commerce Directive).

Classification of AI Systems

The AI Act classifies AI systems based on the risk they pose to users:

  1. Unacceptable risk
  2. High risk
  3. Limited risk

Unacceptable Risk

AI systems that fall under the unacceptable-risk category are considered a threat to people and will be banned. This includes systems that:

  • Manipulate people’s behavior, especially that of vulnerable groups.
  • Categorize people based on their behavior, socioeconomic status, or personal characteristics (social scoring).
  • Identify and categorize people using biometric data.
  • Perform real-time, remote biometric identification, such as facial recognition in public spaces.

There will be some exceptions for law enforcement purposes. Real-time remote biometric identification will be allowed in a narrow set of cases, and “post” remote biometric identification, where identification happens after a significant delay, will be allowed only to prosecute serious crimes and only with prior court approval.

High Risk

AI systems that fall under the high-risk category pose a significant risk to safety or fundamental rights. These systems are divided into two subcategories:

  • AI systems used in safety-critical products: These are AI components of products covered by the EU’s product safety legislation, such as medical devices, self-driving cars, and aircraft.
  • AI systems in specific sectors: These include critical infrastructure management, law enforcement, education, employment, essential services, etc. These systems need to be registered in an EU database and comply with specific obligations.

Limited Risk

These systems may cause some harm but are considered less risky, so they carry only minimal obligations. Their developers must inform users that they are interacting with an AI system and disclose what data it uses. This category includes generative AI systems such as ChatGPT and Google’s Bard.
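To make the tiered structure concrete, here is a minimal Python sketch of the three tiers described above. The `RiskTier` names follow this article, but the `headline_obligation` helper and the obligation strings are illustrative paraphrases, not text from the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """The three risk tiers discussed above (illustrative only)."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict obligations + EU database registration
    LIMITED = "limited"            # transparency obligations only

# Hypothetical mapping from a tier to a one-line summary of its obligation;
# the real Act spells these out in far more detail.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "prohibited",
    RiskTier.HIGH: "conformity assessment and EU database registration",
    RiskTier.LIMITED: "inform users they are interacting with an AI system",
}

def headline_obligation(tier: RiskTier) -> str:
    """Return the summary obligation for a given risk tier."""
    return OBLIGATIONS[tier]
```

The key point the sketch captures is that obligations scale with the tier: a banned practice has no compliance path at all, while a limited-risk system mainly owes users transparency.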


How Will the AI Act Affect Businesses?

The AI Act will affect businesses differently depending on the level of risk involved. If your business develops or uses AI systems that fall under the unacceptable-risk category, those systems will be banned outright.

High Risk Businesses

Businesses operating high-risk AI systems will face complex compliance requirements. The following are some high-risk industries that could be affected by the AI Act:

  1. Fintech and financial services: Financial services use complex AI systems for algorithmic trading, credit scoring, and fraud detection. They may have to comply with strict requirements such as robust data governance, transparency about how their algorithms work, and human oversight of automated decisions.
  2. Recruitment agencies: Recruitment agencies use AI systems for resume screening and candidate evaluation. The Act could curb biased systems and promote fairer hiring practices.
  3. Healthcare: AI systems used in healthcare fall under the high-risk category. Hospitals using AI in diagnostics and treatment will face scrutiny, and the Act will impose strong requirements on data privacy and transparency to ensure the responsible use of AI in medicine.

Other Affected Industries

  1. Advertising: Targeted advertising and personalized recommendations might face scrutiny for potential discrimination. Businesses may need to adapt their practices to comply with fairness and transparency requirements.
  2. Social Media: The AI Act could address concerns about algorithmic bias and manipulation on social media platforms. Platforms might need to invest in fairer algorithms and user control mechanisms.
  3. Manufacturing and transportation: AI systems used in self-driving cars will have to follow strict guidelines, implement robust safety measures, and ensure transparency. This could affect their development and deployment timelines.

General Impact

  • Compliance costs: Businesses using or developing AI systems will likely need to invest in legal expertise and compliance measures.
  • Delayed development and production: Businesses developing AI-powered products and services will have to meet several obligations to comply with the law, which could slow development and time to market.
  • Enhanced transparency: Businesses will have to ensure transparency when using or developing AI systems, and may have to disclose how those systems work to users. While this might look like an added burden, businesses that adhere to responsible AI practices could actually gain a competitive edge.

How to Comply With the AI Act

The EU’s AI Act is still under development. The final version might be different, so we can’t give you exact guidelines to follow yet. However, proactively preparing for compliance can help you reduce your effort and gain a competitive advantage in the responsible AI landscape.

Here are some general steps you can take to prepare for the AI Act:

  • Identify and categorize all AI systems used within your organization.
  • Evaluate the potential risks associated with each of those systems.
  • Ensure transparency and unbiased decision-making when using AI systems.
  • Keep detailed documentation of each AI system’s development, training, and testing.
  • Ensure human control and oversight over AI systems.
  • Seek legal advice on interpreting and implementing the Act’s requirements.
  • Stay informed about the compliance requirements for your industry.
  • Explain the workings of your AI systems to users.
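The first checklist items — inventorying systems and flagging open risks — can be turned into a simple internal triage tool. The Python sketch below is purely illustrative: `AISystem`, `compliance_gaps`, and the flagged items are hypothetical names paraphrasing the checklist above, not requirements quoted from the Act.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """One entry in an organization's AI system inventory (illustrative)."""
    name: str
    purpose: str
    risk_tier: str              # "unacceptable", "high", or "limited"
    documented: bool = False    # development/training/testing docs kept?
    human_oversight: bool = False

def compliance_gaps(system: AISystem) -> list[str]:
    """Flag checklist items that are still open for a given system."""
    gaps = []
    if system.risk_tier == "unacceptable":
        gaps.append("practice is prohibited - discontinue")
    if system.risk_tier == "high" and not system.documented:
        gaps.append("keep detailed development/training/testing documentation")
    if not system.human_oversight:
        gaps.append("ensure human control and oversight")
    return gaps

# Example inventory: a high-risk screening tool with open gaps,
# and a limited-risk chatbot that already meets both checks.
inventory = [
    AISystem("resume-screener", "candidate ranking", "high"),
    AISystem("support-chatbot", "customer Q&A", "limited",
             documented=True, human_oversight=True),
]
for system in inventory:
    print(system.name, compliance_gaps(system))
```

Running the example flags two open items for the hypothetical resume screener and none for the chatbot — a starting point for prioritizing compliance work, not a substitute for legal review.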

Conclusion

The EU’s AI Act is set to become the world’s first comprehensive regulation of Artificial Intelligence. It will create a framework to guide the responsible use and development of AI systems in the European Union. While we can’t give an exact date for its enforcement, one thing is certain: it will inspire other countries to enact similar laws, promoting the responsible use of AI across the world.

This article is written based on the proposal text of the AI Act. We will update it once the official text is in effect. 

What are your thoughts on the proposed AI Act? Do you believe it will help businesses to adopt ethical practices in using AI systems? Let us know in the comments.

Disclaimer: This article is intended for informational purposes only and does not constitute legal advice. Reading it does not create an attorney-client relationship. If you need legal advice, we recommend contacting a professional.

