Have You Been Influenced? Exploring The Implications of The EU Artificial Intelligence Act

December 18, 2023 | What's Hot | 4 min read

Like it or not, humans interact with artificial intelligence (AI) daily. From the recommendation engine in your favorite video streaming service to product recommendations from your preferred e-commerce site, your behavior and preferences have probably been influenced – in some ways, at least – by AI.

If they haven’t, the algorithms aren’t doing their job.

That’s why the provisional agreement the European Union (EU) reached this month is so important. The Artificial Intelligence Act is a landmark proposal for rules governing AI systems in the EU. It aims to encourage the safe and trustworthy development and use of AI across the bloc, regulating the technology based on its potential to cause harm, with stricter rules for higher-risk applications.

This proposal is groundbreaking globally and could set a standard for AI regulation in much the same way that the General Data Protection Regulation (GDPR) did for data protection. The world is watching the impact of the regulation and taking notes.

The Higher the Risk, the Stricter the Rules

At a fundamental level, the EU AI Act asks that we become more thoughtful about how AI infiltrates our consciousness. Does it provide us with the information we need to make a better decision, or does it manipulate our preferences to achieve its own ends?

Just as GDPR tells us we should not use an individual’s personal information without their consent, we also cannot use AI to take away a person’s autonomy or manipulate their preferences without their consent. The AI Act is meant to help us, as a society, identify and avoid this type of manipulation.

Here are some key points from the provisional agreement:

  • High-Risk AI: The agreement introduces rules for high-impact AI models that could pose systemic risks. It also establishes a revised governance system with enforcement powers at the EU level.
  • Prohibitions and Safeguards: Certain uses of AI, such as cognitive behavioral manipulation or untargeted scraping of facial images, are prohibited. The agreement includes safeguards for law enforcement exceptions, ensuring fundamental rights are protected.
  • Classification of AI Systems: The agreement categorizes AI systems based on risk. Systems with limited risk have lighter transparency obligations, while high-risk systems have specific requirements for market access.
  • Responsibilities and Roles: Given the complex nature of AI development, the agreement clarifies the responsibilities of various actors in the value chain, including providers and users. It also aligns with existing legislation, such as data protection laws.
  • Governance Architecture: A new AI Office will oversee advanced AI models, while a scientific panel and AI Board will provide expertise and coordination. An advisory forum includes stakeholders like industry representatives and civil society.
  • Penalties: Violations of the AI Act can result in fines based on a percentage of the company’s global annual turnover. Proportionate caps on penalties are specified for SMEs and start-ups.
  • Complaints and Transparency: Individuals or entities can complain to market surveillance authorities about non-compliance. The agreement emphasizes fundamental rights impact assessments and increased transparency for high-risk AI systems.
  • Innovation Support: Provisions support innovation-friendly measures, including AI regulatory sandboxes for testing innovative systems in controlled and real-world conditions.
  • Entry into Force: The AI Act will apply two years after it enters into force, with some exceptions for specific provisions.

Simply put, the agreement aims to regulate AI in the EU in proportion to its potential risks. It introduces rules, safeguards, and penalties to ensure responsible AI development and use while fostering innovation.


Interpretability and Discrimination in AI

So, what types of AI applications might pose systemic risks? Interpretability and discrimination immediately come to mind: two high-risk categories that capture AI vulnerabilities such as hallucinations, bias, and more.

Interpretability is the ability to understand how an AI system arrives at a particular decision or recommendation. If AI systems operate as “black boxes” with opaque decision-making processes, holding them accountable for their actions becomes challenging.

A lack of interpretability also erodes trust in AI systems, especially in high-stakes applications such as healthcare, finance, and criminal justice. Understanding the rationale behind AI decisions is crucial for ensuring compliance with laws and regulations and for addressing privacy, fairness, and accountability concerns.
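To make the idea concrete, here is a minimal sketch of what interpretability can look like in practice: a logistic regression whose learned coefficients can be read directly. The loan-approval scenario, feature names, and data below are hypothetical, chosen purely for illustration.

```python
# A minimal, hypothetical sketch of an interpretable model: a logistic
# regression whose coefficients show how each feature influences the
# decision, something an opaque "black box" model cannot offer.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical loan-approval data: [income (thousands), debt ratio, years employed]
X = np.array([[55.0, 0.30, 4.0],
              [80.0, 0.10, 9.0],
              [30.0, 0.55, 1.0],
              [62.0, 0.25, 6.0]])
y = np.array([1, 1, 0, 1])  # 1 = approved, 0 = denied

model = LogisticRegression().fit(X, y)

# Each coefficient gives the direction and strength of a feature's
# influence on the approval decision.
for name, coef in zip(["income", "debt_ratio", "years_employed"], model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```

Simple models like this trade some predictive power for transparency; for more complex models, post-hoc explanation tools serve a similar purpose.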

Discrimination in AI refers to biased outcomes that disproportionately impact specific groups of people. This can result from skewed or unrepresentative training data or from biases inherent in the algorithms themselves.

Discrimination in AI applications has profound ethical implications: discriminatory outcomes lead to unfair treatment and can perpetuate and amplify existing social inequalities. For example, biased hiring algorithms may reinforce gender or racial disparities in the workplace.
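To illustrate how such bias can be measured, here is a minimal sketch of one common check, demographic parity, which compares a model’s rate of favorable outcomes across groups. The predictions and group labels below are hypothetical.

```python
# A minimal, hypothetical sketch of a demographic parity check:
# compare the rate of favorable outcomes a model produces for each group.
import numpy as np

predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # 1 = favorable outcome
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

rate_a = predictions[group == "A"].mean()  # favorable rate for group A
rate_b = predictions[group == "B"].mean()  # favorable rate for group B

print(f"Group A rate: {rate_a:.2f}")
print(f"Group B rate: {rate_b:.2f}")
print(f"Demographic parity difference: {abs(rate_a - rate_b):.2f}")
# A large gap suggests the model treats the groups unevenly and warrants review.
```

Checks like this are only a starting point; fairness has several competing definitions, and which one applies depends on the context and the applicable regulation.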

Be Proactive: Establish Guardrails Around AI at Your Organization

The Artificial Intelligence Act is a decisive step in challenging how society thinks about and interacts with AI. As human beings, we are fortunate to be at the forefront of this technology – and we can contribute to the narrative as it unfolds.

But reaching a global consensus may take time, even with progressive legislation coming out of the EU. What steps can we take in the meantime?

One of the most impactful ways your organization can contribute to the conversation around AI in the near term is to establish an AI policy for its employees. A formal AI policy defines how employees can use AI within your organization and typically covers ethical usage, bias and fairness standards, compliance requirements, and other critical guidelines and guardrails.

As AI best practices evolve, so will your AI policy.