GDPR: The Forefront of Ethical AI
Last year, Kevin Kelly, Skillsoft's General Manager of Global Compliance Solutions, took some time to reflect on the General Data Protection Regulation (GDPR). He said that “GDPR has prompted significant improvements in the governance, monitoring, awareness, and strategic decision-making regarding the use of consumer data. Not only that, but GDPR legislation has pushed the topic of data privacy to the forefront.”
Then he asked: “Has that been enough to drive meaningful change in data protection?”
We’ve Come a Long Way in Five Years – But We’re Not There Yet
As we mark the five-year anniversary of GDPR, I took the opportunity to chat with Jonathan Armstrong, Partner at Cordery Legal Compliance, a UK-based law firm focused on helping businesses manage an ever-increasing compliance burden. Armstrong has handled hundreds of GDPR matters across the EU. He believes that GDPR has demonstrated mixed results. On the one hand, he said that it has not done many of the things it promised, including streamlining and unifying how businesses handle data.
However, he believes that GDPR has accomplished some remarkable things initially outside its intended scope. Importantly, it has persevered through a period of massive innovation. According to Armstrong, “GDPR has been able to cope with rapid technological change, including the growth of artificial intelligence.”
As a result of GDPR, regulators have collected more than €80 million in AI-related fines alone. In fact, many organizations are now considering best practices for making AI GDPR compliant. Italy, for example, recently became the first Western country to block the advanced chatbot ChatGPT while regulators investigate whether it complies with GDPR. Although the suspension has now been lifted after high-level talks with ChatGPT’s owners, the investigation continues.
Focusing Our Efforts on Ethical AI
In looking at GDPR fines levied over the past five years, organizations can learn a lot about where to focus their efforts for maximum impact. Keeping data secure is an essential objective of GDPR, but the past five years have taught us that a foundational part of compliance is a drive to do the right thing.
“Before GDPR went into effect, many companies assumed that most cases would relate to breaches in security,” said Armstrong. “However, it’s been interesting to note that most of the higher fines have been about transparency – about simply being honest with people about how you use their data.”
“Artificial Intelligence (AI), in particular, has been a popular topic of discussion,” said Armstrong. “As more companies develop AI-powered tools and services, we hear that ‘AI is the Wild West’ and ‘there is no regulation around AI.’ But, this couldn’t be further from the truth.”
AI is regulated through GDPR as it pertains to personal data.
GDPR requires organizations to let individuals know what information is being held about them and how it is used. That means that when any kind of automated decision-making takes place, organizations are obligated to provide affected individuals with information about the associated logic of those decisions, including:
- Alerting them to the fact that there is an automated decision being made;
- Educating them on the significance of the automated decision; and
- Sharing specific logic about how the algorithm works as it makes automated decisions.
If organizations relying on AI can prove they adhere to the above requirements, they are likely on the right track. But they’ll have to look at other aspects, including fairness, transparency, and putting measures in place to deal with individual requests.
Transparency Is Key in GDPR, and Especially in AI
Armstrong provided some fascinating examples of the types of cases he is seeing around AI and GDPR in court:
Italy’s data protection authority, Garante, fined two of the country’s largest online food delivery apps for using algorithms to favor delivery drivers who could work during hours of high demand – especially Fridays, Saturdays, and Sundays. Workers unable to work on these days due to religious observance of the sabbath (for example) were penalized by the algorithms.
GDPR restricts solely automated decision-making, including profiling, where it produces legal or similarly significant effects on individuals.
Replika is an AI-powered chatbot simulating virtual friendships with users through text and video. The chatbot had no age verification, and regulators from Garante were concerned about the potential of sharing sexually inappropriate content with minors.
Garante ultimately said that Replika processed data unlawfully because children cannot enter a valid user contract. This fell under GDPR’s requirement for data transparency.
“We will continue to encounter conflicts around AI and GDPR unless we begin considering these types of issues in advance,” said Armstrong. “When we try to shorten the delivery cycle, people will take shortcuts.”
Many organizations are adopting AI chatbots as part of the talent acquisition process – including candidate discovery, screening, interviewing, and hiring. As this practice continues to grow, regulators are worried that this might intrude on job seekers’ privacy or introduce existing biases related to race and gender.
As a result, in the United States, Congress is considering the federal Algorithmic Accountability Act. This would require employers to perform an impact assessment of any automated decision-making system that significantly affects an individual’s access to employment, or the terms or availability of that employment.
The Impact of GDPR on Compliance Professionals
It’s becoming clear that compliance professionals will play a key role in adherence to GDPR – especially regarding AI. We must champion transparency and fully consider the intersection between our organizations’ technology and its impact on our users. Topics to consider include:
- What types of technology are we using across our organization?
- Who provides this technology, and is that organization reputable and compliant?
- How, exactly, does the technology work?
- Is the technology fair and impartial?
Only by educating ourselves on this information can we protect our employees and users – and mitigate the risk of GDPR-related fines.
Preparing a Plan of Action for Your Organization
Said Armstrong, “Something I’ve noticed across all of my clients who are doing well with respect to GDPR is that they understand that bad things will happen, and they have implemented a plan to stop them.” He suggests compliance professionals take small steps to address potential issues.
1. Complete a data protection impact assessment (DPIA).
Review the impact of your current tech stacks, including the types of information they collect and how they use it. Put a procedure in place to carefully consider the risks of technology that will be launched in the future. A formal DPIA may provide the legal basis for some of the proposed uses of technology solutions, including AI.
“This is becoming more important than ever, especially given the potential impact of breaches,” said Armstrong. “A recent security breach at one organization impacted 900 corporations.”
So, even if your systems are foolproof, your outsourcers may not be. And organizations are outsourcing critical business needs – payroll, travel, time management, customer interactions, and more. As a result, we must be more thoughtful about reducing and acknowledging risk and dealing with bad things when they happen.
2. Put systems into place to address current and future issues.
No matter how diligent your organization is, bad things can happen. You need to create procedures that will make an immediate impact – especially since organizations generally have only 72 hours to notify regulators after becoming aware of a reportable GDPR breach.
“Organizations need to keep it simple,” said Armstrong. “Your policies and procedures should be straightforward and to the point, just like the exit signs on the back of a hotel door. In an emergency, you need to understand how to get out. That’s it.”
3. Rehearse your response plan as an organization.
Once you’ve established simple policies and procedures to safeguard your organization against risk, rehearse them. Your whole organization must understand how to respond instinctively to a crisis.
GDPR and Ethical AI: Thinking About What’s Next
As generative AI becomes more advanced and widely adopted, organizations must develop and update governance around its usage in the workplace, considering the security, privacy, confidentiality, and ethical implications. For some organizations, the response will be to lock down its use altogether. However, this will only create adverse incentives.
We don’t need to regulate generative AI’s existence; we need to regulate and govern its use.
Having the proper governance structure around the development and use of AI, which includes policies and procedures, education,* and testing and monitoring, is critical. Companies should start with a risk assessment, understanding the risks generative AI poses to and in their company—misuse, misappropriation, bias, or plagiarism—before creating policies for employees around corporate use.
And those policies, once created, must be clear and prescriptive at the start, explaining to employees what is and is not permitted. AI education and training will also help organizations explore the fundamental principles of AI governance, including common uses and benefits, potential for bias, and the global AI regulatory landscape.
Above all, creating a holistic generative AI governance structure that is sustainable, trustworthy, and transparent will require shared accountability between those developing the tool and those using it. Stakeholders must come together to understand the risks, how they manifest themselves, and what protocols are, or should be, put in place.
Compliance professionals shouldn’t shoulder all responsibility, but they can bring together the correct stakeholders to start the conversation. Generative AI offers a massive opportunity for organizations, so all employees have a part to play in regulating its use to ensure we’re developing and using it responsibly.
* Skillsoft courses are intended to guide and incorporate best practices that derive maximized value from the use of artificial intelligence. They are not intended to endorse or advocate for the methodologies, tools, or outcomes of the artificial intelligence tools referred to or utilized.