The European Union's General Data Protection Regulation -- GDPR for short -- is about to change the relationship between people and their personal data, and how businesses handle that information.
However, there's another factor enterprises and their customers have to consider as well -- artificial intelligence.
Article 22 of the GDPR (formally Regulation (EU) 2016/679) dictates that people protected by the new rules generally cannot be subjected to purely automated decision-making, including profiling, where that decision-making "produces legal effects concerning him or her or similarly significantly affects him or her" -- absent their explicit consent or another narrow exception.
Consequently, there are concerns that GDPR will throw an enormous monkey wrench into consumer AI use cases when it comes into effect on May 25. From a practical perspective, decision-making by machine-learning algorithms and other AI systems is not as straightforward to explain as that of traditional systems -- making informed, explicit consent a sticky issue.
"[A]s AI systems often rely on machine learning, a disclosure of algorithms does not provide a full and thorough picture of how a decision was reached, as the learning component has not been factored in," argue Frankfurt attorneys Sven Jacobs and Christoph Ritzer in a blog post.
This argument is a bit of falling-sky doomsaying. Of course everyday users are not going to understand complex machine-learning algorithms or the intricacies of virtualized networking.
Even the EU member-state data-protection authorities (DPAs) -- the agencies responsible for enforcing GDPR -- have accounted for that. Having jointly adopted "Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679" in 2017 ("Guidelines"), DPAs expressly advise organizations that the obligation to provide "meaningful information about the 'logic involved'" means that data controllers "should find simple ways" to explain the rationale and criteria at work "without necessarily always attempting a complex explanation of the algorithms used or disclosure of the full algorithm."
The Guidelines also provide an example of how these disclosures should work:
- Details of the main characteristics considered in reaching a particular automation-reliant decision, and their relevance;
- The respective sources of all such data (e.g., application forms, account details, public records, third parties, user behavior);
- "[I]nformation to advise the data subject that the … methods used are regularly tested to ensure they remain fair, effective and unbiased"; and
- Contact details and related information on how a data subject can request a review of the pertinent automated decision(s).
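To make the list above concrete for engineering teams, here is one hypothetical way an enterprise might model those disclosure elements as a structured record that a front end could render into plain-language text. The Guidelines prescribe no particular format, and every field name below is an assumption, not anything drawn from GDPR itself.

```python
from dataclasses import dataclass

@dataclass
class AutomatedDecisionDisclosure:
    """Hypothetical sketch of the disclosure elements the DPA Guidelines
    describe; none of these field names come from GDPR or the Guidelines."""
    main_characteristics: list  # (factor, why it is relevant) pairs
    data_sources: list          # e.g., "application form", "public records"
    fairness_testing_note: str  # statement that methods are regularly tested
    review_contact: str         # how a data subject can request human review

# Example instance for an imaginary credit-limit decision.
disclosure = AutomatedDecisionDisclosure(
    main_characteristics=[
        ("payment history", "indicates likelihood of repayment"),
    ],
    data_sources=["application form", "credit bureau records"],
    fairness_testing_note=(
        "The scoring methods used are regularly tested to ensure they "
        "remain fair, effective and unbiased."
    ),
    review_contact="privacy@example.com",
)
```

Keeping the four elements as discrete fields, rather than a single blob of legal text, makes it easier to render each one in plain language wherever it is needed.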
Even though the DPAs themselves use these factors to describe a GDPR-compliant example, this is still a long -- and not necessarily exhaustive -- litany to present to the user. The resulting disclosure could be so lengthy as to constitute inaccessible legalese in violation of Article 7, Section 2, of GDPR ("[T]he request for consent shall be presented … in an intelligible and easily accessible form, using clear and plain language") and Recital 32 of GDPR ("If the data subject's consent is to be given following a request by electronic means, the request must be clear, concise and not unnecessarily disruptive to the use of the service for which it is provided").
Making the law clear
Still, legal requirements for clarity and concision traditionally focus more on simplicity of language than on a de facto word limit; besides, simpler language naturally tends to produce shorter clauses.
Enterprises using AI to process personal data can take further comfort in the fact that additional guidance on consent-worthy language can be found in Council Directive 93/13/EEC of 5 April 1993 on unfair terms in consumer contracts ("Directive 93/13/EEC"). Recital 42 of GDPR expressly states that Directive 93/13/EEC -- which itself requires consumer contracts to be written in "plain, intelligible language" -- governs the mandate of providing "an intelligible and easily accessible form, using clear and plain language [without] unfair terms" in boilerplate consent declarations.
In other words, complying with the complicated consent requirements under GDPR for AI-based decision-making should theoretically present no greater burden than complying with the same requirements under 25-year-old EU contract law.
Moreover, this being the age of digital media, the Guidelines go on to recommend a variety of innovative techniques to make AI processing of personal data at once more GDPR-compliant and more user-friendly, such as:
- Layered, step-by-step notifications -- including short-form notifications with expandable links to the "full version," combined with "a just-in-time notification at the point where data is collected";
- Graphics, charts, and other "interactive" multimedia methods to better explain algorithmic function; and
- Standardized icons to describe what information is being used when, shared with whom, and/or to decide what.
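As a rough illustration of the layered approach, the sketch below shows one hypothetical way to structure a short-form notice that expands into a full version, triggered just in time at the point of data collection. The Guidelines mandate no particular shape; the trigger name, URL, and icon identifiers here are all invented for illustration.

```python
# Hypothetical layered, just-in-time notice (no format is prescribed by GDPR).
LAYERED_NOTICE = {
    "short_form": (
        "We use an automated score of your payment history to decide "
        "your credit limit. Tap 'Learn more' for details."
    ),
    "full_version_url": "/privacy/automated-decisions",   # expandable link target
    "just_in_time_trigger": "credit_application_submitted",  # when it is shown
    "icons": ["profiling", "third_party_sharing"],        # standardized icon ids
}

def render_notice(layer: str) -> str:
    """Return the requested layer of the notice; default to the short form."""
    if layer == "full":
        return f"Full notice available at {LAYERED_NOTICE['full_version_url']}"
    return LAYERED_NOTICE["short_form"]
```

The short form keeps the consent moment "clear, concise and not unnecessarily disruptive," as Recital 32 demands, while the expandable full version carries the detailed disclosure.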
All of this adds up to one key takeaway for enterprises: As long as you get your data lineage in order -- which you should be doing anyway -- and make a decent effort to identify decision fundamentals to data subjects from whom you're seeking consent, AI and GDPR can coexist.
—Joe Stanganelli, principal of Beacon Hill Law, is a Boston-based attorney, corporate-communications and data-privacy consultant, writer, and speaker. Follow him on Twitter at @JoeStanganelli.
(Disclaimer: This article is provided for informational, educational and/or entertainment purposes only. Neither this nor other articles here constitute legal advice or the creation, implication or confirmation of an attorney-client relationship. For actual legal advice, personally consult with an attorney licensed to practice in your jurisdiction.)