Privacy and ethics: hand in hand
AI raises a number of interesting and complex issues when it comes to data protection and ethics. At our recent TLT Digital Futures event, AI: a new frontier, data privacy specialists Emma Erskine-Fox and Brian Craig tackled some of these big questions. We outline below some of the key issues addressed during the session.
AI relies on huge volumes of data, and often that data includes personal data. But processing data on such a scale could be at odds with the legal framework contained in the General Data Protection Regulation and the Data Protection Act 2018, which is grounded in ethical principles. As Elizabeth Denham, the Information Commissioner, stated: "Ethics is at the root of privacy and is the future of data protection".
Recent headlines draw out some of the main ethical and data protection challenges raised by AI. Interesting examples include the babysitter vetting app, Predictim, designed to create a risk rating based on a potential babysitter's social media and web presence (without his or her knowledge). China's social credit scoring system is another example of how significant decisions are being made automatically about people without any human input and without transparency, explanation or recourse.
Headlines like these illustrate some of the key challenges:
Transparency is a key data protection principle, but AI raises two main hurdles. The first is machine learning: if the algorithm is teaching itself, how do you know what data it is processing? And if you can get over that hurdle, how do you explain how the algorithm works to individuals in a way they will actually understand?
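One illustrative way of tackling the second hurdle is to report each input's contribution to a decision alongside the decision itself. This is a minimal Python sketch only; the feature names and weights are invented for illustration and do not describe any real scoring system:

```python
# Hypothetical linear scoring model. Because the score is just a weighted
# sum, each feature's contribution can be shown to the individual directly.
WEIGHTS = {"years_experience": 2.0, "missed_payments": -3.0, "income_band": 1.5}

def score_with_explanation(applicant):
    """Return the overall score plus a per-feature breakdown."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sum(contributions.values()), contributions

total, why = score_with_explanation(
    {"years_experience": 4, "missed_payments": 1, "income_band": 2})
print(total)  # → 8.0
for feature, value in why.items():
    print(f"  {feature}: {value:+.1f}")
```

With more complex models (deep learning in particular), no such simple decomposition exists, which is precisely why explaining them to individuals is so difficult.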
The legal framework places restrictions on automated decision-making. It also gives individuals the right to be informed of the logic behind such decisions, as well as the right to challenge them. For the same reasons as above, explaining automated decisions in a meaningful way can be very difficult, and is something the Information Commissioner's Office is currently looking into with the Alan Turing Institute.
As the AI adage goes, "bad data in, bad data out". It is important to keep auditing the data to make sure that AI is not producing inaccurate results, such as flawed profiles or predictions. The size of the data sets involved could make this challenging.
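An audit of this kind can start as simply as periodically comparing a system's outputs against later-verified outcomes. A minimal Python sketch, where the 5% error tolerance and the decision labels are assumptions for illustration:

```python
# Hypothetical periodic accuracy audit: compare automated decisions
# against outcomes that have since been verified by a human reviewer.

def audit_accuracy(predictions, verified_outcomes, max_error_rate=0.05):
    """Return (error_rate, within_tolerance) for a batch of decisions."""
    if len(predictions) != len(verified_outcomes):
        raise ValueError("each prediction needs a verified outcome")
    errors = sum(p != v for p, v in zip(predictions, verified_outcomes))
    rate = errors / len(predictions)
    return rate, rate <= max_error_rate

rate, ok = audit_accuracy(["approve", "reject", "approve"],
                          ["approve", "approve", "approve"])
print(rate, ok)  # one of three decisions was wrong, so the audit fails
```

At the scale of data sets typically used for AI, the practical challenge is obtaining verified outcomes for a large enough sample, not the comparison itself.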
AI is most effective when you've got a big database to examine. This may be at odds with the principles of minimisation (only using the absolute minimum data necessary) and retention (only keeping the data for as long as necessary). Moreover, it may be difficult to set and adhere to retention limits if AI requires historic data to function properly.
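Retention limits of the kind described above can at least be enforced programmatically. A minimal Python sketch, assuming a hypothetical 24-month retention period and an invented record layout:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention period: the appropriate limit is a legal and
# ethical judgment for the controller, not a technical constant.
RETENTION = timedelta(days=730)

def purge_expired(records, now=None):
    """Keep only records collected within the retention period."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["collected_at"] <= RETENTION]

records = [
    {"id": 1, "collected_at": datetime(2020, 1, 1, tzinfo=timezone.utc)},
    {"id": 2, "collected_at": datetime.now(timezone.utc) - timedelta(days=30)},
]
print([r["id"] for r in purge_expired(records)])  # → [2]
```

The tension the article identifies remains: if the model itself was trained on historic data, deleting the raw records does not by itself resolve the retention question.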
As the market grows for data set modifiers (who clean, reorganise and aggregate data for AI solutions), data controllers are asking whether they are permitted to transfer data for this purpose. This raises further questions: how do you organise your data when you may want to use it differently in the future? And how do you retain control over your data when it is being processed by numerous different players in the AI landscape?
Should the human element ever disappear from cybersecurity? Or if the biggest threat is human error, are we better stepping aside?
So how can you tackle these challenges? It is critical to consider ethics at all stages of AI development and implementation. Consider asking yourself the following questions:
The answers to the above questions can feed into your data protection impact assessment, which is a key compliance tool when considering processing personal data using AI. This can help to give you a head start on ensuring privacy compliance in your AI initiatives.
If you are interested in joining our next Digital Futures event, please contact Philippa McFeat.