
Uber case highlights risks of automated decisions about employees

A new case against Uber could have major implications for any business using algorithms or automated processes to make decisions about its employees.

Two UK drivers have started legal proceedings to access their personal data and to understand how Uber makes automated decisions that affect them. These decisions, the drivers argue, are based on their use of the app, their location, their driving behaviour and their communications with customers and Uber’s support team. The drivers believe these decisions affect their job allocation and pay rates.
 
They claim that Uber has failed to comply with its obligations under the General Data Protection Regulation (GDPR) by failing to provide full access to the drivers’ personal data, and failing to provide complete information about its automated decision-making.
 
Following the recent controversy over the allegedly discriminatory algorithm used for A Level results in England, organisations should be clear about their obligations when using personal data in this way, and should take note of the key takeaways from this case.

Your obligations 

Under the GDPR, data subjects have the right to be informed of:

  • the existence of automated decision-making;
  • meaningful information about the logic involved; and 
  • the envisaged consequences for the data subject. 

The GDPR gives individuals the right not to be subject to a decision based solely on automated processing (i.e. with no human involvement) that produces legal effects concerning them or similarly significantly affects them.
 
There are circumstances where this right does not apply – for example, where the individual explicitly consents to the automated decision-making.
 
In these circumstances, employers must implement suitable measures to safeguard the individual’s rights and freedoms and legitimate interests. The individual should also have the right to obtain human intervention, to express their point of view and to contest any automated decision.

Three key takeaways for employers 

1. Have you assessed the risk of using automated decision-making?

A Data Protection Impact Assessment (DPIA) is almost certainly required if you intend to process personal data using AI systems. You must carry out the DPIA before the processing starts; it should identify the level of risk involved, and you should put suitable mitigations in place. You should also review the DPIA regularly, particularly when there is any change to the processing.

2. What are you telling your employees? 

Transparency and accountability are overarching principles of the GDPR. You should ensure your privacy policy is clear in relation to automated decision-making and contains all the information required under the GDPR. The Information Commissioner’s Office says “meaningful information about the logic” should not be a confusing explanation of the algorithm. It should simply describe the type of information collected and why this information is relevant.

3. What happens if a decision or process is challenged?

Organisations should be aware of the risks of using automated decision-making (for example, it may lead to discrimination) and should adopt measures to safeguard against them.
 
If an employee is not happy with the process used by their employer, they will have the right to ask for human intervention and to contest the decision. Employers should have a process in place that enables employees to do this.
 
Ed Hayes is legal director and Sarah Wall is trainee solicitor at UK law firm TLT  

This article was first published by People Management 

This publication is intended for general guidance and represents our understanding of the relevant law and practice as at September 2020. Specific advice should be sought for specific cases. For more information see our terms & conditions.
