Advances in technology are making it easier for organisations to make decisions about us using computer programs. It’s obvious why automated decision-making is appealing – computers can analyse more data at faster speeds than people, and in theory they are less likely to make mistakes or display bias. But there are also risks involved. One concern is that it’s more difficult for people affected by the decisions to understand why a particular decision was made and to question the outcome. Thomas Oppé sets out the current legal situation:
All uses of personal data in the UK are regulated by the Data Protection Act (DPA). The DPA sets out the principles an organisation must follow and gives individuals rights to ensure their information is being used fairly. It also provides some additional rights when an automated decision has been made.
The first thing you can do if you’re concerned about an automated decision is request information from the organisation responsible by making a subject access request. You can find more information about making a subject access request on the Information Commissioner’s Office website. You are usually entitled to receive a copy of the personal data which is held about you.
You are entitled to ask an organisation making decisions about you for the information they have about you and the logic involved in the decisions
If you are concerned about an automated decision you can also ask to be told about the logic involved in decision-making, but you need to specifically request that type of information. Organisations might approach this requirement in different ways. The phrase “the logic involved in decision-making” comes directly from the legislation, but there have been few cases testing how it should be interpreted and implemented. Neither the DPA nor the European Directive provides more guidance on interpretation.
This is part of the subject access right, which requires that data be provided in “an intelligible form”. Any subject access request should automatically be assumed to include the personal data which was fed into the algorithm (i.e. data which is about the data subject – and even this is not a straightforward issue). The ‘logic’ behind a decision is listed separately and has to be specifically requested, which indicates that you can obtain other information related to the operation of the system.
This information about the logic does not have to be provided to the extent that it is a trade secret, which is a significant consideration in a commercial context (e.g. credit scoring) but may carry less weight where government data is involved. As a starting point this information might include:
The extra rights covering automated decision-making might help you further, but they only apply in limited circumstances: you can only use them if a decision is made solely by automated means. In reality, many decisions might use an automated process – such as giving you a score based on certain information – but there is often a person somewhere confirming the final decision. In these cases the other requirements of the DPA still apply.
If you are worried about an automated decision, you have the right to ask for it to be reviewed by someone at the organisation. You can also contact the organisation before a decision is made, asking them in advance not to decide something using only an automated process. This doesn’t guarantee a different outcome, but it does mean you don’t have to rely solely on the computerised process.
However your data is used, it is ultimately the organisation’s responsibility to use it fairly and to adequately explain its decisions. In general terms this means you should have an understanding of how and why the information is processed, and that the processing shouldn’t cause you unwarranted harm or detriment. Exactly how this will be implemented may have to be developed and adapted in response to changing technologies and uses.
If you’re concerned about how your personal information has been used, you can contact the Information Commissioner’s Office. The ICO enforces the DPA and can investigate complaints about how information is used, as well as giving advice to individuals and organisations on their rights and responsibilities.
Shadow of the Smart Machine is a series of blogs and events bringing together key representatives from government and business, along with academics in law, computer science and the social sciences, to explore the emerging ethical, accountability, and regulatory concerns surrounding algorithm-supported decisions in government, with a special focus on machine learning.