We use a great many electronic gadgets, embrace artificial intelligence, and welcome robots into our lives. This trend is likely to accelerate as we allow these systems to make more and more decisions for us.
Computer algorithms have long been used to assess insurance applications and credit scores, among other things. Often the people relying on these algorithms do not understand the principles involved and accept the computer's decision with no questions asked.
As machine learning and predictive modeling grow more sophisticated, complex algorithmic decision-making is likely to reach into every field. Individuals will, as a result, have even less insight into the web of decisions they are subjected to when applying for a job, healthcare, or credit. Resistance is building, however, mainly in the EU, as two Oxford researchers have found in their analysis of a law expected to come into force in 2018.
In response to the growing number of corporations misusing personal data, the EU has adopted the General Data Protection Regulation (GDPR), which imposes severe fines on offenders. GDPR also contains a clause entitling citizens to have any machine-driven decision process explained to them.
GDPR also codifies the 'right to be forgotten' and regulates the overseas transfer of EU citizens' personal data. Although these provisions have been widely discussed, few people are aware of two other clauses in the regulation.
The researchers believe these two clauses could heavily affect the rollout of AI and machine learning technology. According to a paper by Seth Flaxman of the Department of Statistics at the University of Oxford and Bryce Goodman of the Oxford Internet Institute, the clauses could even make illegal much of what is already being done with personal data.
For instance, Article 22 gives individuals the right not to be subject to a decision based solely on automated processing when that decision produces legal effects concerning them or similarly significantly affects them.
Organizations carrying out this type of activity can rely on several exemptions. One permits automated profiling (in theory, any kind of algorithmic or AI-driven profiling) provided the organization has the individual's explicit consent. This raises the question of whether insurance companies, banks, and other financial institutions can then deny an application for credit or insurance purely on the strength of automated profiling, simply because the individual has consented to it. Such a refusal can clearly have a significant effect on the person turned down.
According to Article 13, the individual has the right to a meaningful explanation of the logic involved. However, organizations often treat the inner workings of their AI and machine learning systems as closely guarded secrets, even when those systems are designed specifically to process individuals' personal data. Once GDPR takes effect in 2018, this may change for any organization applying such algorithms to the data of EU citizens.
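To make the idea of a "meaningful explanation of the logic involved" concrete, the sketch below shows one possible form such an explanation could take: a per-feature breakdown of an interpretable scoring model. The feature names and weights are entirely invented for illustration; real credit-scoring models are proprietary and far more complex, and nothing here reflects any actual institution's method.

```python
import math

# Hypothetical, hand-set weights for an illustrative credit-scoring sketch.
# These features and values are invented; they only demonstrate the kind of
# per-feature explanation a "right to explanation" could require.
WEIGHTS = {"income_k": 0.04, "debt_ratio": -3.0, "late_payments": -0.8}
BIAS = -0.5

def score(applicant):
    """Logistic score in [0, 1], read as a probability-like approval score."""
    z = BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def explain(applicant):
    """Each feature's contribution to the linear term, largest impact first."""
    contribs = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sorted(contribs.items(), key=lambda kv: abs(kv[1]), reverse=True)

applicant = {"income_k": 55, "debt_ratio": 0.4, "late_payments": 2}
print(f"approval score: {score(applicant):.2f}")
for feature, contribution in explain(applicant):
    print(f"  {feature}: {contribution:+.2f}")
```

A linear model like this is trivially explainable; the tension the researchers point to is that the more accurate, opaque models organizations actually deploy do not decompose this neatly.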
This means proponents of the machine learning and AI revolution will have serious questions to address in the near future.