Technological advancement has reached a pace that is very difficult to keep up with. New technologies demand a shift in the models used to approach the problems that inevitably appear as side effects of such advancement. The technologies that have attracted the most attention lately are complex algorithms operating under the labels of big data, artificial intelligence, and some of its forms such as machine learning and deep learning. Computer algorithms have existed as long as computers have. However, only recently have they achieved a level of complexity that allows them to significantly influence the lives and rights of individuals.
This is where regulators come in, in line with their task of maintaining justice within society, by restraining the aforementioned technologies and keeping them under control. However, these complex algorithms do not only create problems. Algorithms can also contribute to solving the very problems they are part of, via so-called algorithmic regulation. Regulators worldwide have displayed interest in utilising algorithms in decision making. This includes, for instance, the use of algorithms to optimise decisions on resource allocation or to enact legislation that anticipates problems. Another example of the use of algorithms in law can be found in the very promising field of their use in judicial activity.
Let us begin by pointing out some adverse effects of algorithmic activity. The first and probably most important group of effects arises from algorithmic bias: instances where discriminatory decisions are made based on certain characteristics of individuals. Cases of discrimination through algorithmic decisions are known in practice, for example in employment, loan approval, assessing the likelihood of a prison sentence, and similar. Considering the capacity of algorithms to learn independently, minor emanations of (un)conscious biases can evolve into extreme value systems. As a living example, we refer to Microsoft's bot Tay, which developed a fascist outlook within 24 hours of its launch.
The next group of adverse effects of algorithms is so-called algorithmic manipulation. Based on data gathered from a certain group of people, general conclusions about humans are drawn and then used, for example, to target the placement of news.
Furthermore, there are adverse effects of algorithms that can be classified into the following groups:
- Algorithmic law violation – where algorithms are built to intentionally deceive regulators and authorities in general, for example through price rigging, the use of algorithms in propaganda or disinformation campaigns, and election manipulation.
- Algorithmic advertising scams and brand slander – for example, placing a company's advertisements next to some form of hate speech or terrorist messages.
- Algorithmic unknowns – the learning capacity of computers can make them too complex for human understanding, which is accompanied by significant uncertainty. It is not hard to imagine other forms of threat originating from algorithms, but the difficulties of those mentioned are especially pronounced.
It has become apparent, therefore, that there is a need to regulate algorithms, because their unconditional use brings imbalance and injustice into society.
General Data Protection Regulation
The use of artificial intelligence systems, which are one of the basic emanations of “smart” algorithms, demands regular data processing on a large scale. The capacity of artificial intelligence to independently gather data and then learn and draw conclusions from it makes the whole picture immensely prone to risk. In that regard, one kind of regulation of algorithms has already occurred in the form of the General Data Protection Regulation (GDPR).
Among the requirements set forth by the GDPR, it is important to emphasise the right of an individual “not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”. Here, “automated processing” means precisely data processing via some algorithm. The aforementioned right of an individual exists only if the decision is made solely through automated processing.
Profiling individuals is also of extreme importance to this topic. Many systems perform automated data processing; however, most often an algorithm performs profiling as a special kind of automated processing.
What is profiling? The GDPR defines it as follows:
“‘profiling’ means any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements”
Algorithms are capable of drawing conclusions about individuals based on personal data. Organisations that adopt a data processing system based on some algorithm gain the ability to draw conclusions about persons that are intimate in nature. Such data processing represents a significant risk to an individual's right to privacy.
Due to all of the above, there is a probability that automated processing will result in a risk to the rights and freedoms of individuals. Therefore, it is often required to perform a data protection impact assessment.
Algorithmic Accountability Act
Taking into consideration the relevance of the topic of “smart” algorithms and their influence on human lives, the Algorithmic Accountability Act (AAA) was proposed in the USA on 10 April 2019. Its purpose is to regulate the automated processing of personal data. The AAA would authorise the Federal Trade Commission (FTC) to require entities that use, store, or share personal data to conduct impact assessments of automated decision systems and data protection impact assessments in certain instances.
An automated decision system is a computational process, including one derived from machine learning, statistics, or some other data processing or artificial intelligence technique, that makes a decision or facilitates decision making performed by humans. An impact assessment is an analysis used to assess the effects a certain procedure for creating an automated system has on accuracy, fairness, bias, discrimination, privacy, and security. Entities subject to these rules would be any person, association, or corporation over which the FTC has jurisdiction and which makes more than $50 million per year, possesses or controls personal information on at least one million people or devices, or primarily acts as a data broker that buys and sells consumer data.
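The covered-entity test just described is a simple disjunction of three criteria, which can be made concrete in a short sketch. The function and field names below are our own illustrative choices, not terms from the bill, and the thresholds simply restate the criteria above.

```python
# A minimal sketch of the AAA's covered-entity test as summarised above.
# The class and field names are illustrative, not taken from the bill.
from dataclasses import dataclass

@dataclass
class Entity:
    annual_revenue_usd: int   # gross annual revenue
    records_held: int         # people or devices whose data is held or controlled
    is_data_broker: bool      # primarily buys and sells consumer data

def is_covered_entity(e: Entity) -> bool:
    """An entity is covered if it meets ANY of the three criteria."""
    return (
        e.annual_revenue_usd > 50_000_000
        or e.records_held >= 1_000_000
        or e.is_data_broker
    )

# A small startup below every threshold would not be covered:
print(is_covered_entity(Entity(2_000_000, 50_000, False)))  # False
# A data broker is covered regardless of size:
print(is_covered_entity(Entity(2_000_000, 50_000, True)))   # True
```

The disjunctive structure matters in practice: a tiny company that primarily brokers consumer data would still fall under the Act.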
An assessment of the relative benefits and costs of the automated decision system in light of its purpose would be performed.
It is necessary to take into account:
- Data minimization practices.
- The length of time that personal information and the results of the automated decision system are stored.
- What information about the automated decision system is available to consumers.
- The extent to which consumers have access to the results of the automated decision system and may correct or object to its results.
- The recipients of the results of the automated decision system.
The AAA introduces the concept of a high-risk automated decision system: an automated decision system that poses a significant risk to the privacy or security of consumers' personal information, or one resulting in or contributing to inaccurate, unfair, biased, or discriminatory decisions impacting consumers.
A system is also high-risk if it involves the personal information of a significant number of consumers regarding race, colour, national origin, political opinions, religion, trade union membership, genetic data, biometric data, health, gender, gender identity, sexuality, sexual orientation, criminal convictions, or arrests, and similar. Under the AAA, companies would have to assess a wide range of algorithms, which would cover a significant part of the tech industry.
The proposal of this bill is most likely the result of the great disputes that had unfair and biased algorithms at their centre. Consider, for example, the lawsuit of the Department of Housing and Urban Development against Facebook, which states that its advert-targeting system unfairly limits who is able to see adverts regarding real estate, or Amazon's employment tool that allegedly discriminated against women.
Algorithms instead of judges?
Algorithms can be mentioned not only as objects of law but potentially as sources of law. It has been argued that the judicial profession is undergoing a crisis for two reasons. The first has to do with the fact that judicial systems enjoy decreasing legitimacy over time – people do not trust the judiciary. The second concerns the increasing efficiency of algorithms. In theory, every judicial task could be performed entirely by algorithms, and judges could therefore be replaced. Although there is consensus that everything, including such traditional professions as the legal profession, should be digitalised, the question is to what extent that should be done.
Our legal tradition applies a principle that effectively reduces the judge to a robot: the judge is presented with the circumstances of a specific case, which they then compare with an abstract legal norm. If they ascertain that the abstract norm adequately describes the real-life case and can be applied to it, they issue the decision that the norm has provisioned in advance.
EXAMPLE OF DECISION MAKING
“The court shall sentence a person who has committed murder to x years of prison. Mark has committed murder. Mark shall be sentenced to x years of prison.”
As demonstrated, even the reasoning of a judge is itself a special algorithm. This could bring us to the conclusion that judges do not have any creative role – it could be said that they are “machines”. Therefore, can a complex algorithm, which performs very complicated calculations in a short amount of time and is capable of independent learning, be a better judge than a human?
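The syllogism in the example above can be written down literally as a program: the abstract norm is the major premise, the facts of the case the minor premise, and the sanction follows mechanically. The norm table and sanction strings below are invented purely for illustration.

```python
# A toy illustration of the syllogistic "judge as algorithm" described above.
# The norms and sanctions are invented examples, not real legal provisions.
NORMS = {
    "murder": "x years of prison",
    "theft": "y years of prison",
}

def decide(defendant: str, offence: str) -> str:
    # Major premise: the abstract norm; minor premise: the facts of the case.
    if offence not in NORMS:
        return f"No applicable norm for '{offence}'; no sanction follows."
    # Conclusion: the sanction the norm has provisioned in advance.
    return f"{defendant} shall be sentenced to {NORMS[offence]}."

print(decide("Mark", "murder"))  # Mark shall be sentenced to x years of prison.
```

Of course, the hard part of judging lies precisely in what this sketch assumes away: deciding whether the abstract norm actually describes the messy real-life facts at all.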
In 2016, a team of scientists created a system for anticipating the outcomes of procedures being led, or that had been led, before the European Court of Human Rights. Accurate predictions amounted to a high 79%. They stated: “Recent progress in Natural Language Processing and Machine Learning offers us tools for building a model of prediction that could be utilised for discovering patterns that drive decisions made by courts. This could be useful, for lawyers and judges, as an auxiliary tool for swift case solving.” Much like in the aforementioned research, the tool constructed within the Case Crunch Lawyer Challenge anticipated court decisions with an accuracy of 86.6%, while (human) lawyers made successful predictions in 62.3% of cases. The time required for the algorithm to make a prediction was a few seconds per case, while lawyers required half an hour on average.
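To make the idea of text-based outcome prediction concrete, here is a deliberately tiny from-scratch sketch: a naive Bayes classifier over bags of words, one of the simplest techniques in the NLP toolbox. The training “case summaries” and labels are entirely invented; the actual research used thousands of real judgments and far richer features, and this is not their model.

```python
# A minimal bag-of-words naive Bayes classifier, sketching the kind of
# text-based outcome prediction described above. Training data is invented.
import math
from collections import Counter

def tokenize(text: str) -> list[str]:
    return text.lower().split()

class NaiveBayes:
    def fit(self, texts: list[str], labels: list[str]) -> None:
        self.label_counts = Counter(labels)
        self.word_counts = {lab: Counter() for lab in self.label_counts}
        for text, lab in zip(texts, labels):
            self.word_counts[lab].update(tokenize(text))
        self.vocab = {w for c in self.word_counts.values() for w in c}

    def predict(self, text: str) -> str:
        best, best_score = None, -math.inf
        total_docs = sum(self.label_counts.values())
        for lab in self.label_counts:
            # log prior + log likelihoods with add-one (Laplace) smoothing
            score = math.log(self.label_counts[lab] / total_docs)
            total_words = sum(self.word_counts[lab].values())
            for w in tokenize(text):
                score += math.log(
                    (self.word_counts[lab][w] + 1)
                    / (total_words + len(self.vocab))
                )
            if score > best_score:
                best, best_score = lab, score
        return best

# Invented toy "case summaries" with fictional outcomes:
texts = [
    "applicant detained without judicial review",
    "detention prolonged without review hearing",
    "fair hearing held within reasonable time",
    "proceedings concluded within reasonable time",
]
labels = ["violation", "violation", "no-violation", "no-violation"]

model = NaiveBayes()
model.fit(texts, labels)
print(model.predict("detained without review"))  # violation
```

The point of the sketch is only that such a model finds word patterns correlated with outcomes; it has no notion of legal reasoning, which is why researchers describe these tools as auxiliary rather than decisive.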
However, as much as, at least in theory, a judge should be a “robot” that receives data, processes it, and, in line with objective and clear rules, makes the decision regardless of the values they themselves hold, their sense of morality, or their emotions, they still cannot be replaced. There are features inherent to humans indicating that they will (most likely) never be completely replaced, the legal profession included, and that especially concerns activities that involve working with people. For one, machines at this point cannot listen, feel, or show empathy. At least for now, machines cannot put information into context – social, cultural, political, historical, and the like – and therefore cannot look at the bigger picture. The capacity for intuition is inherent to humans, and it often makes the difference needed to achieve good results or to make the optimal decision.
Regardless of the fact that technological liberals detest regulation and consider it a burden that only slows down progress, they should make peace with the fact that regulation is impending and truly necessary. The question remains how this should be done. Existing partial and general regulation on the protection of personal data and on competition has shown its advantages and shortcomings, and we can soon expect special laws dealing with the use of algorithms in sectors such as health care, marketing, and the judiciary. Likewise, recent years have shown that regulatory and supervisory authorities lack the knowledge, special skills, and technical resources to tackle new technologies. It is important for regulators to study this subject carefully, educate themselves, identify the risks, and then regulate this matter in a way that adequately protects human rights and freedoms while keeping pace with the progress of society and technology.
Marija Bošković Batarelo LL.M. Law and Technology, Domagoj Bodlaj, mag.iur