The Unprecedented Downside of Artificial Intelligence: Proxy Discrimination
April 23, 2021
Processing bulk data has been delegated to automated systems, particularly artificial intelligence. This makes the process easier, faster, and more convenient for waiting consumers.
Despite these benefits, mistakes and loopholes in the algorithms are possible. These miscalculations can then harm unsuspecting population groups.
AI technology, particularly machine learning (ML), relies largely on detecting patterns and flagging unusual activity. This routine can cause it to reinforce discriminatory patterns.
Even when the patterns seem seamless and the algorithms seemingly perfect, unfair bias and discrimination can be directed at a particular group. This phenomenon is referred to as proxy discrimination.
What Is Proxy Discrimination?
A well-known psychological test asks participants to sort terms into two categories as fast as they can. Under that time pressure, people fall back on snap associations and make unwarranted distinctions, much like what happens in proxy discrimination.
Policies and laws are in place against discrimination, including workplace protections covering gender, ethnicity, and age, among other characteristics. However, the use of modern AI, ironically, makes discriminatory cases difficult to identify. In fact, it can be an instigator.
How Proxy Discrimination Heightens Discrimination
The proxy discrimination that results from the use of machine learning causes a disparate impact. In legal terms, disparate impact is an employment or housing practice that disproportionately harms a particular group, even when no discriminatory intent is stated.
A common workplace example is employers actively avoiding applicants who are mothers. Age-old beliefs hold that mothers cannot be good workers because they cannot focus on their jobs. However, several landmark labor law cases have been won by employees because such practices prove to be an explicit form of discrimination.
When taken into the context of artificial intelligence and proxy discrimination, the data processed by the algorithm is also gathered automatically. If the data is collected through web scraping, human intervention is no longer necessary. This means the algorithm does not actively search for that “mother applicant.”
However, because the mother applicant may present characteristics different from those of the normal or base group, the machine will still flag her as an outlier.
In this manner, proxy discrimination becomes a highly specific subset of disparate impact, in which a seemingly harmless algorithm harms a particular group. Years ago, proxy discrimination was considered a subset of intentional discrimination.
But given the nature of AI, it was concluded that it is not intentional at all. It is an unfortunate by-product of advanced technology.
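The mother-applicant scenario above can be sketched in a few lines of Python. The data, feature names, and scoring rule below are illustrative assumptions, not a real hiring system; the point is only that a model which never sees parental status can still disadvantage parents through a correlated proxy feature (here, a career gap):

```python
import random

random.seed(42)

# Hypothetical synthetic applicants: the protected attribute ("is_parent")
# is never given to the scoring rule, but a proxy feature
# ("career gap in years") correlates with it.
applicants = []
for _ in range(1000):
    is_parent = random.random() < 0.5
    # Assumption: parents are more likely to have a career gap (the proxy).
    gap = random.gauss(2.0 if is_parent else 0.5, 0.5)
    skill = random.gauss(5.0, 1.0)  # skill is equal across both groups
    applicants.append((is_parent, gap, skill))

def hire(gap, skill):
    # A naive "learned" rule that penalizes career gaps, a pattern an
    # algorithm could pick up from biased historical hiring data.
    return skill - 1.0 * gap > 3.0

# Compare hire rates per group; the protected attribute is used only
# for measurement here, never for the decision itself.
rate = {True: [], False: []}
for is_parent, gap, skill in applicants:
    rate[is_parent].append(hire(gap, skill))

for group, decisions in rate.items():
    label = "parents" if group else "non-parents"
    print(f"hire rate for {label}: {sum(decisions) / len(decisions):.2f}")
```

Removing the protected attribute from the inputs does not remove the bias; the proxy feature carries it through, which is exactly the disparate impact described above.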
Taking Advantage of Proxy Discrimination
Proxy discrimination may be unintentional, but some users find ways to turn the process to their advantage. Unfortunately, such intentional use is geared toward making discriminatory decisions and choices seem rational and purely mechanical.
These users treat proxy discrimination as a workaround that strips their choices of any air of discrimination. It becomes the safety-net answer whenever the reasoning behind their decisions is questioned.
Proxy Discrimination in the Insurance Industry
AI is proving to be an essential tool for the insurance industry. It comes in handy when handling bulk data, fraud detection, rate setting, and pricing, among other things. On the flip side, there are also risks that come with it, proxy discrimination being one of them.
As previously mentioned, AI does not intentionally discriminate between strands of information; it is meant to be free from any kind of bias. However, its hyper-focus on patterns allows it to seize on proxy data that is often associated with a particular group, and that proxy can be any kind of data.
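In an insurance setting, the mechanism looks something like the sketch below. The rating factor, group proportions, and premium formula are made-up assumptions for illustration; the takeaway is that a pricing rule using only a "neutral" factor, such as a zip-code risk rating, can still produce systematically different premiums for a group the model never sees:

```python
import random

random.seed(7)

BASE_PREMIUM = 500.0  # assumed flat base rate, illustrative only

# Hypothetical synthetic portfolio: group membership is never an input
# to pricing, but the "zip_risk" rating factor correlates with it.
policyholders = []
for _ in range(1000):
    in_group = random.random() < 0.5
    # Assumption: group members are concentrated in zip codes that the
    # insurer has rated as higher risk.
    zip_risk = random.gauss(1.4 if in_group else 1.0, 0.1)
    policyholders.append((in_group, zip_risk))

def premium(zip_risk):
    # The pricing rule uses only the seemingly neutral rating factor.
    return BASE_PREMIUM * zip_risk

# Group membership is used only to measure the outcome, never to price.
avg = {}
for group in (True, False):
    prems = [premium(z) for g, z in policyholders if g == group]
    avg[group] = sum(prems) / len(prems)

print(f"average premium, group members: {avg[True]:.0f}")
print(f"average premium, others:        {avg[False]:.0f}")
```

Even though the group label appears nowhere in the premium calculation, the correlated rating factor reproduces the disparity, which is why this kind of pricing can expose a firm to the legal risk described below.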
Any kind of client activity can be taken out of context by the algorithm and become subject to proxy discrimination, and this can be troublesome for the firm. Those subjected to this discrimination can file lawsuits and demand compensation for damages.
Most firms will take extra measures to prevent this from happening. However, some people will use it to their advantage: aware that proxy discrimination happens, they will deliberately adjust the elements of the algorithm to their liking.
When the elements are tuned precisely, the outcome aligns with what they want, even if it is highly discriminatory. Questions about the outcome are unavoidable, but they can simply pass it off as a product of the algorithm and claim they had nothing to do with it. At the end of the day, they have a seemingly legitimate reason behind a discriminatory and harmful decision.
Be Wary of the Other Risks
Aside from the dangers of proxy discrimination, users must also look out for the other dangers related to AI technology. It is always better to be safe than sorry.
When thinking about future innovation, most people picture robots and other kinds of automation. These innovations are already here: industries increasingly prefer robots in the development and manufacturing stages of their products.
Take the automobile industry, for example. Robotic arms, rather than people, now handle much of the car manufacturing process, and autonomous cars are commercially available. The immediate risk of fully automating these processes is the loss of jobs previously held by skilled employees.
Data Security Risks
With almost all data now stored in the cloud, digital security is also at risk when AI is used maliciously. AI-assisted hacking and phishing are among the most prominent attacks on digital security, and the danger of personal data stored in the cloud being exploited is just as imminent.
The use of AI technology is a double-edged sword. While the benefits are palpable, there are pressing concerns and risks, proxy discrimination among them. Proxy discrimination is difficult to avoid: what was once a neutral data point can inflict unprecedented harm on a particular population, and there are no proven countermeasures yet. It is best to stay alert whenever dealing with this technology.