[Interview] How to remedy biases built into algorithms

Posted on : 2023-06-23 16:30 KST Modified on : 2023-06-23 18:01 KST
We spoke to Frank Pasquale ahead of his presentation at the second annual Hankyoreh Human & Digital Forum in Seoul
Frank Pasquale, professor of law at Brooklyn Law School.

Frank Pasquale is an expert on artificial intelligence (AI) algorithms, the powerful technology that sways our lives and society. A professor at Brooklyn Law School and the author of the book “The Black Box Society,” Pasquale says that while algorithmic decision-making has become powerful enough to decide many people’s fates in hiring and other areas, the process remains largely opaque and carries a major risk of amplifying biases and prejudices against marginalized groups.

In order to prevent this, the professor proposes state and private audits of algorithmic decisions. The Hankyoreh interviewed Pasquale over email prior to his participation in the second annual Hankyoreh Human & Digital Forum in Seoul on June 16.

Hankyoreh: In the era of ChatGPT, algorithms have infiltrated and dominated our lives. You seem to be very critical of this development. Why?

Frank Pasquale: I think that a lot depends on the area in which the algorithms are used. For example, I applaud medical research that is designed to use pattern recognition to identify cancers more quickly.

The focus of “Black Box Society” was on another realm of algorithms: algorithms that rank, rate, sort, and evaluate people. I divided these algorithms into three categories: reputation algorithms, which claim that they can assess the quality of a person; search algorithms, which rank content, including websites; and finance algorithms, which perform many functions with respect to money (such as evaluating the quality of investments, or executing trades).

In those areas, my worry is that older, slower, but more accountable interactions among people are being replaced by faster, unaccountable decisions by machines. For example, an algorithmic lender might use inaccurate or inappropriate data to determine whether or not you get a loan, or what the interest rate should be.

Hankyoreh: Discrimination is better identified and remedied on online platforms than in offline markets, according to some. Do you see any positive aspects of algorithms?

Pasquale: Sometimes platforms can be extremely opaque, hiding how they are treating their workers or consumers on the platform. The best way to leverage the positive potential for algorithms is to require reporting on how they are affecting diverse, vulnerable groups who have historically been the victims of discrimination.

Hankyoreh: There’s been a rapid proliferation of AI interviews based on the idea of fairness – that is, that they can cut down on biases and discrimination. Do you think this is true?

Pasquale: The problem is that in many cases, the machines are using data that was prepared by humans. And this data can be quite discriminatory.

Indeed, the law professor Ifeoma Ajunwa has argued that automated video interviewing is “a new phrenology.” The old phrenology baselessly presumed to make judgments based on the shape of persons’ heads, and it was quite popular in the 19th century. The new phrenology can deny persons opportunities simply because of their accents, or because the way they express emotions is different than the optimal emotional expression in the dataset used to determine who is an optimal job candidate and who is not. This leads to what Ajunwa has called the “paradox of automation,” which is that efforts to reduce discrimination by automating hiring can in fact end up increasing discrimination.

Hankyoreh: Among the dangers of algorithms, how serious are the problems of bias and discrimination against minorities?

Pasquale: There are serious risks of bias and discrimination against minorities. For example, if a firm has largely promoted one type of person to management and then asks an AI to discern the “traits in general” that predict success, the AI may simply find that same type of person, even if the traits it is effectively looking for (like majority race, maleness, or majority religious background) don’t contribute to performance. Amazon stopped using an AI recruiting tool for this reason.

Hankyoreh: How do we remedy these sorts of problems?

Pasquale: There are two important steps we need to take. First is to encourage audits, both governmental and private, of the results of automated processes designed to rank, rate, and evaluate persons. Second, I think we need to give individuals at least some opportunities to apply outside of the algorithmic process.
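As a purely illustrative sketch of what the first step could look like in practice (the groups, records, and cutoff below are hypothetical, and the 0.8 threshold simply echoes the familiar US “four-fifths” rule of thumb rather than any method Pasquale prescribes), an outcome audit can be as simple as comparing selection rates across groups and flagging large gaps:

    # Minimal sketch of an outcome audit for an automated screening process.
    # The groups, log entries, and 0.8 threshold (the "four-fifths" rule of
    # thumb) are illustrative assumptions, not a prescribed methodology.
    from collections import defaultdict

    def selection_rates(decisions):
        """decisions: list of (group, selected) pairs -> selection rate per group."""
        totals, selected = defaultdict(int), defaultdict(int)
        for group, was_selected in decisions:
            totals[group] += 1
            selected[group] += int(was_selected)
        return {g: selected[g] / totals[g] for g in totals}

    def disparate_impact_flags(decisions, threshold=0.8):
        """Flag groups whose selection rate falls below `threshold` times the highest rate."""
        rates = selection_rates(decisions)
        best = max(rates.values())
        return {g: rate / best < threshold for g, rate in rates.items()}

    # Hypothetical audit log of an automated hiring screen.
    log = ([("group_a", True)] * 40 + [("group_a", False)] * 60
           + [("group_b", True)] * 15 + [("group_b", False)] * 85)

    print(selection_rates(log))         # {'group_a': 0.4, 'group_b': 0.15}
    print(disparate_impact_flags(log))  # {'group_a': False, 'group_b': True}

The point of such a check is not the arithmetic, which is trivial, but the reporting obligation behind it: a government or private auditor can only run it if the operator of the system is required to log and disclose outcomes by group.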

Hankyoreh: You emphasize the open use of technology as an alternative to secretive algorithms. How is this possible?

Pasquale: One example of a more public algorithmic system is the Canadian immigration “points” algorithm. This algorithm lets persons estimate how likely they are to be able to immigrate to Canada, and assigns points to things like age (younger is better), ability to speak French and English, and education. We can contest whether the algorithm is fair. But we can only do so effectively because it is open, and we need more models like this as more complex algorithmic systems start to be used in government decision-making.
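For illustration only, a points-style rule of the kind Pasquale describes can be published as a short, inspectable scoring table that anyone can recompute; the categories, weights, and passing score below are invented for this sketch and are not the actual Canadian criteria:

    # Minimal sketch of an openly published points-style scoring rule.
    # All categories, weights, and the cutoff are hypothetical, chosen only to
    # show why such a system is easy to inspect and contest; they are not the
    # real Canadian immigration criteria.
    POINTS_FOR_AGE = [(30, 12), (40, 10), (50, 6)]   # (maximum age, points awarded)
    POINTS_FOR_EDUCATION = {"secondary": 5, "bachelor": 15, "graduate": 25}
    POINTS_PER_LANGUAGE = 10                          # English and/or French
    PASSING_SCORE = 45

    def score_applicant(age, education, languages):
        points = 0
        for max_age, pts in POINTS_FOR_AGE:
            if age <= max_age:
                points += pts
                break
        points += POINTS_FOR_EDUCATION.get(education, 0)
        points += POINTS_PER_LANGUAGE * len(languages)
        return points

    applicant = {"age": 28, "education": "bachelor", "languages": ["english", "french"]}
    total = score_applicant(**applicant)
    print(total, "passes" if total >= PASSING_SCORE else "does not pass")  # 47 passes

Because every weight is visible, a rejected applicant can see exactly which factor cost them points, which is the kind of contestability Pasquale argues more complex government systems should also offer.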

Hankyoreh: How can algorithms help humans and coexist with them?

Pasquale: Those who are designing and implementing algorithms should always bear in mind the dignity and well-being of the persons they are judging. When algorithms decide which benefits, such as welfare, a person is eligible for, they need to be transparent and persons who are adversely affected need to be able to challenge them.

Hankyoreh: What is the role of the state in regulating algorithms?

Pasquale: One of the ideas I have explored with the Italian legal scholar Gianclaudio Malgieri is a licensing system for AI, which would require new, high-risk models to be licensed by a government authority before they are released widely. This type of licensing could help ensure a human-directed AI future, rather than a future dominated merely by the pursuit of profit.

By Han Gui-young, Hankyoreh Human & Digital Institute researcher

Please direct questions or comments to english@hani.co.kr
