[Interview] For more human-centered AI, ethics should be a part of the development process

Posted on : 2023-06-23 18:35 KST Modified on : 2023-06-23 18:35 KST
The Hankyoreh interviewed James Landay, the vice director of the Stanford Institute for Human-Centered AI, prior to his participation in the second Hankyoreh Human & Digital Forum
James Landay, vice director of the Stanford Institute for Human-Centered Artificial Intelligence.

In a blog post in March, Bill Gates wrote that two demonstrations of technology had struck him as revolutionary: the graphical user interface in 1980, and OpenAI’s ChatGPT. With these words, he forecast the massive waves ChatGPT would make across the world.

Gates predicted that, just as the graphical user interface behind the Windows operating system let laypeople with no knowledge of programming languages use computers, effectively touching off the information revolution, technology that lets people operate artificial intelligence in natural language would open the door to a new world in which anyone and everyone could put AI to a wide variety of uses.

As the ChatGPT sensation sweeps the globe, James Landay, the vice director of the Stanford Institute for Human-Centered AI, has come to Korea to participate in the second annual Hankyoreh Human & Digital Forum. A co-founder of the institute, Landay is a professor of computer science at Stanford University and a foremost expert in the field of human-computer interaction. At the forum, Landay gave a talk on the topic “Why AI for Good isn’t Good Enough: A Case for Human-Centered AI.”

The following is an interview the Hankyoreh conducted with Landay over email prior to his participation in the forum.

Hankyoreh: You’ve suggested that human-centered AI should get its ideas from the human brain. Wouldn’t AI developed with ideas from the human brain become a dangerous, human-like intelligence that could replace or threaten humans?

James Landay: The Stanford Institute for Human-Centered AI has three principles that guide our work: (1) because of AI’s large human impact, we should involve interdisciplinary experts from the start, include them in all stages of AI development, and ensure that ethics and an understanding of social impacts inform everything that we do; (2) AI will disrupt some of the workforce, and government should develop programs to account for that, but we should put our focus on applications that augment rather than replace workers; (3) although deep learning (i.e., large neural networks) has led this current stage of AI breakthroughs, we should look to the human brain for inspiration on new ideas and algorithms as we create more sophisticated AI systems and underlying technologies.

Hankyoreh: You have pointed out the limitations of the way social scientists criticize AI after its use has become widespread. You emphasize the idea of “embedded ethics,” which is applying ethics to AI in the development and planning stages. What are some specific ways that embedded ethics can be implemented?

Landay: It’s not that I think social scientists and journalists should not look for problems with existing AI algorithms after they are put out in public. It is simply that this approach is not enough. Yes, we should point out these problems, but we should also focus on preventing as many of them as possible from occurring in the first place.

Embedded ethics is an educational approach where we try to embed short ethical lessons directly into the traditional course content (e.g., in an advanced course on natural language processing) rather than students only learning this material from a single ethics course. (Our Stanford undergraduate program does both!) Embedded ethics by itself is not enough either. Again, I propose a different approach to the entire AI application design process.

Hankyoreh: You have said that human-centered AI should be approached at the user, community, and society levels. However, there are great intrinsic differences in the needs of different users and groups. Different societies pursue different values and standards depending on their political systems, cultures and languages. Do you think, therefore, that various types of AI should be customized for each user, group, and society? Or can universal needs and values be applied?

Landay: User-centered design, the technique used by most software companies today, is the first step, and this process already supports the development of products that work well for a particular user group. It also recognizes that software applications will have to be designed specifically for different user groups. On a broader scale, software internationalization is a technique that tries to adapt software design across countries. Unfortunately, software internationalization rarely goes beyond adapting a product’s language; it seldom touches the underlying values embedded in the product’s design.

To truly create human-centered AI we will need to use methods that can try to tease out these different values at the community and society level and design AI systems with the appropriate values in mind. Although there may be some universal needs and values, that will not be enough for many, if not most, products.

Hankyoreh: There is a lot of public concern about the threat AI poses to job security. How valid are these concerns?

Landay: Although it might not be true for all jobs, for many of today’s jobs that use information technology heavily (i.e., “knowledge work”), I believe using AI will be required to achieve both efficient and high-quality work. Workers who don’t learn AI and use it in their jobs will be more at risk of losing their jobs or advancing more slowly in their careers than workers who learn to use these technologies to their advantage. So, this does indeed imply that much of our population, especially those in these types of “knowledge work” jobs, will need to learn how to apply AI in their own work to stay competitive.

Hankyoreh: What changes do you think conversational AI will lead to in terms of human-computer interaction?

Landay: Like many previous human-computer interaction advances (e.g., the graphical user interface), the conversational interface will not replace all instances of the previous generation. There are some things speaking or typing on a computer is good for (e.g., asking a question I’d like an answer to) and there are some situations where it is not the best choice (e.g., drawing the exact layout of the rooms in my house or showing me where I am while driving using a map rather than words).

In addition, sometimes using language together with a more direct way of specifying particular objects is more efficient than either interface “modality” alone. For example, in a multi-modal interface, I might point at a chair in a room and say, “Order me another chair like that one.” This is more efficient than using language alone to complete that command. We will see many instances where we will use conversational interfaces in the future (e.g., think Alexa or Google Home on steroids), but we will also use GUIs and multimodal interfaces for many things that we already use GUIs for today.

Hankyoreh: Some people have criticized the algorithms applied to social media as being designed to drive addictive use for profit. This seems to be an example of HCI being put to abusive use. How can human-centered AI help ameliorate these problems?

Landay: Unfortunately, human-centered AI or embedded ethics education will not solve all problems. Nor will self-regulation by industry. Government policy and law will also need to be brought to bear to help ameliorate some of these problems. Purposely harmful algorithms that negatively impact society, whether by encouraging disinformation and violence or by encouraging addiction, may require legal remedies to stop and prevent these misuses of technology.

Hankyoreh: Tens of thousands of people, including yourself, have signed an open letter published by the Future of Life Institute (FLI) calling for a six-month moratorium on the training of the most powerful AI systems and for safety protocols to be put in place. Where do you stand?

Landay: Although I signed this letter out of concern for some of the near-term risks of AI, I do not fully agree with some of the claims it puts forth about AI eventually becoming all-powerful and potentially leading to the end of human civilization. The letter asks, for example, “Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” I disagree with these predictions.

On the other hand, the letter started out stating some of the more near-term risks of AI (e.g., “Should we let machines flood our information channels with propaganda and untruth?”). This risk is one that I think we need to really be watching out for right now. Although I didn’t feel any companies would stop their ongoing AI research and I disagreed with some of the extreme predictions in this letter, I felt drawing public attention to the risks of AI was important and that this letter would gain that attention. I believe the letter did accomplish that goal.

By Koo Bon-kwon, director of the Human & Digital Institute

Please direct questions or comments to [english@hani.co.kr]
