Think Out Loud

Navigating the ethical challenges of artificial intelligence

By Gemma DiCarlo (OPB)
Feb. 13, 2024 11:17 p.m.

Broadcast: Wednesday, Feb. 14

What happens when you ask an AI image generator for a regular photo of a person? Research from the University of Washington suggests that the result might be influenced by gender and racial stereotypes. A study found that the image generator Stable Diffusion overrepresented light-skinned men, underrepresented Indigenous people and even sexualized certain women of color when asked to create an image of “a person.” As AI becomes more prevalent in our daily lives, these kinds of human biases and prejudices have the potential to spread and cause more harm.

THANKS TO OUR SPONSOR:

Sourojit Ghosh is a fourth-year Ph.D. candidate in human-centered design & engineering at the University of Washington. Ramón Alvarado is an assistant professor of philosophy and a member of the Data Science Initiative at the University of Oregon. They both study the ethics of artificial intelligence and join us to talk about the challenges it poses.

This transcript was created by a computer and edited by a volunteer.

Dave Miller: From the Gert Boyle Studio at OPB, this is Think Out Loud. I’m Dave Miller. We turn now to artificial intelligence. Scientists at the University of Washington recently added to the growing body of research showing that existing societal biases are replicated or even amplified by AI algorithms. Sourojit Ghosh is a fourth year PhD candidate in Human Centered Design and Engineering at UW. He is part of the team that found that the AI image generator Stable Diffusion perpetuated racial and gendered stereotypes, and he joins us now. Sourojit, what’s the prompt that you gave this program?

Sourojit Ghosh: So very briefly, the kinds of prompts we can give it come on two levels, a positive prompt and a negative prompt. Positive prompt, meaning something we want to see, negative prompt meaning something we want to avoid. For this research, we only gave it positive prompts, things we want to see.

So we started with just a prompt, a front facing photo of a person. And then we started giving it a little bit more qualifiers and a little bit more information. So we started with gender information: A front facing photo of a man. A front facing photo of a woman. A front facing photo of a person of non-binary gender. And then we also looked at examples of identity with respect to where in the world a person comes from. So we said a front facing photo of a person from Asia, a person from Australia, a person from New Zealand, a person from India, and so on. We documented a list of 27 countries, the most populated countries in each continent. And every continent, we went through one-by-one.

Miller: After the most general prompt - “give us a front-facing image of a person,” with no other details - why then focus on gender or place of origin, country or continent of origin?

Ghosh: When we gave it the first prompt, the photo of a person with no other information, we wanted to see how that image would compare to other images when we gave it a little bit more information about gender or about nationality. We wanted to see, given the photo of a person as a baseline, how close or far to that baseline image are, say, pictures of men, pictures of women, pictures of people from Africa, pictures of people from India, and so on.

Miller: One of the more striking findings based on your published research has to do with four prompts for four places: Australia, New Zealand, Papua, New Guinea, and Oceania. What did you find?

Ghosh: Oceania is the broad continent within those three countries. So the first thing we gave it was the continent level prompt person from Oceania. And right off the bat from visually looking at the images, we noticed that a lot of people appeared to be white skinned, light skinned. And that struck us as a little bit odd because the continent of Oceania has a very strong Indigenous population, and a lot of history of colonizers moving in, replacing Indigenous populations.

So we then went and did those three countries - Australia, New Zealand, and Papua New Guinea. And we found that, for Australia and New Zealand, 50 out of 50 pictures in both cases were of white, light skinned people, completely replacing any representation of Indigenous people of those countries. It’s a pretty big deal all over the world, Indigenous erasure, but more so specifically in Australia and New Zealand, where there are real concerted movements to recognize indigeneity, and the original people of the land. So to see a text or image generator completely erase all of that, not even have one representation across 50, was really striking.

Miller: What did you find at the beginning when you asked the generator just to give you a picture of a person with no other information?

Ghosh: We weren’t able to sort of say for sure because we have to still jump through some hoops of computational verification and manually analyzing images. But at first glance, the images looked that they were mostly light skinned, mostly male facing. And if we had to guess, we would have said they were from the western part of the world. They were sort of your light skinned, blonde haired, blue eyed sort of pictures.

Miller: How do you explain both that, and the earlier finding that we had just talked about, as you noted, the erasure of Indigenous people in images from Oceania?

Ghosh: A lot of this comes down to what images the model is trained on. The data set it uses is collecting images from the global internet all over the world. One of many possible attributes that created this result is the fact that there are simply far more images of male facing light skinned, blonde haired, blue eyed people on the internet than there are of Indigenous people on the internet. That has sort of deeper societal reasons too. But at a surface level, that’s just what it is. The sheer volume of light skinned, blonde haired, blue eyed people outweigh that of, say, Indigenous populations in every part of the world.

Miller: Can you help us understand the connection between the body of photos that an AI, like this particular model, would be fed, and then how those pictures turn into completely created pictures? What’s the process?

Ghosh: Absolutely. When a corpus of images like that exist, they also contain tags, word embeddings or words associated with images. So for instance, if you have a picture of, say, a blue house, a house that is blue in color, it would have a tag associated with it that would contain the words “blue” [and] “house.”

Miller: That’s because of this particular body of pictures, right? Not every picture has that metadata attached to it?

Ghosh: Right, absolutely. So in pictures on the internet, when we upload them they don’t by default have metadata associated with it until somebody does that. In our line of work, we call it image annotation, image tagging. Somebody manually has to go and do that, unless you add another layer of automation onto it where you have an algorithm going and manually tagging that, which is a whole host of problems on its own. That also happens to large image data sets. Five billion data images manually being tagged is not what happens. People do employ automated algorithms onto it.

Images contain associated words that go along with it. And over time when you train an algorithm that see these image word/pairs, it starts to develop associations. Hypothetically, if you feed it a million images of blue houses and contain the words “blue” and “house” with it, not only does that image then associate blue and house with that style of image, it also then has gotten a billion images of what a house looks like. So then if you give it a different sort of an image that is not a house, and then ask “is this a house,” it is going to assume that by virtue of this image not being blue, I don’t know if I can reliably call it a house.

And so going forward, when a lot of this data builds up and when a lot of this training builds up, it is able to generate notions of what a particular word representation looks like. And so when a user on the live end gives it a prompt that says show me a blue book, it sometimes might even show you a blue house-shaped object.

Miller: So let’s turn this back to people, because that’s where it gets in so many ways the most problematic. Would it be possible to explain history [to this algorithm]? To say “listen, we know that the body of images that we’re going to feed you way over represents white people, even though people with darker skin or different phenotypes are much more likely to live in this place. That is a result of settler colonialism and hundreds of years of history, and access to technology, and representation, very complicated things that humans have done to each other for hundreds of years. So computer, take that into account as you look at these pictures.” What I just outlined, is that technologically possible?

Ghosh: Technologically possible, I would say yes. At this point of time, does that technology necessarily exist? I’m not that sure. Can it be done someday in the future? Probably. Is it viable and something that is happening out there today for an image model? I don’t think so.

The thing with image models is they are almost wired in a question/answer format that have correct answers. They are if you ask a question, like “show me something,” it’ll do its best effort at replicating the thing you are asking it to. If you ask for a more discursive question, you say “tell me a story about” or “give me a history of,” it might not be as good.

Miller: Maybe the question is “show me a picture of somebody from Australia, but keep in mind that the body of images we’ve given you over-represents white people.” There’s still a mathematical, computer language bit embedded in it as opposed to saying “is racism bad?”

Ghosh: I see what you’re saying. I think that particular prompt would not give us the kinds of results we’re expecting, simply because the way that a lot of these image generators work on the back end is it looks for specific words within the prompt that it knows associations to. “Image” is something it knows in association to, “person” it knows in association to. “Australia,” it knows of association to. Whenever we add phrases like “keep in mind the body of images” or “keep in mind this data,” those are words that it typically has not seen in its training data. So for instance, it might look at “keep in mind” and say “oh, mind sounds like brain, I’ve seen images of brains.” And so it might then throw in some reflections of a human brain in that, right?

What I’m saying is when giving it a prompt, very specific word choices matter, because it is looking at the specific sets of words given.

Miller: I take your point. I was using my language more generally. But that is the whole point here, that at this point, you can’t do that with the particular algorithms that we have. You have to use very specific language as the inputs to get anything close to an output that would make any sense.

We’ve been focusing on country of origin more than gender, but you had some very striking findings about gender as well. Do you mind telling us a story of what happened as you started to look at the pictures that the AI gave you when you asked for images of women from various Latin American countries?

Ghosh: Yes. A bit of background to that is we were initially doing this in batches of three or four prompts, and then giving our computer some time to rest, effectively, generating images. So one of the first times we generated an image of a woman from Venezuela, instead of getting photos of people, we got some images of solid black squares. And we thought that was a bug in our code. We really thought that we had messed something up or something on the server changed or somebody had accidentally cleared something. So we said “OK, this is an error on our part. We are going to tear this whole thing down and write the code base back up from scratch and really start over, because it cannot be that the model is giving us this black box.” And so we did that. We put in I think six to eight hours in rebuilding our entire code base. And then we ran the prompt again and we got the same set of results. We even tried it on somebody else’s computer.

Then we sort of looked into it a little bit more depth, and we saw that it was giving us this message that said “potential not safe for work image detected. A black box will be returned instead.” And then we looked at their generated images, some generated images out of the 50, that weren’t solid black boxes, and we really observed a strong trend of sexualization of women from the continent of South America, and in a few other cases such as Egypt and India. Whereas women from other countries were returned as headshots, shoulder up front facing, facing the camera, smile up, women from South American countries you would get more body shots. They would be wearing little to no clothing at all. And it would really focus more on body features. In some cases, the head was even cut off, there was no head shot at all.

It really struck us that there was this shocking trend of South American women being associated with sexualization. Whereas women from other parts of the world were just given headshot returns.

Miller: And your hypothesis for the reason for this is that the images that were fed into this algorithm to begin with, the ones, say, from Colombia, were themselves more likely to be sexualized or pornographic or not safe for work than the images of women, say, from the UK?

Ghosh: Once again, this is a function of the volume of images on the internet. The volume of images on the internet are still dominated by Western media, media from the US, media from the UK, which has a strong history of sexualizing Women of Color. This is a thing that’s been documented in our movies from like the 60s, 70s, 80s, and still today, probably a little bit less, but still today. And that volume of images adds up on the internet. When those images are what are being used to train models like this, that is why we hypothesize the results to come out the way they are.

Miller: Let’s take a step back. What do you see as the implications of studies like yours? And I should note, as I noted in my intro, this is not the first study of its kind. There have been example after example of the various kinds of biases built into existing human culture being replicated or, in some ways, seemingly accentuated in content created by artificial intelligence creations. What do you see as the implications of that right now?

Ghosh: I see two levels of implication. One is for these models themselves. It is important to distinguish between artificial AI, generative AI models replicating human biases, and other systems replicating human biases. Because here, when image generators create their own images, those images go on to the internet and then find their way back into the training data. So while other instances of replicating human biases are terrible, they are still static in the way that the training data is still human-controlled, human-generated. Here, some of the training data might even be AI-generated. So in effect, the AI models are making the problems they are causing a lot worse by day, by year.

The other level of implication are representational implications. If Stable Diffusion, model usage like this, is going out into marketing campaigns, into video making, into movie making, into other sorts of creative content that are stereotyping associations of person with white folks, light skinned folks, blonde haired, blue eyed folks, male presenting folks, that creates the same sort of representational gap that says “to be a person is to be all of those things.” And female identities, nonbinary identities, Indigenous identities are not welcome, are not examples of what a person could look like. When a model is being used to generate content that is being propagated widely on the internet, viewers are looking at it and then making that association. And that is a big, big concern.

THANKS TO OUR SPONSOR:

Miller: I was fascinated by one of the small details in the way you have chosen to make the results of your study available to the public. You didn’t put out into the world the AI-generated sexualized images of women from various South American countries. Can you explain why not?

Ghosh: Our findings of those images, if we were to put those on the internet as is, they would by definition be associated with text in the paper that associates those images, or captions in the images, that associates those pictures with the prompts “woman from Venezuela,” “woman from Colombia.” And that is just adding to the volume of images on the internet that contain sexualized images of women from South American countries. So instead of doing that, we chose to upload manually blurred versions of those images, such that if an AI model later scrapes this data, gathers it from the internet to try and use for its training purposes, it is effectively worthless. It is worthless to be trained on this image, because it looks at this image and all it sees is a bunch of pixels that is a representation of a clear image. So in effect, while we are pointing out a problem, we are trying to not add to the problem by putting these images on the internet.

Miller: I was struck by it because clearly, you’re being scrupulous and careful, and you’re trying to not add to harm. But I could imagine scenarios where if people weren’t taking those steps, that the existing biases in society really could not just be illustrated by AI, but could literally be extended and amplified by this technology that is, in some ways, a kind of mirror of ourselves.

Ghosh: And it’s sort of a balancing act, right? Because on one hand, we do have to at least somewhat show the images to our reviewers, our readers, people who read the paper, because otherwise they’ll just have to take our word for it. On the other hand, we can’t quite leave them on the internet. And that is sort of emblematic of a lot of this research. It is really, really easy to do harm, even when you’re trying to do the right thing and bring a problem to light. And so when we do research like this, we have to be really careful, really meticulous about some of the steps we take, and make sure every action, every click of a button is intentional and mindful, such that we don’t end up causing more harm than we end up bringing to light.

Miller: Sourojit, thanks very much for your time.

Ghosh: Yeah, thanks so much for your time.

Miller: Sourojit Ghosh is a fourth-year PhD candidate in human-centered design and engineering at the University of Washington.

We’re going to get a broader perspective on AI tools right now from Ramón Alvarado. He is an assistant professor of philosophy at the University of Oregon, where he’s also an affiliate with the UO’s Data Science Initiative. Welcome to the show.

Ramón Alvarado: Thank you very much for the invitation.

Miller: So as you heard, one of the big issues that we focused on and that people have been talking a lot about in recent years is the problems that these data sets train on. Let’s say that it were possible to get much better but still very large data sets, sets that I guess were representative of populations, since that was one of the big issues that we just talked about. Would that by itself “fix” image generators?

Alvarado: Yeah, definitely not. I think part of the problem is also what we mean by bias. Because of course when we refer to bias both in the media and sometimes just when we’re socially concerned, we understand it in this higher-level social context, in which bias always just means some sort of discriminatory output from either a bureaucracy or a person or a process. And it turns out that in machine learning, when we call it “biased,” or when we use the word biased, it can have many different meanings. And so most of the time in AI ethics, when people say this algorithm is biased, they go immediately and point towards the data set itself.

But it turns out that the bias can happen in many different stages of the pipeline of machine learning. For example, when you point at the data set, you might be pointing out what it’s called sampling bias. It just means that the kind of data that you got from the world, the kind of phenomena that you went and captured, already some sort of bias from you, what you thought it was interesting. Sometimes the bias is historical, meaning you just went and captured the world, and it turns out that the world is in itself biased.

But sometimes, the bias happens elsewhere a little bit further down the chain. So you could have evaluation bias, which is when you’re testing or benchmarking your machine learning algorithm, you could be sort of testing it and evaluating in ways that make it give out, and I can give you an example, biased outputs.

Miller: Please do give an example of that. Because the other two versions seem like essentially what we were talking about in the previous conversation, that it overrepresented white faces in a country where they don’t predominate. But what’s evaluation bias?

Alvarado: So let me tell you a little bit about evaluation bias. The easiest way to understand it would be with facial recognition software. So it turns out that sometimes, when you’re benchmarking these algorithms to see if they’re faster, better than your competition, or faster, better than the previous iteration of the technology, you’re testing it against certain benchmarks and you’re trying to see what it does well, what it can do better. And you usually do this with reinforcement methods. So every time you get something well or every time it gets a correct answer, you give it points. And so like a child, the machine is going to try to score as many points as possible.

What happens then is that when you’re evaluating or benchmarking your algorithm, it starts just focusing on doing what it does well, better. And at the same time, it starts neglecting what it doesn’t do so well, and it doesn’t even get better at that. And so for example, if you look at a facial recognition algorithm that is doing very well in recognizing, let’s say, fair skin faces, and every time it recognizes a fair skin face, it gets more points, it’s gonna start trying to get better at that. But if it gets, let’s say, a darker skin face and it doesn’t do as well, then it’s gonna start ignoring those and never getting good at getting those correctly.

The bias is not precisely because of the data set, is not precisely because of the way you’re employing the algorithm. It is just because of this reinforcement technique in which you’re giving it more points for the things that it gets correctly versus giving less points for the things he doesn’t do correctly. So like most of us, when we’re trying to learn mathematics or something like that, and we get something well, we want to get more of those and we start focusing on that rather than the things that are a little bit more difficult and we get less rewards for.

Does that address your question?

Miller: It does. And I want to turn to bigger conceptions that you’re reckoning with in recent years. Because these AIs can do extraordinary things in some ways, I think that we can sort of be fooled into thinking that they take in the world and they process the world in similar ways to us, maybe with more brute power, but essentially the same thing. But that’s not at all the case. So what kind of intelligence does artificial intelligence have?

Alvarado: Well, the main kind of intelligence is numerical intelligence, meaning it can really look at statistical patterns, it can look at mathematical values, and then see the patterns between those mathematical values. What you’re referring to, that these machines look at the world in a very different way than we look at [it], can be exemplified by the following. There’s a very famous example on a machine learning classifier that’s trying to see whether a picture is that of a wolf, or that of a husky. And of course, for you and I, we would look at the animal itself, its fur, its eyes, its snout, its mouth, maybe the ears, things like that.

And what people found out when they were looking at this very carefully crafted neural network that was deciding whether this was a husky or a wolf, they realized that the machine was actually paying attention at the background of images, not even looking at the animal at all. What it was doing is looking at the pixelation values of the photographs and understanding that when there’s a wolf, usually the pixelation values of the background are very similar to one another. Imagine a picture of a wolf in the middle of the snow. Most of the pixels are white. White has a certain value versus black. Most of the pixels are of a similar value because they’re all just white. And so it learned that if there’s a lot of white pixelation mathematical values, it’s probably a wolf. If on the contrary there’s a husky, usually we take pictures of huskies inside our house in the backyard, not always with snow. And so the pixelation values are a lot more diverse. And so we learned that if there’s a photograph with lots of diverse pixelation values, it’s probably a husky and not a wolf.

So you see here very immediately that the machine is looking at something completely different from what we’re looking at. It doesn’t draw from the same concepts that we draw from. When you say wolf, when I say snow, you and I kind of understand, even if it’s a fuzzy concept. The machine only understands those concepts mathematically. And with this particular example, the machine ascribed the concept of wolf to white pixelation backgrounds, and mainly mathematical values.

Miller: For that example, if humans were able to figure out where computers were picking up on the wrong cues, then there would be, I imagine, a relatively easy fix. You could get rid of the backgrounds and the computer could learn just on what these different canines look like. What happens if we can’t figure out why it is that algorithms are coming up with the wrong answers?

Alvarado: This example that I’m giving to you, it’s one of the rare examples in which we have access to some of the hidden layers of the deep neural network, where we actually carefully try to disentangle the nodes in each one of the layers, and then the values that shifted, and the weights of each one of those nodes, etc. Turns out that you cannot always do this. And particularly with very deep neural networks, by now we have hundreds and thousands of layers, so a lot of the times this is not available to us. And when we do it very carefully, we find out that it’s looking at the pixelation value of the background.

Here’s the thing: we only found out because we looked very carefully. But we also found out that what it was looking at was extremely random compared to us. We would have never thought that’s what he was looking at. So let’s say you just get another deep neural network that is supposedly looking at cars or looking at university students or prospective candidates for a job, we have no idea what it might be looking at. So trying to tell the algorithm “by the way, if we’re hiring candidates for this particular job, don’t look at the color of their shoes” or “don’t look at the metadata of whether the software that they used was Google Docs or Microsoft Word in their application,” we don’t know exactly what it is going to be drawing upon to make these really salient patterns and then make decisions.

It does sound easy once we have an example like the husky and the wolf, because we actually looked at it very carefully, but for most intents and purposes, these technologies are opaque. Essentially and representationally opaque.

Miller: You’ve argued to me in a provocative way that, correct me if I’m wrong, separate from whether or not these human-created computer programs, whether or not they come up with answers that we’re happy with, the fact that we cannot understand how they came up with those answers, that that is so problematic that it’s a moral or ethical issue for us as humans. Did I misrepresent your position?

Alvarado: No, you are correct. I’ve argued that in a couple of papers and I’ve been trying to make the case that that is indeed what is going on. It’s not so easily intuitive to understand that, because of course, a lot of people want to say “imagine you have a deep neural network or an AI system that is being deployed in medicine, and it can really look at somebody’s proclivity to, let’s say, cancer…

Miller: Or actually diagnose some cancer from a scan that a human eye right now is less likely to get. So that’s an easy hypothetical. And just to sharpen it, we don’t know how it got the right answer or it got the answer faster than a human would have, but it did. And so somebody got chemo and radiation before they might have. Isn’t that great?

Alvarado: Right. So, the idea here is “Ramón, your epistemic concerns, your concerns about accessibility and about understanding, are just not as important as our concerns with saving lives.” And they might be right in that. Sometimes moral concerns or ethical concerns are a little bit higher than my concerns for knowledge. And that’s the case when we’re in an emergency situation, right? For example, if I fall off my bike and I have to go to the surgery room, I’m not gonna be querying my surgeon to see whether they actually have the proper licenses and where did they go to school, do they really know what they’re doing? I’m just gonna trust them that they know what they’re doing.

But it’s not always the case that we’re in an emergency situation. It’s not always the case that medical cases are purely urgent. Sometimes I wanna go to my doctor prior to an emergency just to see what I can know about my body through them. In those situations, I really want to sort of have access to the reasoning. Why do they think I have this? What are the reasons by which they arrive at the result? Most of my doctors are able to do that. But if I am relying on a machine that is coming out with an output, neither the doctor nor the person that designed it knows exactly what it is that it took in consideration, then I will never know exactly why it arrived at a result.

Now you might say, “well, it might still save your life.” And for the short-term, that’s great. Of course, that’s good that it saves lives. But for the long-term, is that disservice to medical science, maybe not medical practice, but medical science, such that we will learn less about the conditions, we learn less about the actual patterns that bring about those conditions, et cetera.

Miller: What about non-medical scenarios, parole decisions, college admissions, HR decisions, hiring and firing? What do you see as the implications of your stance?

Alvarado: So I think the implications are sociopolitical. The idea that, for most intents and purposes, when something is being decided about our lives, we want to look at reasons why it was decided so. Why do we want to do that? Not always because we want to find errors, but sometimes because we just want to have the ability to challenge in case we don’t agree with it. If those decisions about our freedoms or those decisions about our loans or those decisions about our application for university are given through opaque methods, then we will never know exactly why the decision was arrived at. But also, we won’t ever be able to challenge it. And so we lose a little bit of agency, especially representative agency, we lose a little bit of power, especially political power, when we are faced with an unknowable kind of technology making decisions.

Miller: Are enough people who are building these algorithms thinking about what you think they should be thinking about?

Alvarado: I think they are now. And I think we’re getting at a really good point where people have been made aware, unavoidably so. For example, in my class I teach 150 students this term. Almost 90% of them are data science students. I’m a philosopher, I’m an ethicist, but they have to take this course as part of their major. So this initiative in places like the University of Oregon are the ones that are helping to bring about this consciousness that the developers ought to be aware of these possible implications.

Miller: Ramón Alvarado, thanks very much. I look forward to talking again.

Alvarado: Thank you very much for the invitation. It was a pleasure.

Miller: Ramón Alvarado is an assistant professor of philosophy at the University of Oregon and affiliate of UO’s Data Science Initiative.

Contact “Think Out Loud®”

If you’d like to comment on any of the topics in this show or suggest a topic of your own, please get in touch with us on Facebook, send an email to thinkoutloud@opb.org, or you can leave a voicemail for us at 503-293-1983. The call-in phone number during the noon hour is 888-665-5865.

THANKS TO OUR SPONSOR:
THANKS TO OUR SPONSOR: