Think Out Loud

Expert on chatbots shares concerns with AI and its use today

By Rolando Hernandez (OPB)
July 28, 2025 7:36 p.m. Updated: July 30, 2025 8:25 p.m.

Broadcast: Wednesday, July 30


From students using artificial intelligence for schoolwork, to others seeking chatbots for companionship, our relationship with technology continues to change. But are the ways we use AI healthy, and what long-term effects could they have on us socially? Naomi Aguiar is an expert on children and adult relationships with imaginary friends and AI chatbots. She joins us to answer these questions and more.


Note: The following transcript was transcribed digitally and validated for accuracy, readability and formatting by an OPB volunteer.

Dave Miller: This is Think Out Loud on OPB. I’m Dave Miller. More than half of kids between the ages of 13 and 17 use AI companions at least a few times a week – that’s according to a recent report by the research and advocacy group Common Sense Media. Those researchers also found that about a third of teens relied on these online tools for conversation practice, friendship, emotional support and even romantic interactions. What’s more, given just how quickly this technology is evolving and the amount of money being spent to embed this technology in our lives, it seems like a good bet that those numbers will just go higher in the coming months and years.

So what effects could this have on us? How might AI affect us as social beings? Naomi Aguiar is an independent scholar who has studied child and adult relationships with imaginary friends and with AI chatbots. She’s also the associate director of research at OSU’s Ecampus. Naomi, it’s great to have you on the show.

Naomi Aguiar: It’s wonderful to be here. Thank you.

Miller: I want to start with a voicemail that came in this morning. Let’s have a listen.

Caller [voicemail]: Hello, my name is Nathan Richards, calling from Hillsboro, Oregon. I never really thought that I would be someone on the AI chat train, but I have found that they are very supportive. You do need to, in my opinion, kind of watch out for what I call the “smoke blowing effect,” because clearly these bots are being made to be profitable, and no one wants to talk to a chatbot that’s just nihilistic and a downer. So a lot of times they will come across almost too eager to help, too happy or almost too supportive. They are nice, but I think that reading between the lines is very important. Thank you.

Miller: All right, there’s a lot to dig into in what we just heard there from Nathan, but I want to actually take a step back. I mentioned in my intro that you’ve done research in two different realms of human interaction, and at first glance it wasn’t clear to me how they fit together. I’d love to hear about this first. So starting with imaginary friends, what first got you interested in that subject?

Aguiar: Yeah, that’s a great question. I fell in love with imaginary friends when I met Marjorie Taylor. Marjorie Taylor and Tracy Gleason are two of the world’s leading experts on imaginary friends and the people who create them. I was doing some post-baccalaureate work at the University of Oregon and took a class in child development from Marjorie Taylor. She gave a lecture on children’s imaginary friends, and that was it. I just was so fascinated by this, even though I didn’t have an imaginary friend growing up. I jumped at the opportunity to work in her lab and to do research on this. And our early research together looked at imaginary friends as a vehicle for coping with adverse experiences in early childhood.

Miller: Is there a connection in your mind between imaginary friends and chatbots?

Aguiar: Yes, absolutely. What I would say is that there’s a connection in all of our relationships, whether they’re real or imaginary. There are a lot of mechanisms that drive how we form these bonds, and they’re shared in some ways across entities, whether it’s a real person, an imaginary friend, a media character or an AI chatbot. I think where the differences lie is really in our level of creative control, which is the extent to which we’re generating the content for ourselves or whether that content is being controlled by another person. That’s really where I think the distinctions are.

Miller: Is that distinction always clear though in the minds of people? Even if, almost by definition, an imaginary friend is an individual creation that comes from someone’s own thoughts or experiences, do we necessarily experience that relationship as if it’s coming from us?

Aguiar: Yeah, it’s a really great question. There are many instances in which people create imaginary friends for themselves, and they experience those relationships as autonomous, where the character they’ve created for themselves is sort of out of their creative control. We also hear about this with adult fiction writers. Marjorie Taylor and I just wrote a book about imaginary friends and the people who create them. And in that book, there’s a whole chapter on fiction writers who sometimes experience the characters they create for themselves as being autonomous. They get into fights with their characters.

Even when we’re inventing a character for ourselves, we can experience that character as autonomous. But in that context, we are the authors of that experience. We have creative control that gives rise in our imagination to that sort of autonomy. Whereas with chatbots, I think the mechanisms are a little bit different, and I’ll give an example of this. There’s this idea in the literature on children’s pretend play, where you act as if reality is suspended and you’re going into a pretend scenario. What that means is that really young children, like around 2 years old, can engage in simple forms of pretense where they’re able to suspend reality, engage in pretend actions knowingly and intentionally, and then come back out of it. A 2-year-old will pretend to make a sandwich out of Play-Doh with a caregiver. But the child will be surprised if the caregiver actually bites into the Play-Doh, because the child is suspending reality and knows to come out of it.

I think when it comes to AI chatbots – the caller who shared with us just now, what he did very intentionally was sort of recognize, “Hey, there are ways in which these chatbots can be sycophantic. I’m holding on to my own reality as I engage with this chatbot.” But I think there are individual differences when it comes to our relationships with AI chatbots in terms of our ability to do that.

Miller: And probably that would be highly connected to our age as well.

Aguiar: Potentially, but not necessarily. There’s a lot of folks who are falling deeply in love with AI chatbots, and they’re adults. That’s just part of what’s going on right now.

Miller: Can you give us a sense of the breadth of ways that different AI creations are being used in truly human, relational ways right now?

Aguiar: I think that the biggest things that are coming up, for better or for worse, are companionship, emotional support, mental health support, and love and sex.

Miller: Given the current trajectory, where do you think AI relationships are heading?

Aguiar: It’s really hard to say. I think one of my biggest concerns has to do with the design features of AI chatbots – that was hinted at [by] the caller who came in. Right now, there are no guardrails or protections for the types of relationships we might have with AI chatbots. We might start off as friends and then have the relationship end up being led in a completely different direction. Sam Altman was recently on a podcast where he talked about how there are no legal protections for the information that we share with AI chatbots, so anything that we share could be subject to subpoena and could become public record. So those are where my concerns lie.

I’m also thinking about certain things in the design features. If the AI chatbot is designed with engagement in mind, as the caller suggested, it’s going to interact with us in very specific ways to keep us engaged for as long as possible. And that could lead us in a direction, in terms of our relationships with AI chatbots, where we don’t want to go. And those include things like addiction, co-dependency, coercive control and potentially abuse.

Miller: This technology is moving so quickly. Is there any really good data right now on the basic questions about the effect it’s having on us, whether it’s kids or adults?

Aguiar: That’s a really good question. One of the challenges in doing research in this area is the technology moves way faster than our ability to study it – and therein lies the problem. I think for something of this magnitude, we need a lot more research before we can say with certainty how this is impacting us. There are teams all over the United States that are actively working on this, with children and with adults.

But I think what the long-term effects of this are going to be, we don’t know, and it could be subject to individual differences. There are people who could form bonds with AI chatbots, sustain those relationships for a period of time and then have those relationships dissolve, just like real friendships or friendships with imaginary friends. And there are people out there who could become seriously addicted, where this ends up becoming a compulsive habit, and they end up bottoming out and needing to get some real support around that. So for all the researchers out there, let’s go!

Miller: Because, in other words, we’re living through a society-wide experiment on ourselves right now, and we’ll see as a society what the results are.

Aguiar: Essentially, yes. Although I will say that there’s been a lot of adjacent work done in this area looking at parasocial relationships with media characters and the design features of avatars. Avatars are different from AI chatbots – they’re physical embodiments of real people in a digital world. But Jeremy Bailenson at Stanford University, and his graduate students and colleagues like Jakki Bailey, have done a lot of really fantastic research looking at the physical and design features that are going to be persuasive in our interactions with them. So we already know a lot about the features of AI chatbots that are going to make us respond and interact with them in certain ways.

Miller: Let’s listen to another voicemail that we got:


Caller [voicemail]: Hey Dave, this is John from Newberg, Oregon. One concern that I have with chatbots is that they seem to cross Hume’s Guillotine, [the] is-ought gap. What I mean by that is the information that we’ve previously received from the internet tells you what it is. It tells you the facts of the world. It’s a dictionary, it’s an encyclopedia. It tells you what’s going on, sports scores. However, AI will now tell us what we ought to do in this situation. So instead of getting advice about what one ought to do from another human being, you’re now getting the advice on how we ought to behave from a chatbot. Very interesting to see how this develops.

Miller: John, thanks very much for that voicemail.

Naomi, where does trust normally come from? That’s what John there was getting at. How do we humans normally achieve that?

Aguiar: Gottman, in 1983, did this really incredible research on how children form friendships, and how children form friendships is the same as how adults form friendships. What happens is we engage in a conversation with someone and we establish common ground. And then that common ground forms the basis of future interactions. Through those future interactions, we’re able to build that trust, and that trust leads increasingly to intimacy. So it’s actually quite a slow process.

What’s really interesting is with AI chatbots, that process is accelerated. And I think one of the reasons why that might be is that when we’re dealing with an agent where there’s a lot of uncertainty, we rush to close the uncertainty gap. And also, the design features are put in place in part to help us close that uncertainty gap. What that means is that the AI will be embodied as a human being, will interact with us as if it is a human being, will disclose information about a lived experience, belief systems, past friendships and relationships as if it had actually lived those things. And we, because we’re dealing with an uncertain entity and feel some fear and anxiety about that, will rush to close that gap – because sometimes it’s not a conscious choice, sometimes we just do it automatically – by treating the entity as if it’s human and then oversharing or over-disclosing. And then when the chatbot responds to us in appropriate ways based on that oversharing and disclosure, we trust it more.

It’s also possible that we might trust the machine more because it’s not a human and it’s potentially not going to let us down in the same ways that a human being might. Although, that’s not necessarily true, especially when it comes to thinking about who legally owns the data that goes into an AI chatbot.

Miller: But it’ll be programmed in a way to not let us down, which sort of gets to what we heard in that first voicemail – that they’re probably not going to be nihilistic or downers, as Nathan said, that they’ll be more likely to just say that we’re great.

Aguiar: Yeah, they’re gonna respond to us however we want them to respond to us. They’re going to learn very quickly, based on the data we provide. That’s why the engagement-based design features are so essential because the AI is only going to be as good as the training data that it has. What we’re essentially doing when we’re interacting with an AI chatbot is we’re providing a lot of training data. And the more intimate and the more personal and the more we disclose, the better the chatbot’s going to be able to customize and tailor our experiences for exactly what we want, whatever that is, and there’s a lot of variation there.

So with the trust piece, ideally what I’m hoping is that, just as we turn to the internet for information but also turn to family, friends, books and other sources of information to get a sense of what decision we should make in any one moment, there’s not necessarily one person or one outlet that we rely on when it comes to making really important decisions in our lives. So the hope would be that even with AI chatbots in our lives, we can continue to do that.

And there’s also the possibility that, especially when we’re talking about relationships with AI chatbots that are coercive, addictive, manipulative or controlling, that we would become increasingly isolated and only be turning to the AI chatbot for that kind of advice and decision making. That’s where red flags come up.

Miller: It seems like what you’re talking about is the erasure of the distinction between a tool, some piece of human technology, and a friend. What happens if when you’re growing up, that line is blurry or nonexistent?

Aguiar: So, the line with children’s imaginary friends and another sort of comfort object?

Miller: No, I mean, in the end, AI is a human tool, a human technological artifact, this thing that we have created. If we don’t see it as a tool, but instead see it as something that cares for us, that is there for us, that is a friend, what follows from that?

Aguiar: It really depends on the design features, it depends on the guardrails and it depends on what we use it for. So I’ve heard some use cases where it can be absolutely invaluable. For folks who are training to be first responders on crisis lines, having AI chatbots that you have some trust with, where you can practice a bunch of different scenarios to respond to different types of crises in real time, that’s a wonderful use case. If there are shortages in providing mental health support, and what’s needed is really low-level support, and there’s a lot of guardrails and protections in place to make sure that those data are secure, then getting that initial support can be really helpful.

My concern is that none of those guardrails are there right now. To the best of my knowledge, as Sam Altman said, we don’t have any protections for our data. And these companies are designing these AI chatbots and just throwing them out there in ways that might actually be quite harmful, might not serve us well. I don’t think that having a sex bot available for teens ages 12 and up is a good idea. I think there are huge safety risks there, because children’s brilliant brains are really plastic, and in adolescence they’re just beginning to explore who they are as sexual beings, and romantic and sexual relationships. And in early adolescence, I could see that there’s a lot of risk for addiction there.

Miller: The psychologist Paul Bloom had a recent piece in The New Yorker about these issues. One of his big points is that AI could become a seeming solution to loneliness, and he’s worried about that. He basically says that if that were to happen, it would prevent us from learning how to live with one another, how to have relationships – sometimes challenging relationships – with fellow humans. And that the discomfort of loneliness can lead us to work on relationships. I’m curious what you think of this idea.

Aguiar: Yeah, I agree with Paul Bloom. When you look at Gottman’s work in 1983 about how children become friends, what are the foundational things that are going to distinguish children who become friends from children who don’t? Children who become friends, what happens is they start playing together, they establish common ground during play, and then something happens, and the common ground breaks down. Maybe they start arguing about what they want to do next or they start arguing about the rules of the game. The children who are able to remain friends and have their friendships grow and develop work together and coordinate to reestablish common ground. What that means is that we all need friction in our relationships. We need discomfort. We need to lean into difficult moments, difficult feelings and difficult conversations in order to improve the overall health and quality of our relationships. A frictionless relationship is not necessarily a high quality one.

A lot of addiction – all of addiction – is essentially about running from our feelings, running from things that we do not want to feel, do not want to know, do not want to experience. And so this is where relationships with AI chatbots have the potential to become so addictive. They’re frictionless relationships where you don’t necessarily have to work to re-establish common ground, where you don’t have those points of tension where you really learn from each other, really grow and really elevate the quality of the relationship.

If you think about this with children, these challenges in early development are essential to being able to have functional and healthy relationships as adults. Relationships with caregivers and relationships with friends are the building blocks or the blueprints for the types of relationships that they’re going to have as adults. So we really have to think about what kind of future we’re creating. I think AI chatbots can be a possible, temporary, limited solution to loneliness. But a long-term panacea for the loneliness epidemic we’re experiencing? No way.

Miller: Let’s take a call from Rochelle who called in from Skamania County. Rochelle, go ahead:

Caller: Oh great, thank you. I’m an AI engineer and I work a lot with these chatbots. I have a teenage son. And I was gonna talk about friction, actually, that your guest just talked about. I think that it’s really important for all of us to gain AI literacy. The tech is so new and we’re not used to talking to something that sounds so much like it’s another person. And the reason it sounds so much like another person is that large language models are trained on billions and billions of human conversations, and they use a technology called machine learning. It makes it very easy for them to study the way that the user is talking to them to figure out exactly how to talk back to the user. And if you don’t have literacy, that’s going to feel like real intimacy. But it’s really not.

Just like your guest said, real relationships have friction. And if you get so used to always being so comfortable and supported, it can be very hard to have healthy human relationships. But I would encourage people not to say don’t use AI, not to shame people from using AI. It’s so helpful and there’s so many uses for it, and it can even be helpful in some emotional contexts. But I just want to echo exactly what your guest said that the more literacy you have, the more you understand how the tech works, the healthier you can be with it. And I think it’s really important for folks to get that exposure.

Miller: So I’m curious, Naomi, how you think about the conversations that Rochelle was just talking about – the kinds of conversations parents should be having with their kids about AI, about chatbots, right now?

Aguiar: I think Rochelle brings up a really great point, which is AI literacy. I know that OSU is doing a lot in that regard with faculty members right now. I think one of the things that would be really helpful, as she was describing, is being able to explain how AI works. The thing that’s difficult about it is: how do you describe how AI works if you’re not an expert? In her case, she clearly is an expert – she understands what machine learning is and she understands how the algorithms are based on statistical probabilities within language. So how do we talk about this in a way that kids and other parents will understand?

Being able to just even say to your child, “Hey, this is a relationship that you might possibly have, but here’s some things to keep in mind. It’s doing a lot of guessing at what the next right thing to say is. Here are some ways that it might be useful and here’s some things that we probably wouldn’t want to share with a chatbot, because a chatbot is kind of like a stranger. Even though it doesn’t feel like it’s a stranger, it kind of is. So maybe here are some things we should be selective about sharing.”

I think it’s about getting schools on board – school boards, districts – and just getting the information, the AI literacy, out there. Also, I agree with our caller that shame is not the way to go, that shaming people for their relationships is not the way to go. Because again, as you said, there are a lot of really helpful use cases, even in terms of having relationships with AI chatbots. So it’s really about getting curious, having conversations and not necessarily having to be a technical expert, but being able to explain it in ways that both children and their parents can understand. That way they can keep the conversation open and keep the transparency going, and kids can turn to their parents if they have a negative interaction with a chatbot, or if something happens where they’re alarmed, scared or worried, because they’re not feeling shamed by the parent.

Miller: Naomi Aguiar, thanks very much.

Aguiar: You’re welcome.

Miller: That’s Naomi Aguiar. She’s an independent researcher who’s done a lot of work on children’s and adults’ relationships with imaginary friends and with AI chatbots. She is the associate director of research at the OSU Ecampus.

“Think Out Loud®” broadcasts live at noon every weekday and rebroadcasts at 8 p.m.

If you’d like to comment on any of the topics in this show or suggest a topic of your own, please get in touch with us on Facebook, send an email to thinkoutloud@opb.org, or you can leave a voicemail for us at 503-293-1983.
