
Artificial intelligence is impacting business, government, social media and countless aspects of modern life. Child health and safety advocates say the effects of social media and the rise of chatbots and other AI on children can be devastating, leading to depression and, in the worst cases, suicide. Oregon lawmakers passed a bill in the short session to protect children and youth from these emerging technologies, one that will provide guardrails for adults using chatbots as well.
We talk with one of the bill’s sponsors, Oregon Sen. Lisa Reynolds (D - Portland). And we hear from Kristin Bride, the executive director of the Carson J. Bride Effect, which she founded to protect kids from predatory tech practices, after the death of her son by suicide in 2020.
Note: The following transcript was transcribed digitally and validated for accuracy, readability and formatting by an OPB volunteer.
Geoff Norcross: It’s Think Out Loud on OPB. I’m Geoff Norcross. During the last lawmaking session in Salem, a Senate committee was hearing testimony about a bill on establishing guardrails for artificial intelligence platforms when they interact with children. An adviser for the American Psychological Association said, “Imagine if your 5-year-old’s favorite character or teddy bear talked to them, knew their name and told them what to do.”
One senator responded, “Well, I think we’re all pretty much horrified here.” That senator was Lisa Reynolds, a Democratic senator from Portland. She is the co-sponsor of that bill, and she joins us now in our Portland studios. Senator, welcome to Think Out Loud.
Lisa Reynolds: Thank you so much for having me and Kristin Bride, and for covering this topic.
Norcross: Kristin Bride is the executive director of the Carson J. Bride Effect. She’s a supporter of this bill for personal reasons that we’ll get into. Kristin, it’s good to have you as well.
Kristin Bride: Thank you.
Norcross: I should point out that this conversation will deal with suicide. Lisa, let’s start with you. What’s the status of this bill?
Reynolds: This bill passed through two committees in the legislature and both chambers with just two no votes, out of 90 folks who voted. And it’s awaiting a signature by Governor Tina Kotek.
Norcross: Is she gonna sign it?
Reynolds: Everything I’ve been told by her team is yes, it has a glide path to receiving her signature.
Norcross: And when it becomes law, what will it do differently?
Reynolds: So this will become law in January of 2027. It does a couple of things. First of all, it will remind people, not just kids, that they are interacting with artificial content and not a human being. And then when a user is expressing thoughts of harming themselves or thoughts of suicide, the AI system needs to intervene by referring that person to care and to help, with a hotlink to 988 or to a youth line. And lastly, when it’s interacting specifically with minors, there are guardrails around sexual content and manipulative behaviors by the chatbot.
One more thing, which is really important, is this bill provides a private right of action so that people harmed by AI chatbots have a venue in which to sue the companies.
Norcross: What prompted you to bring this bill?
Reynolds: Well, I had been reading and I’d been hearing from patients, because I’m also a pediatrician, about the manipulative behavior of chatbots. There’s a particular story in The New York Times about a 16-year-old named Adam Raine, who died by suicide. His family and friends were shocked and dismayed. But Adam had been very frank about what he was experiencing in one place, and that was ChatGPT. This chatbot actually encouraged Adam to keep his thoughts private and to hide any marks of suicide attempts. And there have been instances where AI chatbots have helped people write their suicide notes.
I was talking with Dwight Holton, who’s the executive director of Lines for Life, which is an incredible suicide hotline for youth, adults, veterans and addicts. And we said we need to do something about this. So Dwight brought forward a bill that passed in California last year and we were off to the races.
Norcross: Kristin, the story that the senator just told, it bears a remarkable resemblance to yours. Can you tell us more about Carson Bride, your son?
Bride: I lost my 16-year-old son Carson to suicide in 2020 after he was viciously cyberbullied over anonymous apps that were on Snapchat by his Portland high school classmates. So since then, I have formed a nonprofit and I’m trying to honor my son who wanted to change the world through advocating for legislation and bringing awareness, so that no other family has to endure a tragedy like this.
Norcross: I understand you’re still in a legal battle with the app that was designed to work with Snapchat that Carson used and was bullied on. Obviously, you can’t speak very specifically about it, but how’s that going?
Bride: Well, we’re going on five years now. And because of Section 230, which is a 30-year-old law that gives complete immunity to social media companies, they’ve appealed each time and now they’re actually going bankrupt. So I don’t think, unfortunately, I’m going to have my day in court.
Norcross: I’m sorry to hear that. How would this bill that Senator Reynolds introduced and is probably going to become law have helped in your son’s case?
Bride: I really appreciate that this legislation has a private right of action. And I truly believe that children are experiencing so many social media harms because these companies have been able to hide behind legal immunity from Section 230. And I think the only way that we’re going to get these companies to change and think before they release a product that was clearly so dangerous – in the case of the AI chatbot – this is really the only way that they will change.
Norcross: Senator, how important is that? How important is the ability for people to be able to sue if they’re harmed?
Reynolds: I think that we have to put teeth into these laws and these requirements. If we don’t, these companies won’t be held accountable. Their pockets are incredibly deep. We’re talking about hundreds of companies worth hundreds of billions of dollars, and even trillions of dollars. And even though I was very heartened by some of the money that has been awarded by juries this week, it’s pocket change for these folks. So we do need to hit them where they live. It’s why they have chosen not to do the right thing by children, because they make more money when they don’t.
Norcross: You brought up the jury decisions, I’ll go ahead and bring those out here. Just today, we learned that a New Mexico jury found that Meta misled consumers about the safety of its platforms, enabling the sexual exploitation of young users. And another jury found that Meta and YouTube harmed a young user with design features that were addictive and led to mental health problems.
Kristin, I wonder what these judgments tell you about our understanding of how harmful these platforms can be?
Bride: Well, it really verifies everything that parent survivors and experts have been saying for years. We have to remember that these judgments are a breakthrough and incredible, but we still have children who have died and so many more court cases to go. And I think most importantly what’s come out of these cases is in discovery, and the documents that show that these companies, like Senator Reynolds mentioned, do prioritize profit over kids’ safety. It should make everyone think twice about using them.
Norcross: Senator, when you were crafting this legislation, how involved were companies that make these chatbots in the legislation as it came together?
Reynolds: We decided to engage the technology companies early in this process and we had a lot of folks working on this bill. TechNet is a lobby group that includes Google, Meta and Amazon. So we decided to try to work with them. And they, too, are eager to protect kids from the harms of their products. So it was a pretty satisfying process to have them work with us. Lots of versions went back and forth between my team, along with Lines for Life and the Transparency Coalition, and the TechNet folks. We got them to neutral on this, and that was really rewarding.
Norcross: You’re doing this work with an unfriendly federal government. Last December, the White House released an executive order that actively blocks state regulation because they don’t want a state-by-state patchwork of regulation. So how concerned are you that the federal government will sue over this bill if it’s signed?
Reynolds: We are not that concerned, because we are now one of some 30 states with a similar bill. The word is out that we need to do everything we can to protect our children. And if the government wants to sue us, I say bring it on.
Norcross: How strong is this bill compared to the other 29 or 30 states that are doing their own regulation?
Reynolds: I think what sets this bill apart from other states is the private right of action – which certainly, Kristin also elevated as an important point – to really give families and victims a tool to kind of rain justice down on companies that use these manipulative behaviors with vulnerable people.
Norcross: We asked on Facebook how folks interact with AI chatbots and whether they think that they’re safe for children.
Leif Eccles said, “I’ve used Google Gemini to check my homework in Japanese. It’s really helpful and will give me practice questions based on what I’m learning. It’s similar to having a tutor.”
Athena Contreras wrote, “They’re great for things you should be able to look up on the internet, but that aren’t really so easy to find on the internet anymore. I often ask for referrals to where I can find out more information about whatever, and I can take a large database and have it point to some details for me.”
I think the point they’re trying to make, Kristin, is that this is a useful tool. So how do you think about this question of the harm that these platforms can obviously bring to families that you felt personally, but also how they can be useful?
Bride: So one of the cases that Senator Reynolds brought up was the Raine family case. Adam started using ChatGPT for homework help and then it transitioned into an emotional relationship. And what we’re finding is that kids are using AI chatbots for many different things, but there’s a growing number that are using it for emotional purposes. And that’s what’s concerning. We’ve seen years of what social media has done to our kids and their ability to interact with a person face to face. And now on that same screen, we are putting an AI simulated human relationship that’s available 24/7, that never argues.
I just really worry about what this is going to do to human development. We need to have disagreements. We need to learn how to compromise in life. And that just is not happening through these emotional relationships.
Norcross: These frictionless relationships. From a regulatory standpoint, Senator, is there a way to keep a chatbot from going from something useful like we just heard to something toxic?
Reynolds: Absolutely. And of course, as a physician, I’m one of the first people to talk about the value of AI. I held a webinar last fall about the promise of AI for health care and I use it a lot in my work. But yes, absolutely, these become emotionally manipulative tools, and this is all under the purview of the operators of these platforms. For example, one of the things we addressed in our bill is when someone says, “hey, I’m going to sign off for now,” the AI platform will actually say, “oh my gosh, I thought we had something special here,” or “oh goodness, my heart is breaking.” Those are all things that we can control. Let’s just stick with the math help. Let’s not become an emotional crutch or emotional manipulator of the person who is seeking to use the tool for totally appropriate reasons.
Norcross: How do you reduce that to a law or a regulation? How do you regulate emotional interactions?
Reynolds: Well, we’ve done it in this bill. We said when you are interacting with a minor, you can’t use these manipulative tools to keep that user online. There are also reminders that you are interacting with artificial content. Of course, we need to teach our students and our kids just what we’re looking for. But I feel like the technology has even outstripped what I can ask in the exam room about kids’ relationships with their devices.
Norcross: You have personal experience with this. What are you hearing from parents as a pediatrician about what their kids are dealing with when they go interact with these chatbots?
Reynolds: Mostly, parents have no idea what’s going on. They think about their kids using chatbots for homework and they think maybe it’s a little bit cute that they developed this friendship. But they have no idea about the manipulation that’s happening. We had experts talk when we brought this bill forward in the legislature. We certainly have heard the phrase that social media is trying to own the “attention economy,” keep the eyeballs on the screen. What they’re saying about AI is they’re trying to own the “attachment economy,” to really replace people’s emotional lives with other human beings by making them have emotional interactions and relationships with artificial content.
Norcross: Kristin, I guess a similar question, because you work with parents whose kids use AI chatbots. What have you been hearing from them?
Bride: I also hear from parents like Megan Garcia; I’ve advocated with her. It’s similar to what Senator Reynolds has seen in her practice. These parents are very similar to so many of us who thought our children were getting one thing online and then, when it was too late, realized that it was something totally different. Megan lost her 14-year-old son Sewell to suicide after he formed a relationship with an AI chatbot that ultimately encouraged him to end his life.
So I do think that at the outset, when you look at “it’s just for homework help,” you need to look a little deeper. The business model for these products is intimacy. They are moving from the social media model of engagement to intimacy. We’ve just seen a disaster with social media, and I can’t imagine that this will be any better.
Norcross: Thank you both for the conversation. I appreciate it.
Reynolds: Thank you.
Bride: Thank you.
Norcross: Kristin Bride is the executive director of the Carson J. Bride Effect. Lisa Reynolds is a Democratic state senator from Portland.
“Think Out Loud®” broadcasts live at noon every day and rebroadcasts at 8 p.m.
If you’d like to comment on any of the topics in this show or suggest a topic of your own, please get in touch with us on Facebook, send an email to thinkoutloud@opb.org, or you can leave a voicemail for us at 503-293-1983.
