Think Out Loud

People with motor impairments help develop robotic feeding assistant at University of Washington

By Sheraz Sadiq (OPB)
March 31, 2025 1 p.m. Updated: April 7, 2025 9:27 p.m.

Broadcast: Monday, March 31

Jonathan Ko is a Seattle attorney with a spinal cord injury. He is also collaborating with robotics researchers at the University of Washington to develop a robotic arm that can assist people with motor impairments in feeding themselves. He brought the Assistive Dexterous Arm into his home in Seattle in September 2024 for a week to feed him meals such as a breakfast of avocado toast, which he is shown doing in this photo, while controlling the device with a mouth-operated joystick.

Taylor Kessler Faulkner / Personal Robotics Lab, University of Washington


For about 10 years, researchers at the University of Washington’s Personal Robotics Lab have been developing a robotic arm that can help people with motor impairments, such as quadriplegics, feed themselves. That’s a task they may rely on human caregivers to do. The Assistive Dexterous Arm can be mounted onto a surface such as a power wheelchair or hospital table. With vision and touch sensors, ADA can determine how to best grasp and maneuver a bite of chicken or watermelon, for example, toward a user’s mouth.

The lived experiences of people with disabilities are often ignored in the development of new technologies that could benefit them, according to Amal Nanavati, a recent PhD graduate from the UW’s Paul G. Allen School of Computer Science & Engineering. But that isn’t the case with the ADA project. Dozens of people with motor impairments have provided feedback and guidance on it over the years, and some have even taken on the role of “community researchers” working alongside the UW robotics team.

Jonathan Ko is a Seattle attorney with a spinal cord injury. He is also collaborating with robotics researchers at the University of Washington to develop a robotic arm that can assist people with motor impairments in feeding themselves. He brought the Assistive Dexterous Arm into his home in Seattle in September 2024 for a week to feed him bites of food in a variety of conditions, such as a snack of fruit, which he is shown doing in this photo, while reclining in bed.

Taylor Kessler Faulkner / Personal Robotics Lab, University of Washington

Jonathan Ko is a Seattle-based patent attorney and ADA community researcher who brought the device home to feed himself meals for a week. He and Nanavati are authors on a recently published paper describing this real-world testing of the technology. They join us to talk about what they learned and share their thoughts on the future of robot-assisted caregiving.

Note: The following transcript was transcribed digitally and validated for accuracy, readability and formatting by an OPB volunteer.

Dave Miller: This is Think Out Loud on OPB. I’m Dave Miller. People with spinal cord injuries and other major motor impairments often rely on caregivers to feed them, but for the last 10 years or so, researchers at the University of Washington’s Personal Robotics Lab have been developing a robotic arm that can help instead. When combined with vision and touch sensors, the Assistive Dexterous Arm, or ADA, can determine how to best skewer a bite of chicken or a piece of watermelon, and then maneuver it towards a user’s mouth.

Amal Nanavati is part of the team that’s doing this work. He’s a recent PhD graduate from UW’s Paul G. Allen School of Computer Science & Engineering. But it’s not just traditional researchers who are part of this team. They’ve been working with community researchers as well, people with lived experience of paralysis and other mobility issues. Jonathan Ko is one of them. He is a Seattle-based patent attorney who brought the device home to feed himself meals for a week.

He and Nanavati are authors on a recently published paper, and they both join us now. It’s great to have both of you on Think Out Loud.

Amal Nanavati: Thank you for having us.

Jonathan Ko: Thank you.

Miller: Jonathan, first – what’s at stake, broadly, in this project? I mean, why is it helpful to have a robot assist in feeding?

Ko: I guess I’d start with saying a little bit about my background. I’m a C2 quadriplegic, which means that I’m paralyzed from the neck down. I’m lucky that I can actually breathe on my own. Typically, people with the injury that I have need to be assisted with breathing with a ventilator. We can’t do a whole lot on our own, physically that is.

I actually can’t feed myself on my own, so having a device that can feed me would enable me to eat when I want to and what I want to, as opposed to having to ask someone else for help. I think it would be … I’d be able to eat on my own. And if I have a machine that can help me, or a robotic arm that can enable me to eat, it would help my caregivers out if they were busy or if I didn’t have a caregiver to do this task.

Miller: How do you manage meals right now, with the necessity of human caregivers? What’s the schedule like and what’s the process like?

Ko: It’s busier than you would probably imagine. I don’t know, I guess there are parents out there that have to feed their children, so it’s similar to that. There’s a schedule and you need to make time for it, and in my case, I need to arrange the caregiving. I also have to find a way to employ somebody to do so. So, it can be a lot. I remember when I first moved to Washington, I didn’t have full-time caregiving at the time, so it would actually mean I wouldn’t know if a meal was coming or not.

Miller: Because you’re completely beholden to the person who’s coming, and if they weren’t there, you didn’t know if you would eat.

Ko: Yeah, exactly.

Miller: Amal, you’ve talked to a number of other people who could use this technology. I’m curious what themes came up, in terms of what this would mean? We’ll get to all the details of how it works and the engineering questions, but in terms of the personal ones, I’m curious what other stories you’ve heard?

Nanavati: Of course, that’s a great question. First of all, in terms of the numbers, at least 1.8 million Americans need assistance eating. And these are numbers actually from 2010, so, presumably it might have gone up, given the population increases as well.

Of the people we’ve talked to, some themes that tend to arise are feelings of self-consciousness. For example, when you’re eating with friends and there’s a conversation going on, you don’t want to need to keep interrupting the conversation to tell your caregiver to feed you a bite. And we’ve talked to multiple users who, because of that feeling of self-consciousness or embarrassment at interrupting the conversation, often just opt not to eat, or to barely eat at all, in social settings to avoid that.

We’ve talked to people who can feel a sense of pressure from their caregivers, like maybe they want to eat at a particular rate, but the caregivers are feeding them a lot faster and they just have to deal with it essentially, because that’s what is necessary or that’s what ends up feeling more comfortable in the moment than continuing to push back against one’s caregiver.

Now, to be clear, it is certainly not all negative experiences. We’ve also talked to people who feel mealtimes can be moments of real connection with your caregivers and with your community, so the goal of this project is really to give people the option to be able to feed themselves independently using this robotic aid, at times when they would like to.

Miller: So what is this Assistive Dexterous Arm, or ADA?

Nanavati: ADA consists of a robotic arm that attaches to a power wheelchair or other surface. Now, the base robot arm itself is actually sold commercially by a company in Canada, Kinova. It’s the Kinova Jaco Arm, and the commercial version is meant to be an aid for people who are in wheelchairs. In the commercial form, though, every single motion of the robot arm needs to be directly controlled, directly commanded by the user. That’s something that we in robotics call teleoperation.

We’ve talked to users who’ve tried to use this arm to feed themselves, and it takes them anywhere from five to 45 minutes to feed themselves a single bite.

Miller: Because you have to say, “Move this 2 inches forward, then 1 inch down, and then an inch diagonally and then a quarter inch this way.” Every single tiny micro-movement – which you wouldn’t normally think of, because it’s happening without too much thought – all has to be specified.

Nanavati: Exactly, every single micro-movement, including also the angle of rotation of the fork. The other thing you have to keep in mind is the vantage point of the user.

If the user is sitting in a particular position, the plate is somewhere farther from them, so they’re seeing it from a particular angle. It can also be quite easy to miss it, because of how your depth perception is, given where you are and where the plate is. As you mentioned, the challenge is really the fine-grained manipulation skills necessary to actually acquire food and feed yourself.

Miller: So, that’s the off-the-rack version of this assistive robotic arm. What has your team added on?

Nanavati: We are working on enabling the robot to autonomously pick up food items and feed them to the user, essentially doing it by itself. In order to do that, we have added on a couple of sensors. Specifically, there’s a camera in the wrist that it uses both to see the food items – the color of the food item, reflections, etc., different visual elements – and to see how far away the food item is. Very similar to the types of information that, for example, our eyes give us.

Then, we’ve developed a custom fork that the robot gripper can hold onto, and in that fork, there’s a force torque sensor. This measures the forces that the fork is experiencing as the robot is, for example, acquiring food. So it can know, has the robot skewered the food? Am I putting the right amount of pressure for this food item? Something like a cherry tomato versus a carrot, you want to put very different amounts of pressure, because you don’t want to squash the cherry tomato, but you want to make sure to pierce the carrot. Those are some of the sensors we’ve added.
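Note: to make the force-sensing idea concrete, here is a minimal Python sketch of force-thresholded skewering. The food classes, the force limits, and the read_wrench/move_down helpers are all hypothetical illustrations, not the lab’s published code.

```python
# A minimal sketch of force-thresholded skewering, under assumed helpers:
# read_wrench() returns the fork's current force/torque reading, and
# move_down(step_mm) nudges the arm toward the plate. Values are illustrative.

# Per-food vertical force limits, in newtons (hypothetical numbers).
FORCE_LIMITS_N = {
    "cherry_tomato": 1.5,  # pierce gently so the skin doesn't burst
    "carrot": 8.0,         # a firm food needs far more force to pierce
}

def skewer(food_class, read_wrench, move_down, max_depth_mm=15.0):
    """Push the fork down until the force limit or target depth is reached."""
    limit = FORCE_LIMITS_N[food_class]
    depth = 0.0
    while depth < max_depth_mm:
        if read_wrench().force_z >= limit:   # downward force on the tines
            return True                      # pierced, or stopped at a safe limit
        move_down(step_mm=0.5)               # small, controlled descent
        depth += 0.5
    return False                             # never met resistance: likely a miss
```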

We’ve done a lot of work on the algorithms, in terms of, how should the robot actually move to pick up food items? Something as subtle as what the angle of the fork is can really impact it. If you take something like a grape, the fork tines actually need to be perfectly vertical relative to the grape as you’re pushing down, otherwise the grape might just roll away.

Miller: Just to be clear, there’s no second arm that’s holding a knife or spoon to wedge it in place. It’s one arm, with one fork, going down onto one grape, in your example.

Nanavati: Yes, you are exactly correct. As of now, our system consists of one arm. So, with the grape, your tines need to be perfectly vertical relative to the grape, otherwise it’ll roll away. But if you were to try that same strategy on a cut piece of banana, for example, with the tines vertical, as soon as you start lifting the fork off of the plate, the banana piece will just slide off, because that has a very different texture to it. So you need to approach the banana from a very different angle in order to successfully pick it up. These subtle nuances matter a lot, and we spent years developing the algorithms for that as well.
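Note: the angle-selection logic Nanavati describes might look something like the following sketch. The food classes and pitch values here are invented for illustration and are not the team’s measured parameters.

```python
# A hedged sketch of per-food approach planning: vertical tines for a grape,
# an angled approach for a banana slice. All values are assumptions.
import math

APPROACH_PITCH_RAD = {
    "grape": math.pi / 2,         # tines straight down, so it can't roll away
    "banana_slice": math.pi / 4,  # angled, so the soft slice doesn't slide off
}

def plan_approach(food_class, food_position_xyz):
    """Return a (position, tine pitch) target for the skewering motion."""
    pitch = APPROACH_PITCH_RAD.get(food_class, math.pi / 3)  # default mid angle
    return food_position_xyz, pitch
```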

Miller: Does the system at this point know that … let’s say that there’s a fruit salad and there are all those different cut up pieces of fruit there. Does it know that, this is a banana, this is a grape, this is a strawberry, or would it have to be told that?

Nanavati: As of now, it doesn’t explicitly know that, and that’s partly based on conversations with our users, where multiple users told us they want the level of control where they can specify the exact bite the system picks up for them. To be clear, preferences vary across users. Some users would want to say, “Hey, just give me a bite of pizza.” And they don’t care what bite of pizza they get. But we decided to start with this system where users specify the bite.

With that said, I will add that the robot implicitly learns over time some of these aspects. We have a learning system where, if the robot fails to acquire a food item, it then updates its belief about what sort of motion would be useful for a food item that looks like that and feels like that. And then it can have a better guess of how to pick it up the next time. Implicitly, it is learning certain correlations between visual aspects and haptic touch aspects of the food, but it doesn’t explicitly have a label like, “this is a banana, this is a grape” at this moment.
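Note: the implicit learning Nanavati describes could be pictured as a simple bandit-style update over skewering strategies, as sketched below. This is a generic illustration, not the lab’s published algorithm; the feature vector is assumed to combine visual and haptic measurements.

```python
# A minimal, generic sketch of learning from failed bites: one linear success
# model per skewering strategy, updated after each attempt. Illustrative only.
import numpy as np

class AcquisitionLearner:
    def __init__(self, n_actions, n_features, lr=0.1):
        self.lr = lr
        self.weights = np.zeros((n_actions, n_features))  # one model per strategy

    def choose_action(self, features):
        # Pick the strategy predicted most likely to acquire this food item.
        return int(np.argmax(self.weights @ features))

    def update(self, action, features, succeeded):
        # Nudge the chosen strategy's prediction toward the observed outcome,
        # so foods that look and feel similar get a better guess next time.
        error = float(succeeded) - float(self.weights[action] @ features)
        self.weights[action] += self.lr * error * features
```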

Miller: Jonathan Ko had this robotic device feed him 10 different meals – breakfasts and dinners – at his own home. Jonathan, how did it go?

Ko: I thought it was great, actually. The experience also taught me a little bit. I was hoping that this robotic arm would be able to be used while doing other activities. I’m maybe a workaholic, and I was hoping to be able to eat while doing work, thinking robotic arms and automation – these things make work generally more efficient – and I had to change my point of view a little bit. I think it was a good thing, because my view is a little skewed, and I think when people eat, they should really be eating and enjoying their food. I think my view of, “oh, everything is just gonna become super efficient,” is kind of unrealistic and probably not a healthy point of view.

Miller: I mean, what you’re asking for is something that people who don’t have paralysis or other mobility issues may take for granted. So it’s not like you’re asking for something that’s unheard of or unusual in the rest of society. What were the challenges that you did run into? When did the system struggle?

Ko: Well, one of the issues that came up was, the system needs to recognize your face, and in particular, it needs to recognize your mouth, so it can bring the food towards your mouth. And in different environments the lighting is different. I have, for example, a cup that has a face on it. So recognizing which face to bring the food to was an issue.
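Note: one simple way to avoid spurious targets like the face on a mug is to filter detections by confidence and size before choosing a mouth target, as in this hypothetical sketch; the detection fields are assumptions, not the project’s actual face-tracking code.

```python
# A hedged sketch of target selection among multiple detected "faces".
# Each detection is assumed to expose confidence, box_area and mouth_xyz.

def pick_user_mouth(detections, min_confidence=0.9):
    """Choose the most plausible user face among all detections."""
    # A printed face on a cup tends to score lower than a live face.
    candidates = [d for d in detections if d.confidence >= min_confidence]
    if not candidates:
        return None                       # no safe target: don't move the arm
    best = max(candidates, key=lambda d: d.box_area)  # largest face wins
    return best.mouth_xyz                 # 3-D mouth target for bite transfer
```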

Miller: Did the robot try to feed your mug?

Ko: It came close. When it starts to do wild things – one of the innovations is the safety features – we have a stop switch that I would hit. And if it was doing something I would consider crazy, we would just hit the kill switch.

Miller: And how would you hit that switch? As you described earlier, you’re paralyzed from this spinal cord injury, from the neck down. So, are you able to stop it yourself?

Ko: Yes, we have a lot of adaptive switches, and in fact I have a number of devices and switches around my head. The one that I actually chose to use to control the robot is a QuadJoy, which is like a joystick that one sips and puffs on to click. I also had an assistive switch mounted to either side of my head, where I could hit the switch and it would effectively stop the robot.
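Note: the stop-switch behavior Ko describes can be pictured as a loop that interleaves small motion steps with frequent switch polls, as in this sketch. The step_motion/read_switch/halt hooks are hypothetical stand-ins, not the system’s real control interface.

```python
# A minimal sketch of an e-stop check around the robot's motion, assuming
# read_switch() reflects the user's adaptive switch and halt() stops the arm.
import time

def run_with_estop(step_motion, read_switch, halt, poll_hz=100.0):
    """Run a motion in small steps, stopping immediately if the switch fires."""
    period = 1.0 / poll_hz
    while not step_motion():      # step_motion() returns True when finished
        if read_switch():         # user hit the head switch: stop right away
            halt()
            return False
        time.sleep(period)
    return True
```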

Miller: We were talking earlier about bananas, cherry tomatoes and other bits of food. Is there a particular dish that you would be most excited to be able to have this robot feed you?

Ko: Yes, and I think this one would be a challenge. I don’t think the robot can do this yet, but I find it amusing: it would be ramen, because it is basically a hot, soupy thing with noodles in it, which would be very difficult to do. So I don’t think it’s there yet.

Miller:  Amal, from the engineer’s perspective, what would it take to be able to have a robot effectively, efficiently, neatly feed someone ramen?

Nanavati: That’s a great question. As you know, we started by focusing on a fork because lots of different food items could be handled by that. One thing Jonathan had mentioned is he really likes pizza, and the robot fed him a dinner of pizza, for example. But with ramen, first of all, we’d need multiple tools. Right now our robot only uses a fork, we’d need to not only add a spoon, but enable the robot to switch between these different tools, which is a really interesting technical challenge for research to work on.

Another really challenging thing about ramen is that not all of the food is visible to the robot from the top of the bowl. Noodles, veggies, pieces of meat, etc. can all be below the broth. The robot would need to rely a lot more on its force torque sensor to really feel the food and use that information to actually acquire it. Of course, with ramen, drips would also be a particularly big issue. So we’d need to think through how we can potentially create a guard under the utensil – you could think of it as a bib for the fork, or a bib for the robot – that helps prevent some of those drips.

Those are some of the challenges that come to the top of my mind, but this is something that is a really active area of research and I’d love it for the next time we bring the robot to Jonathan’s house, for it to be able to feed him ramen.

Ko: I’d also like to mention that they are really innovative. This team is incredible. One of their members, I think Ray, was working on a compliant spork-type device which might be able to do this, and also add safety features so it’s not going to hurt somebody if it’s a device that gets put inside somebody’s mouth … a utensil.

Miller: Amal, I’m curious … I mentioned that Jonathan is one of the community researchers that you and your team have been working with, and also listed as one of the authors on this recent paper. How common is it for community researchers to be this involved in a study?

Nanavati: It is relatively uncommon as of now, but that is changing, which is a very welcome development. A couple of years ago, some of my collaborators and I did a systematic survey of papers on physically assistive robots for people with disabilities. About half of them had no involvement whatsoever from anyone with the disability they were focusing on. No community members were represented at all in the research, which is clearly a huge shortcoming.

The other half had lots of people involved as what we would call standard participants. For example, you might have already developed the technology, and then you involve people with motor impairments just to evaluate it at the end. The other common approach is the formative study, where you involve people at the first stage to understand what their needs and priorities are, but then it’s just the engineers making the decisions of how to actually design the system.

Those two approaches are much more common, whereas what we refer to as a community researcher is really where people from the community are involved from the start to the end. So, the ideas we’re working on, the research questions we’re asking, are co-developed with community members. The technology is co-designed with them. Lots of our crucial technology decisions, like developing a web app that people can use to interact with the robot, developing an alternate mount for the robot so it can actually go onto a wheeled table, not just a wheelchair – ideas like that all came from our community researchers.

Community researchers are then involved in analyzing and understanding the data, and thinking about, “how should we present this to the world,” and then ultimately, co-authoring the paper. This is a much deeper involvement that ensures that, at every step of the way, our research is really grounded in the needs, priorities and perspectives of the target community. As I mentioned, more and more labs are starting to do this, and I’m really excited to see where that will lead.

Miller: Jonathan, what has it meant to you to be a part of this? And, well, maybe the earlier question is, do you feel like your experiences, your expertise, your knowledge has been taken seriously by Amal and his team?

Ko: I do. And I think Amal is really quite brave, taking the robot into the wild, as they say, because it’s … For me, I had some needs that might not have been addressed if you hadn’t done so. I don’t always eat all my meals in my wheelchair and I feel like a lot of the community that can’t feed themselves might not necessarily be getting into wheelchairs. So, as you mentioned, having the robotic arm mountable to a bedside table addresses a bigger community. And I think understanding my needs and the community’s needs is an important part of this research. So, I do feel like there is a contribution.

Miller: Amal, what are other examples of ways that either Jonathan or other community researchers really explicitly changed the course of this work?

Nanavati: Jonathan already mentioned the bedside table example. Another one is, we were trying to understand how we let the users interact with the robot. Multiple users we talked to said, “hey, I already have so many different assistive technologies around my head” – like Jonathan mentioned, for example, a mouth joystick and an adaptive switch – “I don’t want more things to be added,” which makes a lot of sense.

So we were trying to think, how can we give users the level of control they want over this robot, the ability to customize this robot to their needs and preferences, given that preference to not add more technologies around your face? And a suggestion that came from one of our other community researchers, Tyler, was to actually do this by developing a web app, essentially an app that can run in any browser.

The reason for this is that he used to run a nonprofit for assistive technologies, and in his experience, many of the people who want to use assistive technologies to interact with computational devices want to learn to use their smartphones, their tablets and their laptops. People often already have assistive technologies around their face that make their laptops, smartphones or tablets accessible. By developing a browser app, we can allow them to use those already familiar assistive technologies to also interact with the robot in a very rich way. That’s another example of an insight that I really don’t think we would have gotten without the specific expertise and lived experiences of our community researchers.
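Note: the browser-app idea can be illustrated with a tiny web endpoint like the one below, written with Flask for concreteness. The route, the payload shape and the RobotBridge hook are all hypothetical; the point is only that a plain web page is already accessible to whatever assistive technology the user relies on.

```python
# A hedged sketch of a browser-controllable robot endpoint. Because the UI is
# ordinary HTTP + HTML, existing assistive tech (sip-and-puff, switch scanning,
# eye gaze) that already drives a browser can drive the robot too.
from flask import Flask, request

class RobotBridge:
    """Hypothetical stand-in for the real robot control stack."""
    def execute(self, command: dict) -> None:
        print("would run:", command)

app = Flask(__name__)
robot = RobotBridge()

@app.post("/command")
def command():
    # e.g. {"action": "acquire_bite", "target": "top_left"} from the web UI
    robot.execute(request.get_json())
    return {"status": "ok"}
```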

Miller: Amal, how far away do you think this technology is from being released commercially, or at least much more broadly?

Nanavati: Our goal with this deployment was to build an MVP – a Minimum Viable Product – that can be taken into users’ homes. And yes, there were certainly some shortcomings that were highlighted, like the issues with face detection Jonathan mentioned. But the flip side is that there was a large variety of meals – pizza, chicken teriyaki, fruit salad, a charcuterie board, etc. – that the robot was able to feed him quite well. And across all of them, Jonathan was able to eat the entire meal.

I think with a dedicated team of a couple of engineers and researchers, on the technical side, we could polish the system to be ready for more people to use in a few years, perhaps. What I think is the greater challenge here actually extends beyond the technical side: driving down the cost so it is a worthwhile purchase, working with insurers to align with their criteria for coverage, and working with regulatory and policy agencies to craft standards that make sense in this assistive robotics space. And that’s something that we as a research community are increasingly starting to engage with.

With all of that in mind, I can’t quite give a year number, but I think this is something that we’ll keep working towards, and hopefully in the near future, many more people will be able to have robotic assistive aids helping them do activities of daily living.

Miller: Jonathan, in terms of activities of daily living, what else would you like to see? What do you see as the most hopeful future of robotic or AI-powered caregiving?

Ko: That is a very difficult question and I think, in particular, it’s also difficult because there are some issues that we don’t like to speak about publicly. I mean, there’s intimate aspects of caregiving that are difficult to share publicly, and there’s some other needs, too, that are just swept under the rug, especially in terms of getting things paid for. That’s just always a difficult fight, I guess.

Miller: Not to put you on the spot, but something that caregivers of various ages often have to deal with is issues around toileting. Is that an area where you could see a desire to have nonhumans help out?

Ko: I wish we could just talk about this openly. It’s a weird area where people don’t really want to even engage sometimes. So yes – though, as I was just gonna say, some people might not actually want a non-human doing something down there, as a personal preference. But I think, speaking for myself, we’d rather not have anybody else doing it anyway; we’d rather do it ourselves.

Miller: Right.

Ko: But it might not be possible, right?

Miller: As you say, and I don’t mean to push you on this, and I’ll stop … but it gets to a challenging issue here, which is that the exact thing that might concern people about this technology – its lack of humanity – is part of what can make it attractive in other ways. Even if we’re simply talking about feeding, as you were saying, to not be beholden to a caregiver for this. It seems liberating, freeing, giving the person who needs help more autonomy. But there are deep lingering concerns, I think, about what it means to be increasingly reliant on these non-human actors.

Amal, as a researcher and as somebody who does not deal with paralysis, how do you think about these issues?

Nanavati: I think there are lots of very valid concerns when it comes to AI and robotics. And as we’ve discussed in this conversation, there’s lots of promising benefits to it, such as being able to feed yourself independently. The way I see it is, this is a tool that is intended to help people who would like to use a tool for those tasks. I think in order to really make sure we’re taking these perspectives into account, our research needs to be deeply grounded in the needs and priorities of the target community. That’s where community researchers come in.

But beyond that, I think we need to be a bit humble about the fact that we’re not building one robot to solve it all, necessarily, but rather, in certain contexts, this robot could be very useful and could help people feel a sense of empowerment, or achieve certain tasks by themselves.

One thing Jonathan was telling me after the deployment was, “Hey, for breakfast in the morning, I’m in such a rush to get to work, I’d rather not use this type of robot feeding system because it takes additional time to set up or to eat than to just have a caregiver feed me. But for something like dinner, where I’m just relaxing and watching a movie, or talking with a caregiver, it’s much more enjoyable and empowering to use such a robot.”

That is very humbling because I think we, as roboticists, try to build one robot to solve everything, but the reality is that lives are very complex, very contextual, and people will use these tools in a context where it makes sense. And even that can be enough to be a very beneficial addition to the toolkit that people use as part of care routines.

Miller: Amal and Jonathan, thanks very much.

Nanavati / Ko:  Thank you for having us.

Miller: Amal Nanavati is a recent PhD graduate of the University of Washington’s Paul G. Allen School of Computer Science & Engineering. Jonathan Ko is a Seattle-based patent attorney and a community researcher on the University of Washington’s Assistive Dexterous Arm project.

Contact “Think Out Loud®”

If you’d like to comment on any of the topics in this show or suggest a topic of your own, please get in touch with us on Facebook, send an email to thinkoutloud@opb.org, or you can leave a voicemail for us at 503-293-1983. The call-in phone number during the noon hour is 888-665-5865.
