Think Out Loud

OSU’s Guinness World Record-setting robot makes strides for machine learning

By Gemma DiCarlo (OPB)
Oct. 3, 2022 4:55 p.m. Updated: Oct. 17, 2022 7:38 p.m.

Broadcast: Monday, Oct. 3

Cassie, a robot developed by OSU researchers, holds the Guinness World Record for the fastest 100-meter dash run by a bipedal robot.

Courtesy of Kegan Sims

Cassie, a robot developed by Oregon State University researchers, is essentially a pair of sophisticated mechanical legs. But those legs just set a Guinness World Record for the fastest 100-meter dash run by a bipedal robot. Running – even quickly – might seem an easy task for a human, but it takes considerably more effort to teach a machine to do so without tripping or veering off course.

Alan Fern is a professor of computer science and artificial intelligence at Oregon State University and the College of Engineering’s executive director of artificial intelligence programs. He explains what Cassie’s accomplishments mean for machine learning and the future of robotics.

Note: The following transcript was created by a computer and edited by a volunteer.

Dave Miller: From the Gert Boyle studio at OPB, this is Think Out Loud. I’m Dave Miller. If I told you that someone recently set a world record for the 100m dash at an Oregon university, you might assume it happened at Hayward Field at the University of Oregon. Hayward Field is, after all, one of the most famous tracks in the country.

But what if it wasn’t someone, but something? In fact, it was a two-legged robot named Cassie, at Oregon State University. Cassie is essentially just a pair of sophisticated mechanical legs. But those legs just set a Guinness World Record for the fastest 100m dash run by a bipedal robot.

Alan Fern is a professor of computer science and artificial intelligence at OSU. He joins us now to talk about what Cassie’s accomplishments mean for both machine learning and the future of robotics. Welcome to the show.

Alan Fern: Yeah, thanks for having me.

Miller: We’ve talked about Cassie before, but it’s been a little while. Can you remind us what tasks or physical skills Cassie has already been programmed or learned how to do?

Fern: Sure. So just to review, Cassie is a bipedal robot, but only has legs and a small torso. The idea is that, if you want robots to navigate the world that humans inhabit, you’re gonna want bipeds in some places to get around, because humans are bipeds, and the world’s developed for humans. So we need to have bipedal robots, in particular legs, that can get around as robustly as humans.

Miller: Well, what’s an example of a way that not having two legs could be an impediment for a robot?

Fern: One thing is energy efficiency. That’s one of the main differences between us bipeds and other animals. We’re slower, but we’re much more efficient.

Also, navigating tight spaces, or trying to reach up high if you’re a humanoid robot. There are a lot of situations where being on four legs is just going to be an impediment.

Miller: Walking, and even running, can seem like a pretty basic activity for humans without disabilities. What’s the physical challenge for a robot of walking on two legs, and then walking faster and faster, and running?

Fern: I think it’s the same challenge that infants face. Infants grow in strength, but once they’re strong enough to walk, there’s still a lot to learn. Infants have to control up to 200 muscles just to walk. That’s a lot of control signals being sent from your brain, and it takes a lot of time for your brain to learn to balance and then walk.

Same thing for the robot. The robot, in our case, has many fewer things to control. We have 10 motors, five for each leg. But still, that’s 10 electrical signals we need to send to the motors about 2000 times a second in order to get the robot to walk. And if you just send random signals, it’s going to fall down immediately. So that’s the real challenge. How do you figure out the signals to send the robot legs so that they’ll balance, and then walk, and then run, and then skip and go up and down stairs?
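
To make Fern’s numbers concrete, here’s a hypothetical sketch, not OSU’s actual code, of what that control problem looks like in software: a learned policy maps the robot’s sensed state to 10 motor commands, roughly 2000 times per second. The `policy` function here is a labeled placeholder for the learned controller.

```python
NUM_MOTORS = 10        # five per leg, as Fern describes
CONTROL_HZ = 2000      # commands sent about 2000 times per second

def policy(state):
    """Stand-in for the learned controller: maps sensed joint
    state to one command per motor. The real mapping is learned;
    random signals here would make the robot fall immediately."""
    return [0.0] * NUM_MOTORS  # placeholder commands

def control_loop(seconds=1.0):
    """Run the control loop for the given duration; returns step count."""
    steps = int(seconds * CONTROL_HZ)
    state = [0.0] * (NUM_MOTORS * 2)  # e.g. joint positions + velocities
    for _ in range(steps):
        commands = policy(state)       # 10 signals, one per motor
        assert len(commands) == NUM_MOTORS
        # send_to_motors(commands)     # hardware call, omitted in this sketch
    return steps

# One second of control means 2000 rounds of 10 commands each.
```

The point of the sketch is scale: the controller has no time to deliberate, so the mapping from state to commands has to be fast and already learned.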

Miller: I can imagine it’s a mistake to focus too much on the parallels between a human learning how to walk and a robot. But it also seems inescapable. A toddler normally has hands that they can use to grab onto stuff, and they scoot. Every toddler I’ve seen uses their hands as a key piece of learning how to walk. That’s not something you have with Cassie. How did that complicate this process?

Fern: Yeah, that’s interesting. So you’re right, toddlers go through a very interesting and pretty set stage of development where they do use their hands. They scoot around, they crawl, and all sorts of things.

The robot doesn’t have hands. And it’s also much more breakable than toddlers. Toddlers are kind of squishy. They’re built to fall basically. The robot isn’t. That is really what complicates things. If we try to get the robot to learn to walk in the real world, we’re just going to get a broken robot. Toddlers probably go through tens of thousands of falls.

So what we had to end up doing is training the robot in simulation, where it’s safe to fall. And it does this for about the equivalent of a year, but we can do this in a week with parallel computation. And it falls a lot, but in simulation so it doesn’t break. And then we just have to deal with the challenge of, well, simulation is a little bit different from the real world. So you kind of cross your fingers, and you do some engineering tricks to transfer the robot brain from simulation to the real world.

Miller: Wait, so simulation? Meaning the computer programming is good enough that, without actually moving in the real world, you can have the robotic network actually mimic, in a close enough way, what would happen if 10 of those motors move in very minute ways over the course of one second? And you can do that well enough that it would actually translate pretty well to what would happen if those motors were to actually move?

Fern: Yeah. We were surprised as well. You do have to do some tricks, and there are limitations. Actually, this world record, we were trying to push those limitations.

One of the basic tricks is we randomize that simulated world. So think of it as a video game that this robot finds itself in. And we’re going to randomize things. Every trial of this video game, we’re going to randomize the friction of the ground, we’re going to randomize different aspects of its weight that we can’t model exactly in simulation compared to the real world. And so the robot has to learn to be robust to these variations in the simulation. And by doing that, you’re hoping, and it seems to be the case, that it can handle the real world.

If you don’t do that variation, and you just train in one fixed simulation with one fixed set of physics, it does not transfer to the real world nearly as well.
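
The “randomize the video game” idea Fern describes is commonly called domain randomization. A minimal sketch of it, with illustrative parameter ranges that are my own invention rather than the values OSU used, might look like this:

```python
import random

def randomized_physics():
    """Draw a fresh set of physics parameters for each training trial.
    Ranges are made up for illustration, not OSU's actual values."""
    return {
        "ground_friction": random.uniform(0.4, 1.2),   # slippery to grippy
        "torso_mass_kg":   random.uniform(28.0, 36.0), # mass we can't model exactly
        "motor_strength":  random.uniform(0.9, 1.1),   # per-trial scaling factor
    }

def run_trial(physics):
    """Placeholder for one simulated walking trial under the sampled physics."""
    pass

# Every trial sees a slightly different world, so the learned controller
# cannot overfit to one fixed simulation and (hopefully) transfers to reality.
for _ in range(3):
    run_trial(randomized_physics())
```

The design choice here matches Fern’s point: a controller trained under one fixed set of physics learns that exact world, while one trained under constantly shifting physics is forced to become robust.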

Miller: What was your starting point? Did you have to tell the robot, the algorithm that runs this mechanically, “the ultimate goal here is for you to run as fast as you can. And if you fall down, for example, that’s bad. Whatever led to that, don’t do that again”? I’m curious how much of an overall goal you had to give it.

Fern: That’s a very good question, because how do you actually describe what it means to walk and run? That’s something that we actually were challenged by for a couple of years.

What we ended up doing is we’re following this algorithm design paradigm called reinforcement learning, which is inspired by Pavlovian psychology. So in the simulation, we do something very much like what you said. So when the robot falls, we give it a negative reward. It’s not pain or anything, this is just a machine. But it’s a negative number that says “uh, that’s not what we want you to do.”

When it is closer to some sort of movement that looks like running, it gets a larger positive reward. And the closer it gets to following the commands that we want it to follow, for example running, the more reward we give it. This is reinforcement learning, and it gets these rewards, similar to how humans and animals learn, by a series of rewards and reinforcements. And at first, the robot’s randomly flailing around, trying to just balance, getting mostly negative rewards. And by chance, it happens to stand and balance for a second, and it gets a positive reward, and that gets reinforced. And if you do this for a billion time steps in simulation, eventually you get something that is locomoting really well.
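
The reward scheme Fern describes, a negative number for falling and a larger positive number the closer the motion gets to the commanded running, is the core signal in reinforcement learning. A toy sketch, with made-up numbers rather than the actual training code:

```python
def reward(fell, speed, target_speed):
    """Toy reward: punish falls, pay more as speed approaches the command.
    Magnitudes are illustrative only."""
    if fell:
        return -10.0                        # "that's not what we want you to do"
    return 1.0 - abs(speed - target_speed)  # closer to the command, more reward

# Over billions of simulated time steps, behaviors that earn reward
# get reinforced; a short episode might accumulate reward like this:
episode_return = 0.0
for speed, fell in [(0.0, True), (1.5, False), (2.9, False)]:
    episode_return += reward(fell, speed, target_speed=3.0)
```

Early in training almost every step looks like the first entry, a fall and a negative reward; the learning algorithm’s job is to shift behavior toward the later entries.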

Miller: It seems like what we’re talking about here is machine learning and neural networks, phrases I’ve heard a lot over the last couple of years and are always slightly confusing to me. Is this machine learning?

Fern: Yes, this is exactly machine learning. Machine learning uses neural networks, which are just computational programs that learn, loosely inspired by the brain. You don’t want to take the analogy to human learning and the brain very far. But they’re loosely inspired by the brain. And neural networks, for example, are used in your phones. They’re the reason we can talk to our phones and have them somewhat understand us today, and the reason we can have computers look at images now and somewhat understand what’s in those images.

But here, we’re trying to apply those same ideas, the same neural networks and machine learning, for bipedal locomotion in robots. It’s a really interesting paradigm because we’re not programming step by step what each motor has to do. We’re programming a system that can learn. And then the engineering job is to program the environment to train in, and to program the reward signals that you give the system.

It’s really interesting because at the end of the day, this is the only way that we know how to get our robot to walk as well as it currently does.

Miller: This is to me the most interesting and confounding piece of this. I can imagine other ways to try to train a robot to run. Including, say, you could take Usain Bolt, the fastest person in history to run 100m, way faster than Cassie. You could put a suit on him with all kinds of sensors and have him run around a lot and collect all kinds of data, which I imagine would show you what he’s doing with his foot and his ankle and his knee. And then turn that into a set of directions. And you could then tell your robot, “do what the data says Usain Bolt is doing, it works really well for him.” Why wouldn’t that work?

Fern: Well, there’s basic physical differences. That’s probably the number one answer to that right now. The hardware of Usain Bolt and Cassie are very different.

But some of the basic principles are similar. It is interesting: in trying to get the robot to run really fast, we explored a giant span of leg movement frequencies and stride lengths. And in that space, what ended up happening is the sweet spot was almost exactly where the sweet spot is for humans. This is in a paper that we have. But it was very interesting that this frequency versus stride profile for different speeds matched what humans do.

Now we didn’t get inspired from the human profiles, but that’s just what came out naturally. But certainly, in robotics in general, we’re gonna find that learning from humans, for example just doing everyday things around the house, is going to be much more effective if we do actually have cameras watching humans do it and have the robots try to mimic. But something like sprinting, which depends so much on the specifics of the physiology, I’m not sure that that would be as useful.

Miller: So in the end, Cassie ran 100m a lot slower than Usain Bolt, but also a lot slower than the average 12 year old. What are the implications then of this achievement?

Fern: Well, getting the record for the robot wasn’t really the primary objective, that was just a fun thing the students wanted to do. Our primary goal is to figure out how we train robots to do really useful things.

For example, we would love to be able to train robotic systems that can build houses at 10 times the speed we currently can, and at half the cost. That would be great for overcoming some of the challenges we have, or for other infrastructure building projects. But to get there, we have to figure out how to make these systems robust. And training in simulation, then moving to the real world, seems like one of the big places where we can really try to make some progress.

And so what we do is we just try to push the limits of that. So going really fast really pushes the limits of the match between simulation and the real world. And so that was the challenge we picked here. We overcame some of the challenges there, and now we’ll move on to the next challenges.

Miller: Could any of this research translate not to robotics, but to humans? Could it help people with mobility issues, for example?

Fern: Yeah, absolutely. I’ve got a colleague at Oregon State in fact, Dr. John Mathews, who works on brain-muscle interfaces, where you’re trying to decode signals from the brain and then encode them for muscles, for people who have different levels of paralysis. The same type of technology can be used, because those systems need to learn. And it’s a very interesting system, because the human and the computer system are learning in parallel.

And so yeah, you’re gonna see this type of thing used for applications like that, and it’s gonna be great to see.

Miller: In terms of your team’s work, you previously did an endurance test, a 5km run on one battery charge for this robot. And now the speed test. What’s next?

Fern: Well, there’s a couple of things we’re gonna be looking at next. One is, I didn’t mention that Cassie is blind. Cassie only does locomotion by feel. We sort of use the remote control to guide Cassie. Humans can walk just fine and run just fine with their eyes closed.

So what we’re gonna be doing now is putting a camera on Cassie and doing some more interesting things that require perception. For example, stepping stones: being able to go across a river, where you have to plan very precise footstep locations ahead of time. We’re not gonna go across a river with Cassie quite yet. We also want to be able to go to a random sidewalk and have Cassie get to a random front door from that sidewalk without our intervention, using common sense to not walk over flower beds and things like that. So that’s one area.

The other area is going to be moving into humanoids. Cassie is developed by Agility Robotics, which is located in Albany, Oregon. It was founded by Jonathan Hurst, one of my collaborators. They are developing a humanoid called Digit, which has arms and a head. And we’re very excited to start working with Digit to do things such as move boxes around a warehouse, or move the laundry basket from your bottom floor to the top floor of your house. We’re very excited to take the work we’ve done in locomotion, and now add arms and be able to actually do real work.

Contact “Think Out Loud®”

If you’d like to comment on any of the topics in this show, or suggest a topic of your own, please get in touch with us on Facebook or Twitter, send an email to thinkoutloud@opb.org, or you can leave a voicemail for us at 503-293-1983. The call-in phone number during the noon hour is 888-665-5865.

