PITTSBURGH — The biggest potential for robots and automation isn’t in taking jobs from humans, but in making those jobs better by allowing people to do what they do best.
That’s one of the key takeaways from a recent visit to the Human and Robot Partners (HARP) Lab at Carnegie Mellon University in Pittsburgh, led by Henny Admoni, an assistant professor at CMU’s prestigious Robotics Institute.
“A lot of classic automation takes the angle of, how do we get the person out of the way as much as possible. And often when it does think about people, it treats people like dynamic obstacles, or like just another element of the environment to avoid running into,” Admoni explained. “But my argument is that people actually give us a lot that robots can work with, when they’re collaborating.”
This reflects a larger theme that came through in GeekWire’s recent return to Pittsburgh: Due in part to labor shortages, the national conversation about robotics and automation is shifting from threat to opportunity — from putting jobs at risk to filling critical gaps in the workforce.
“[W]e need to think about them as complementing human work, complementing skilled humans, making work better, making human work more fulfilling, more valuable,” said Jeff Wilke, the former Amazon Worldwide Consumer CEO, and current chairman of Re:Build Manufacturing, in an interview published this week.
Admoni and her colleagues take that concept into settings including homes and kitchens, exploring how robots can help people with disabilities, home healthcare workers, and others in their daily lives and work.
Continue reading for highlights from our recent interview with Admoni, edited for clarity and length.
GeekWire: How would you describe the current focus of your research?
Admoni: We think a lot about how robots can be good partners for people. The key element that we bring to our research is that we need to understand people in order to build robots for them. And so a lot of what we do starts from the cognitive science of how people make decisions, how people process information, how people express their intent through body language, or nonverbal behaviors. And then we try to build algorithms that take advantage of those things, or are sensitive to those signals, in order to make the robot smarter and more assistive and more collaborative.
We work in a lot of different domains. I’m particularly inspired by the assistive domain, where robots can help people who have a variety of impairments live more independently, and regain some of the capacity that they might have lost. I think that’s a really exciting domain. I’m now getting into working on AI for older adults, as well. So a very similar concept. But in general, I’m really interested in how we can make robots that are meaningfully better for people.
Much of the discussion about the impact of robotics on jobs happens in the context of industrial or commercial settings. What are the implications for caregivers, and to what extent have the labor shortages of the past couple years changed the dynamics around your research?
It certainly has an effect. We think a lot about the impact of our work on the people that we’re going to deploy it with. We’re having more and more conversations in the field now about who is impacted by this work, and what it means to create a robot that does a particular role.
And the reality is that it doesn’t just affect the person who’s directly interacting with the robot. So in the case of assistive robots, if you give somebody the capacity to eat independently, now the care partner who had been helping them eat, can actually eat themselves. … Now you have the opportunity to let the care partner have social interaction and have a social meal.
A lot of the motivation of my work is, how do we leverage what robots can do for people to free up people to do the things that people are good at. And the shorthand for this is, let robots be robots. Let robots take over the tasks that robots are the best at. Don’t try to replace the human interaction, try to make it so that human interaction is more possible.
So in that context, robots aren’t replacing human workers and taking jobs.
I see robots as much more partnering with people who are doing jobs, and making them better.
Now, there’s no doubt that automation removes jobs. We’ve seen it with ATMs. We’ve seen it with grocery store checkouts, and we’re going to continue seeing it. It’s a really big problem. And I think it is important that we talk about what we do with people who are displaced by these automation systems.
But in my dream world, we would be building technologies that are not replacing jobs, but that are making people more effective at their jobs and freeing them up to do more of what they are good at in their jobs.
I would imagine that the biggest challenges are in the area of AI and cognitive systems, because the hardware can do what the hardware can do, but it has to know what to do.
From my perspective, yes. I think if you asked a hardware person, they would say that getting the right kind of gripper is really challenging. We saw this in the Amazon Picking Challenge a few years ago, where the winning gripper was a suction cup. The team hyper-specialized to the task the robot had to do, which was picking up specific kinds of objects, and a suction cup could achieve that. But that’s not going to work for feeding someone.
What I’m really excited about are the algorithmic and AI challenges. How do you identify what somebody’s trying to do so that the robot can pick the right assistive action to support them? How do you personalize to people? People might want things done in different ways or, more interestingly, their ability to signal the robot might change over time.
So in the realm of AI, what is the biggest challenge you would pursue right now if you were given unlimited resources?
There are so many interesting challenges. Long-term in-home robotics is a big, open challenge. Things like adaptation and personalization, communication between people and robots, building rapport and trust, making robots explainable. This falls into the Explainable AI space, so that people can understand why a robot did what it did. And they can decide if they want to use the robot for a particular situation or not. I think that would be super interesting.
The other application I think is really pressing right now is autonomous vehicles, and how people interact with them, both as passengers inside the car and as the people surrounding it, such as pedestrians or other drivers.
What’s the state of the art in the field of automation right now?
We are very good at automating in structured environments. With things like automating factories, having robots that build cars, or robots that handle difficult or toxic materials in a fixed way, in an environment where there are no other obstacles, we’ve maxed out as a field. I think there are iterations that we could do. But all of those iterations are about, how do you deal with uncertainty? How do you deal with an object coming down the conveyor belt in an orientation you don’t expect?
Apparently you use a suction cup.
Yeah, I don’t think Amazon got what they wanted out of that challenge.
So I think in terms of automation, if we have a structured environment, we are excellent in the field.
As soon as you start to add uncertainty, we start to see that we still need some innovation, both in the perception of that uncertainty and acting on that uncertainty. And then once you add humans into the environment, I think we’re not really that close to seeing robots out really autonomously in the world.
We do see some. But honestly, the best autonomous robot out in the world, in the human environment right now, is the Roomba. And it’s been the best robot we’ve had for 15 years. Because it does one thing. And it does it really well. And it doesn’t try to interact with people in any meaningful way.
On that topic, Amazon seems to be positioning its Astro home robot as both a companion and a security system. To what extent does that reflect the broader trends that you’re seeing?
Astro is the latest in a long line of commercial home robots, like Jibo, and Kuri from Mayfield Robotics, that were trying to be a multipurpose social companion. And I think that is a very hard market role to fill. If anybody can do it, Amazon can, because they have so much money and so many resources behind them.
But I think it’s a mistake to try to position a security robot as also this cute companion that you can talk to that will play music for you and remind you about your events. Let robots be robots. Let it be a security robot, if that’s what you want, but then don’t make it look like a dog. Because people have expectations of what a pet is like. And I think that’s going to trip them up as they’re trying to market it to people. I think it’s trying to do too much.
What else is important to get across about your work?
I also think a lot about equity in robotics. There are more and more conversations now about AI and diversity and equity, and I think the same is true in robotics. Anytime we present technological systems that are trained on real-world data, or that are trying to execute real-world tasks, we’re at risk of integrating real-world biases.
So something that we also think about is, how do we use robots to make things more equitable for people, to make it so that somebody who doesn’t have family nearby to take care of them is also taken care of as well as somebody who is lucky enough to have a lot of support resources and things like that. I think that’s possible.
But I think it’s very easy to go the other direction, and end up with AI systems that can’t do facial recognition on people of color. And so we’re constantly thinking about the role of our technology in terms of equity in society.