February 17, 2023
Anca Dragan of UC Berkeley
I discovered AI by reading “Artificial Intelligence: A Modern Approach” (AIMA). What drew me in was the idea that you could specify a goal or objective for a robot, and it would figure out on its own how to sequence actions to achieve it. In other words, we don’t have to hand-engineer the robot’s behavior; it emerges from optimal decision making. Throughout my career in robotics and AI, it has always been satisfying when the robot autonomously generated a strategy that I felt was the right way to solve the task, and even better when the optimal solution took me a bit by surprise. In “Intro to AI” I share with students an example of this, where a mobile robot figures out it can avoid getting stuck in a pit by moving along the edge. In my group’s research, we tackle the problem of enabling robots to coordinate with and assist people: for example, autonomous cars driving among pedestrians and human-driven vehicles, or robot arms helping people with motor impairments (together with UCSF Neurology). And time and time again, what has sparked the most joy for me is when robots figure out their own strategies that lead to good interaction: when, as in the work your very own faculty Dorsa Sadigh did in her PhD, we don’t have to hand-engineer that an autonomous car should inch forward at a 4-way stop to assert its turn. Instead, the behavior emerges from optimal decision making. So for this seminar, I'd like to step back a bit. Rather than going through one particular piece of research, I will take the opportunity to share what I've found to be the underlying optimal decision-making problem formulation for human-robot interaction (HRI), and to reflect on how we've set up optimal decision-making problems that require the robot to account for the people it is interacting with, along with the surprising strategies that have emerged along the way.
This has come back full circle for me, as I got to include some of this perspective in the very book that drew me into the field, by editing the robotics chapter for the 4th edition of AIMA.
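The "avoid the pit by moving along the edge" behavior mentioned above is in the spirit of the classic 4x3 stochastic gridworld from AIMA. As a minimal sketch (the specific world, rewards, and noise level here are illustrative assumptions, not details from the talk), value iteration on such a world produces a policy that detours around the pit without anyone hand-engineering that route:

```python
# Minimal value-iteration sketch on an AIMA-style 4x3 gridworld.
# All parameters (rewards, slip probability, step cost) are illustrative.
ROWS, COLS = 3, 4
GOAL, PIT = (0, 3), (1, 3)      # terminal cells: +1 at the goal, -1 at the pit
WALL = {(1, 1)}                  # one impassable cell
ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}
NOISE = 0.2                      # probability of slipping perpendicular to the move
STEP_COST = -0.04                # small living cost per step
GAMMA = 1.0

def step(state, delta):
    """Deterministic move; bumping the border or a wall leaves you in place."""
    nxt = (state[0] + delta[0], state[1] + delta[1])
    if not (0 <= nxt[0] < ROWS and 0 <= nxt[1] < COLS) or nxt in WALL:
        return state
    return nxt

def transitions(state, action):
    """Stochastic outcomes: the intended move plus two perpendicular slips."""
    dr, dc = ACTIONS[action]
    outcomes = [(1 - NOISE, step(state, (dr, dc)))]
    outcomes += [(NOISE / 2, step(state, p)) for p in ((dc, dr), (-dc, -dr))]
    return outcomes

states = [(r, c) for r in range(ROWS) for c in range(COLS) if (r, c) not in WALL]
V = {s: 0.0 for s in states}
V[GOAL], V[PIT] = 1.0, -1.0

for _ in range(100):             # iterate the Bellman update to near-convergence
    for s in states:
        if s in (GOAL, PIT):
            continue
        V[s] = max(
            STEP_COST + GAMMA * sum(p * V[s2] for p, s2 in transitions(s, a))
            for a in ACTIONS
        )

# Greedy policy with respect to the converged values.
policy = {
    s: max(ACTIONS, key=lambda a: sum(p * V[s2] for p, s2 in transitions(s, a)))
    for s in states if s not in (GOAL, PIT)
}
```

The point of the example is the one the abstract makes: nothing in the code says "skirt the pit"; the detour falls out of maximizing expected return under the slip noise.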
About the speaker:
I am an Associate Professor in the EECS Department at UC Berkeley. My goal is to enable robots to work with, around, and in support of people. I run the InterACT Lab, where we focus on algorithms for human-robot interaction -- algorithms that move beyond the robot's function in isolation, and generate robot behavior that coordinates well with people, and is aligned with what we actually want the robot to do. We work across different applications, from assistive arms, to quadrotors, to autonomous cars, and draw from optimal control, game theory, reinforcement learning, Bayesian inference, and cognitive science. I also helped found and serve on the steering committee for the Berkeley AI Research (BAIR) Lab, and am a co-PI of the Center for Human-Compatible AI. I've been honored by the Sloan Fellowship, MIT TR35, the Okawa award, an NSF CAREER award, and the PECASE award. Learn more: people.eecs.ber...
Stanford Seminar - Robotics algorithms that take people into account