Human-Robot Navigation using Event-based Cameras and Reinforcement Learning

1Department of Electrical Engineering, Universidad de Chile, Chile, 2Advanced Mining Technology Center (AMTC), Universidad de Chile, Chile, 3Institute of Engineering Sciences, Universidad de O'Higgins, Chile

Video

Abstract

This work introduces a robot navigation controller that combines event cameras and other sensors with reinforcement learning to enable real-time, human-centered navigation and obstacle avoidance. Unlike conventional image-based controllers, which operate at fixed rates and suffer from motion blur and latency, this approach leverages the asynchronous nature of event cameras to process visual information over flexible time intervals, enabling adaptive inference and control. The framework integrates event-based perception, additional range sensing, and policy optimization via Deep Deterministic Policy Gradient (DDPG), with an initial imitation learning phase to improve sample efficiency. Experiments in simulated environments demonstrate robust navigation, pedestrian following, and obstacle avoidance.
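The flexible-time-interval idea above can be sketched in a few lines: asynchronous events, each a (timestamp, x, y, polarity) tuple, are accumulated into a polarity histogram over a window whose length is a free parameter, so the same event stream can be rendered at whatever rate the controller needs. This is a minimal illustrative sketch, not the paper's implementation; the function name and event layout are assumptions.

```python
import numpy as np

def events_to_frame(events, t_start, t_end, height, width):
    """Accumulate asynchronous events falling in [t_start, t_end)
    into a 2-channel polarity histogram (channel 0: OFF, channel 1: ON).

    `events` is an (N, 4) array of (t, x, y, p) rows with p in {0, 1}.
    The window [t_start, t_end) is arbitrary, which is what allows
    event-based perception to run at an adaptive rate.
    """
    frame = np.zeros((2, height, width), dtype=np.float32)
    if len(events) == 0:
        return frame
    t, x, y, p = events[:, 0], events[:, 1], events[:, 2], events[:, 3]
    mask = (t >= t_start) & (t < t_end)
    # Scatter-add each in-window event into its (polarity, y, x) bin.
    np.add.at(frame,
              (p[mask].astype(int), y[mask].astype(int), x[mask].astype(int)),
              1.0)
    return frame

# Synthetic stream: three events, two of which fall inside a 10 ms window.
events = np.array([
    [0.000, 3, 4, 1],
    [0.005, 3, 4, 1],
    [0.020, 7, 2, 0],   # outside the window below
])
frame = events_to_frame(events, t_start=0.0, t_end=0.010, height=8, width=8)
# frame[1, 4, 3] == 2.0 (two ON events at pixel x=3, y=4)
```

Shrinking or growing the window trades temporal resolution against event density, which is the lever a fixed-rate frame camera does not offer.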

A demo video is available at: https://youtu.be/dF8_ektJ8Nk.

BibTeX

@InProceedings{bugueno2025hrnavigation,
  author="Bugueno-Cordova, Ignacio
  and Ruiz-del-Solar, Javier
  and Verschae, Rodrigo",
  title="Human-Robot Navigation using Event-based Cameras and Reinforcement Learning",
  booktitle="IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)",
  year="2025",
}