EMSGC: event-based motion segmentation with spatio-temporal graph cuts

We develop a method to identify independently moving objects in data acquired with an event-based camera, i.e., to solve the event-based motion segmentation problem. We cast the problem as energy minimization involving the fitting of multiple motion models. We jointly solve two subproblems, namely event-cluster assignment (labeling) and motion model fitting, in an iterative manner by exploiting the structure of the input event data in the form of a spatio-temporal graph.
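As a rough, hypothetical illustration of the alternating scheme described above (not the actual EMSGC implementation, which performs the labeling with graph cuts over the spatio-temporal graph and includes smoothness terms that are omitted here), the sketch below iterates between refitting per-cluster models and reassigning events; the function names and the toy "cluster centroid" stand-in for a parametric motion model are assumptions for illustration only.

```python
# Minimal sketch of an alternating labeling / model-fitting loop (hypothetical
# names, toy model; not the EMSGC code and without the graph-cut smoothness step).
import numpy as np

def fit_motion_model(events):
    """Toy stand-in for a parametric motion model: the cluster centroid in (x, y)."""
    return events[:, :2].mean(axis=0)

def residual(events, model):
    """Per-event fitting cost under a model (here: distance to the cluster centroid)."""
    return np.linalg.norm(events[:, :2] - model, axis=1)

def alternate(events, num_models, iters=10, seed=0):
    """events: (N, >=2) array with (x, y, ...) per event."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(num_models, size=len(events))      # random initial labeling
    for _ in range(iters):
        # Model-fitting step: refit each cluster's model from its assigned events.
        models = [fit_motion_model(events[labels == k]) if np.any(labels == k)
                  else rng.normal(size=2) for k in range(num_models)]
        # Labeling step: reassign each event to its best-fitting model.
        costs = np.stack([residual(events, m) for m in models], axis=1)
        new_labels = costs.argmin(axis=1)
        if np.array_equal(new_labels, labels):                # converged
            break
        labels = new_labels
    return labels, models
```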

Project: https://sites.google.com/view/emsgc
Video: https://www.youtube.com/watch?v=ztUyNlKUwcM
Code: https://github.com/HKUST-Aerial-Robotics/EMSGC

 

Fast autonomous exploration with quadrotors

We present an efficient framework for fast autonomous exploration of complex unknown environments with quadrotors. Our approach achieves a significantly higher exploration rate than recent methods, thanks to careful planning of viewpoints, tours, and trajectories.

We further propose a fully decentralized approach for exploration tasks using a fleet of quadrotors. The quadrotor team operates with asynchronous and limited communication, and does not require any central control. The coverage paths and workload allocations of the team are optimized and balanced in order to fully realize the system's potential. Check our recent paper, videos and code for more details.
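As a loose, hypothetical illustration of the viewpoint-and-tour planning idea (not the FUEL algorithm itself, which plans the tour far more rigorously), the toy sketch below orders candidate viewpoints with a nearest-neighbor heuristic; the function name and data layout are assumptions.

```python
# Toy sketch of a coverage tour over candidate viewpoints (nearest-neighbor
# heuristic; hypothetical, not the FUEL implementation).
import numpy as np

def greedy_tour(current_pos, viewpoints):
    """Return viewpoint indices in a nearest-neighbor visiting order."""
    remaining = list(range(len(viewpoints)))
    tour, pos = [], np.asarray(current_pos, dtype=float)
    while remaining:
        dists = [np.linalg.norm(viewpoints[i] - pos) for i in remaining]
        nxt = remaining.pop(int(np.argmin(dists)))
        tour.append(nxt)
        pos = viewpoints[nxt]
    return tour

viewpoints = np.array([[4.0, 1.0], [1.0, 2.0], [3.0, 5.0]])
print(greedy_tour([0.0, 0.0], viewpoints))   # [1, 0, 2]
```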

Authors: Boyu Zhou, Yichen Zhang, Hao Xu, Xinyi Chen and Shaojie Shen
Videos: video 1, video 2
Code: https://github.com/HKUST-Aerial-Robotics/FUEL

 

Event-based visual odometry: A short tutorial

Dr. Yi Zhou has been invited to give a tutorial on event-based visual odometry at the upcoming 3rd Event-based Vision Workshop at CVPR 2021 (Saturday, June 19, 2021).
The talk covers the following aspects:
* A brief literature review of the development of event-based methods;
* A discussion of the core problem of event-based VO from a methodological perspective;
* An introduction to our ESVO system and some updates on recent progress in driving scenarios.

Workshop Webpage: https://tub-rip.github.io/eventvision2021/
Video: https://www.youtube.com/watch?v=U0ghh-7kQy8&ab_channel=RPGWorkshops
Slides: https://tub-rip.github.io/eventvision2021/slides/CVPRW21_Yi_Zhou_Tutorial.pdf

 

Event-based stereo visual odometry

 

Check out our new work, "Event-based Stereo Visual Odometry", in which we dive into the largely unexplored topic of stereo SLAM with event cameras and propose a real-time solution.

Authors: Yi Zhou, Guillermo Gallego and Shaojie Shen

Code: https://github.com/HKUST-Aerial-Robotics/ESVO

Project webpage: https://sites.google.com/view/esvo-project-page/home

Paper: https://arxiv.org/pdf/2007.15548.pdf

 

Quadrotor fast flight in complex unknown environments

We present RAPTOR, a Robust And Perception-aware TrajectOry Replanning framework that enables fast and safe flight in complex unknown environments. Its main features are:

(a) finding feasible and high-quality trajectories in very limited computation time, and

(b) introducing a perception-aware strategy to actively observe and avoid unknown obstacles.

Specifically, a path-guided optimization (PGO) approach that incorporates multiple topological paths is devised to search the solution space efficiently and thoroughly. Trajectories are further refined to have higher visibility and sufficient reaction distance to unknown dangerous regions, while the yaw angle is planned to actively explore the surrounding space relevant for safe navigation.
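As a very rough, hypothetical sketch of the path-guided idea (not RAPTOR's actual PGO formulation, which also accounts for collisions, dynamic feasibility, visibility, and yaw), the snippet below shows a toy objective that pulls waypoints toward a guiding topological path while keeping the trajectory smooth; the cost terms, weights, and optimizer are illustrative assumptions.

```python
# Toy path-guided trajectory refinement (hypothetical; not the RAPTOR planner).
import numpy as np

def guided_cost(waypoints, guide, w_smooth=1.0, w_guide=0.5):
    """Smoothness (squared second differences) + attraction to a guiding path.
    waypoints, guide: (N, 2) arrays of 2-D positions sampled at matching indices."""
    acc = waypoints[2:] - 2.0 * waypoints[1:-1] + waypoints[:-2]   # acceleration surrogate
    return w_smooth * np.sum(acc ** 2) + w_guide * np.sum((waypoints - guide) ** 2)

def descend(waypoints, guide, steps=200, lr=0.05, **kw):
    """Crude numerical gradient descent on the toy cost; endpoints stay fixed."""
    x = waypoints.astype(float).copy()
    eps = 1e-5
    for _ in range(steps):
        grad = np.zeros_like(x)
        for idx in np.ndindex(x.shape):
            pert = x.copy(); pert[idx] += eps
            grad[idx] = (guided_cost(pert, guide, **kw) - guided_cost(x, guide, **kw)) / eps
        grad[0] = grad[-1] = 0.0          # keep start and goal fixed
        x -= lr * grad
    return x

guide = np.array([[0, 0], [1, 1], [2, 1], [3, 0]], dtype=float)
init = np.array([[0, 0], [1, 0], [2, 0], [3, 0]], dtype=float)
print(np.round(descend(init, guide), 2))  # waypoints pulled toward the guiding path
```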

Authors: Boyu Zhou, Jie Pan, Fei Gao and Shaojie Shen

Code: https://github.com/HKUST-Aerial-Robotics/Fast-Planner

 

Code for autonomous drone race is now available on GitHub

We released Teach-Repeat-Replan, a complete and robust system that enables autonomous drone racing.

Teach-Repeat-Replan can be applied in situations where the user has a rough preferred route but is not able to pilot the drone ideally, such as drone racing. With our system, the human pilot can virtually control the drone with naive operations; our system then automatically generates an efficient repeating trajectory and executes it autonomously. During the flight, unexpected collisions are avoided by onboard sensing and replanning. Teach-Repeat-Replan can also be used for normal autonomous navigation: a drone can autonomously fly in complex environments using only onboard sensing and planning.

Major components are:

  • Planning: flight corridor generation, global spatial-temporal planning, local online re-planning
  • Perception: global deformable surfel mapping, local online ESDF mapping
  • Localization: global pose graph optimization, local visual-inertial fusion
  • Control: geometric controller on SE(3)
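As a loose, hypothetical illustration of the global spatial-temporal planning component listed above (not the actual Teach-Repeat-Replan planner), the toy below allocates a traversal time to each segment of a taught path using a simple trapezoidal velocity profile; the limits and function names are assumptions.

```python
# Toy time allocation along a taught path under velocity/acceleration limits
# (hypothetical sketch; not the Teach-Repeat-Replan planner).
import numpy as np

def segment_time(dist, v_max, a_max):
    """Minimum time to traverse `dist` starting and ending at rest."""
    d_acc = v_max ** 2 / a_max          # distance to accelerate to v_max and brake back
    if dist < d_acc:                    # triangular profile: v_max never reached
        return 2.0 * np.sqrt(dist / a_max)
    return 2.0 * v_max / a_max + (dist - d_acc) / v_max  # trapezoidal profile

def allocate_times(waypoints, v_max=3.0, a_max=2.0):
    waypoints = np.asarray(waypoints, dtype=float)
    dists = np.linalg.norm(np.diff(waypoints, axis=0), axis=1)
    return [segment_time(d, v_max, a_max) for d in dists]

print(allocate_times([[0, 0], [2, 0], [2, 3]]))  # per-segment durations in seconds
```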

Authors: Fei Gao, Boyu Zhou, and Shaojie Shen

Videos: Video 1, Video 2
Code: https://github.com/HKUST-Aerial-Robotics/Teach-Repeat-Replan

 

Code for VINS-Fusion is now available on GitHub

VINS-Fusion is an optimization-based multi-sensor state estimator that achieves accurate self-localization for autonomous applications (drones, cars, and AR/VR). It extends VINS-Mono to support multiple visual-inertial sensor types (mono camera + IMU, stereo cameras + IMU, and even stereo cameras only). We also show a toy example of fusing VINS with GPS. Features:

  • multiple sensors support (stereo cameras / mono camera+IMU / stereo cameras+IMU)
  • online spatial calibration (transformation between camera and IMU)
  • online temporal calibration (time offset between camera and IMU)
  • visual loop closure.
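As a conceptual, hypothetical illustration of the VINS + GPS fusion idea mentioned above (not the VINS-Fusion implementation), the 1-D toy below combines relative odometry constraints with absolute GPS fixes via weighted linear least squares; the measurements and weights are made up.

```python
# 1-D toy: fuse relative odometry constraints with absolute GPS fixes by
# weighted least squares (conceptual sketch only; not VINS-Fusion code).
import numpy as np

odom = [1.0, 1.1, 0.9]                # relative displacements x[i+1] - x[i]
gps = {0: 0.0, 3: 3.2}                # absolute position fixes at some indices
w_odom, w_gps = 10.0, 1.0             # confidence weights (~ 1 / sigma^2)

n = len(odom) + 1
rows, rhs, weights = [], [], []
for i, d in enumerate(odom):          # odometry factors: x[i+1] - x[i] = d
    r = np.zeros(n); r[i + 1], r[i] = 1.0, -1.0
    rows.append(r); rhs.append(d); weights.append(w_odom)
for i, z in gps.items():              # GPS factors: x[i] = z
    r = np.zeros(n); r[i] = 1.0
    rows.append(r); rhs.append(z); weights.append(w_gps)

A = np.sqrt(np.array(weights))[:, None] * np.array(rows)
b = np.sqrt(np.array(weights)) * np.array(rhs)
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(x, 3))                 # fused 1-D trajectory estimate
```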

As of 12 January 2019, VINS-Fusion is the top-ranked open-source stereo algorithm on the KITTI Odometry Benchmark.

Authors: Tong Qin, Shaozu Cao, Jie Pan, Peiliang Li and Shaojie Shen

Code: https://github.com/HKUST-Aerial-Robotics/VINS-Fusion