Research

Free viewpoint image generation for robot teleoperation

Virtual third-person view image generation for an indoor scene

For multiple construction machines

Robot teleoperation is important when the working environment is dangerous for humans, such as at disaster sites. For effective teleoperation, operators benefit from viewing the robot's surroundings from a third-person perspective. However, arranging an external camera to provide such views is challenging. Therefore, we propose methods that generate virtual third-person-perspective images using only sensors attached to the robot.

We have installed four fisheye cameras on the robot to capture 360° images; combined with the robot's depth sensors, these are used to generate virtual third-person-perspective images called free viewpoint images. Please refer to the papers here and here for further explanation.
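The core of free viewpoint generation can be sketched as a point-based warp: back-project each depth pixel to a 3D point, transform it into a virtual camera placed behind the robot, and re-project it. The sketch below assumes an undistorted pinhole model with shared intrinsics `K` for simplicity (the real system works on fisheye images), so it is an illustration of the idea, not the published method.

```python
import numpy as np

def render_virtual_view(depth, gray, K, T_vc, out_shape):
    """Point-based warp of one depth/intensity image into a virtual camera.

    depth, gray : (H, W) arrays from the robot camera (pinhole model here;
                  the real system handles fisheye distortion).
    K           : 3x3 intrinsics shared by both cameras (an assumption).
    T_vc        : 4x4 transform from the robot camera frame into the
                  virtual (third-person) camera frame.
    """
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([us.ravel(), vs.ravel(), np.ones(h * w)])
    pts = np.linalg.inv(K) @ pix * depth.ravel()      # 3D in robot camera
    pts = T_vc @ np.vstack([pts, np.ones(h * w)])     # into virtual camera
    z = pts[2]
    keep = z > 1e-6                                   # points in front of camera
    uv = (K @ pts[:3, keep]) / z[keep]                # perspective projection
    u = np.round(uv[0]).astype(int)
    v = np.round(uv[1]).astype(int)
    out = np.zeros(out_shape)
    ok = (u >= 0) & (u < out_shape[1]) & (v >= 0) & (v < out_shape[0])
    out[v[ok], u[ok]] = gray.ravel()[keep][ok]        # splat intensities
    return out
```

A production renderer would additionally z-buffer overlapping splats and fill holes; this sketch only shows the reprojection geometry.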

Dense depth prediction using multiple fisheye cameras

360° depth prediction using fisheye cameras

Depth estimation from multi-view images is useful for scene understanding and robot navigation. We work on 360° dense depth prediction using four fisheye cameras installed on a robot. To cope with the large distortion of fisheye lenses, we propose an icosahedron-based representation and employ icospherical sweeping to integrate the multi-view features into a 360° cost volume.

We also focus on computational efficiency: depth is estimated from the four fisheye images in under a second on a laptop with a GPU. Please refer to the papers here and here for further explanation.
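Spherical sweeping follows the same logic as plane-sweep stereo: for each ray direction (e.g. an icosphere vertex) and each hypothesized depth, sample all cameras at the resulting 3D point and score photometric consistency; the depth with the lowest cost wins. The sketch below uses hypothetical per-camera sampling callables in place of real fisheye warping, so it shows the sweep structure only.

```python
import numpy as np

def spherical_sweep(dirs, depth_candidates, sample_fns):
    """Minimal spherical-sweep sketch (hypothetical sampling interface).

    dirs             : (N, 3) unit ray directions, e.g. icosphere vertices.
    depth_candidates : (D,) hypothesized depths along each ray.
    sample_fns       : per-camera callables mapping 3D points (N, 3) to
                       intensities (N,); in the real system these warp
                       into each fisheye image.
    Returns the per-direction depth minimizing cross-camera variance.
    """
    cost = np.zeros((len(dirs), len(depth_candidates)))
    for j, d in enumerate(depth_candidates):
        pts = dirs * d                                    # hypothesized 3D points
        samples = np.stack([f(pts) for f in sample_fns])  # (cameras, N)
        cost[:, j] = samples.var(axis=0)                  # photometric consistency
    return depth_candidates[np.argmin(cost, axis=1)]
```

Learning-based variants replace the raw intensities with CNN features and regularize the cost volume before the argmin, but the sweep itself is unchanged.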

Robotics for Nuclear Application

Our laboratory has been actively working on robotics for nuclear environments since the Fukushima nuclear accident in 2011. I’ll introduce some of the projects I’ve been involved in.

Radiation source estimation

The first step in decommissioning the Fukushima Daiichi nuclear power plant is to reduce radiation levels at the site so that workers can stay longer and work more safely. It is therefore important to determine the distribution of radiation sources and decontaminate the estimated source locations. Because radiation levels are high, it is desirable to mount radiation detectors on a robot that autonomously explores the site to estimate the radiation sources.

Path planning for radiation estimation

Radiation estimation using filtered back-projection

We developed a path planning method to localize radiation sources. The method determines the next measurement point from previous measurements, allowing autonomous exploration. By applying principal component analysis to the simple back-projection results, the robot automatically moves toward the radiation sources and circles around them to localize them accurately. Please refer to the paper here.
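The two ingredients above can be illustrated in 2D: simple back-projection spreads each directional measurement's counts over the cells inside its viewing cone, and a weighted PCA of the resulting intensity map yields a dominant axis for the robot to follow. This is a toy sketch with an assumed cone-shaped detector response, not the paper's detector model.

```python
import numpy as np

def simple_backprojection(grid_xy, measurements, half_angle=np.pi / 8):
    """Accumulate each directional measurement's counts over the grid cells
    inside its viewing cone (2D sketch; assumed cone half-angle)."""
    intensity = np.zeros(len(grid_xy))
    for pos, heading, counts in measurements:
        rel = grid_xy - pos
        ang = np.arctan2(rel[:, 1], rel[:, 0])
        diff = np.abs((ang - heading + np.pi) % (2 * np.pi) - np.pi)
        intensity[diff < half_angle] += counts
    return intensity

def principal_axis(grid_xy, intensity):
    """Weighted PCA of the back-projected distribution: the mean and the
    dominant eigenvector suggest where the robot should head next."""
    w = intensity / intensity.sum()
    mean = w @ grid_xy
    cov = (grid_xy - mean).T @ ((grid_xy - mean) * w[:, None])
    _, vecs = np.linalg.eigh(cov)
    return mean, vecs[:, -1]   # eigh sorts eigenvalues ascending
```

Cones from different vantage points overlap near the true source, so the intensity peak and the PCA axis both point the robot toward it.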

We have also improved the radiation source estimation by utilizing filtered back-projection. Please refer to the paper here.

If you are interested in radiation source estimation using non-directional detectors such as Geiger counters, please check out the paper here as well.

Gamma ray irradiation experiment on camera

Gamma ray irradiation image noise and the simulated noise

Radiation affects not only people but also electronic equipment, so we need to consider its effects on the electronics of robots for nuclear applications. First, electronic devices malfunction after prolonged exposure to radiation; this cumulative damage is called the total ionising dose (TID) effect.

Critical components can sometimes be shielded with heavy metals such as lead or moved away from high-radiation areas, but cameras are among the components that are difficult to protect from radiation. We have carried out several gamma ray irradiation tests on commercial off-the-shelf cameras to investigate the TID effect. We also investigated the image noise caused by gamma radiation. Please refer to the paper here.
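Transient gamma-induced image noise is often approximated as randomly located saturated pixels whose count scales with dose rate. The snippet below is a toy simulator built on that assumption (Poisson-distributed hit count, single-pixel saturating hits); it is not the noise model fitted in the paper.

```python
import numpy as np

def add_radiation_noise(img, events_per_frame, rng=None):
    """Toy gamma-ray noise model (an assumption, not the paper's model):
    each photon hit saturates one random pixel, and the number of hits
    per frame is Poisson-distributed with the dose rate."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = img.copy()
    n = rng.poisson(events_per_frame)
    ys = rng.integers(0, img.shape[0], n)
    xs = rng.integers(0, img.shape[1], n)
    noisy[ys, xs] = 255                      # saturated "salt" pixels
    return noisy
```

Such simulated noise is useful for testing whether downstream vision algorithms (feature matching, detection) degrade gracefully under irradiation.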

Best viewpoint for operator to control robot arm

Robot teleoperation

For the decommissioning of the Fukushima Daiichi nuclear power plant, the nuclear fuel debris remaining inside the primary containment must be retrieved. One approach is teleoperation of a robotic arm. The viewpoints provided to the operator must be carefully selected to achieve efficient teleoperation, so we are working on viewpoint selection for robot teleoperation. Please refer to the papers here and here for further explanation.
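One simple way to frame viewpoint selection is as maximizing a score over candidate camera poses. The toy criterion below (count of task-relevant points inside the field of view, minus a distance penalty) is a hypothetical stand-in for the criteria studied in the papers, which also account for factors such as occlusion.

```python
import numpy as np

def viewpoint_score(cam_pos, look_at, task_points, fov=np.radians(60)):
    """Toy score: how many task points fall inside the camera's field of
    view, with a small penalty for viewing distance (assumed criterion)."""
    axis = look_at - cam_pos
    axis = axis / np.linalg.norm(axis)
    rel = task_points - cam_pos
    d = np.linalg.norm(rel, axis=1)
    cosang = rel @ axis / d
    visible = cosang > np.cos(fov / 2)       # inside the viewing cone
    return visible.sum() - 0.01 * d.mean()

def best_viewpoint(candidates, look_at, task_points):
    """Pick the highest-scoring candidate camera position."""
    return max(candidates,
               key=lambda c: viewpoint_score(np.array(c), look_at, task_points))
```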

Navigation and control for robot arm

Robot arm developed for the project

We are also working on navigation and control algorithms for a robotic manipulator for nuclear fuel debris retrieval. This is an ongoing joint project with the University of Sussex.

Robotics for Construction

The shrinking labor force in construction is one of the biggest social problems in the field. To tackle this problem, we are working collaboratively with various companies on several projects to develop core technologies, aiming to make current construction practice more efficient and to automate certain processes.

Action recognition of construction machine

Action recognition of excavator using keypoints

To enhance the efficiency of current construction work, it is crucial to monitor task progress and identify bottlenecks. We focus on tracking the progress of excavators, as they are among the most commonly used construction machines.

We install a camera at the construction site and apply computer vision algorithms to recognise the excavator's actions and monitor its progress. One problem is that, unlike human action recognition, there is only a limited amount of training data for excavator action recognition. Therefore, we utilize computer simulations to prepare motion data for training. This is still an ongoing project.
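The simulation-to-training pipeline can be caricatured in a few lines: generate synthetic keypoint trajectories per action, extract a motion feature, and classify new sequences against per-action centroids. Everything here (the single "boom angle" channel, the sinusoidal digging motion, the nearest-centroid classifier) is a hypothetical stand-in for the real simulator and learned models.

```python
import numpy as np

def simulate_sequences(action, n=20, T=30, rng=None):
    """Toy stand-in for the simulator: one 'boom angle' value per frame.
    Hypothetical motion models: digging oscillates, idling stays flat."""
    rng = np.random.default_rng() if rng is None else rng
    t = np.linspace(0, 2 * np.pi, T)
    base = 2.0 * np.sin(t) if action == "digging" else np.zeros(T)
    return base + 0.1 * rng.standard_normal((n, T))

def motion_energy(seq):
    """A single hand-crafted feature: variance of frame-to-frame change."""
    return np.var(np.diff(seq))

def train_centroids(actions, rng):
    """Mean feature per action over simulated training sequences."""
    return {a: np.mean([motion_energy(s) for s in simulate_sequences(a, rng=rng)])
            for a in actions}

def classify(seq, centroids):
    """Nearest-centroid classification in feature space."""
    feat = motion_energy(seq)
    return min(centroids, key=lambda a: abs(centroids[a] - feat))
```

A learned sequence model over full keypoint skeletons replaces the hand-crafted feature in practice, but the train-on-simulation, test-on-site structure is the same.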

Visual Stereo SLAM in dynamic construction environments

Extracted feature points with dynamic info labels for visual SLAM

Autonomous construction requires estimating the poses of construction machines. In certain scenarios, such as near mountains and valleys, the GNSS signal is unstable. We are therefore working on Visual Simultaneous Localization and Mapping (Visual SLAM) to localize construction machines.

Typically, Visual SLAM operates under the assumption that the environment is static. However, there are usually other construction machines moving around the site, which breaks this assumption and leads to poor localization accuracy. To address this problem, we employed object detection and semantic segmentation to determine the pixels that are unlikely to be static and reject them from the visual SLAM algorithm. We also utilized a hierarchical approach for efficient computation. Please refer to the paper here.
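The rejection step reduces to a per-keypoint lookup in the segmentation mask: any feature point landing on a pixel labeled as a movable class is discarded before tracking. The class list below is an assumed example, not the label set used in the paper.

```python
import numpy as np

DYNAMIC = {"excavator", "dump_truck", "person"}   # assumed movable classes

def filter_static_keypoints(keypoints, seg, class_names):
    """Keep only feature points whose segmentation label is not a movable
    class, so that (likely) static points feed the SLAM back end.

    keypoints   : list of (u, v) pixel coordinates.
    seg         : (H, W) integer label image from semantic segmentation.
    class_names : maps label index to class name.
    """
    return [(u, v) for u, v in keypoints
            if class_names[seg[v, u]] not in DYNAMIC]
```

Because segmentation is expensive, a hierarchical scheme can run it only on frames (or regions) where cheap cues such as object detections indicate possible motion.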

Innovative constructions

Earth moving task using multiple construction machines in OperaSim

Our laboratory is working on team organization and teleoperation of multiple construction machines as part of the CAFE project. The CAFE project aims to develop, by 2050, AI robots that adapt to diverse environments and act alongside humans. Focusing on natural disaster response and moonbase development, the project develops technology transferable between these two fields. Please check the CAFE project website for details.

I am mainly working on the teleoperation of multiple robots by a single operator. Ideally, multiple robots should perform their tasks autonomously without an operator's help. However, disaster sites are highly uncertain, so in some situations the robots cannot complete the given tasks by themselves. It is also difficult for one operator to remotely control each robot individually. Our main idea is therefore to rely on the robots' autonomy: the operator provides detailed instructions only when the robots cannot handle a situation properly. This is an ongoing project.