Obstacle location classification and self-localization by using a mobile omnidirectional camera based on tracked floor boundary points and tracked scale-rotation invariant feature points

Tsuyoshi Tasaki, Seiji Tokura, Takafumi Sonoura, Fumio Ozaki, Nobuto Matsuhira

Research output: Contribution to journal › Article

2 Citations (Scopus)

Abstract

For a mobile robot, self-localization and knowledge of the locations of all obstacles around it are essential. Moreover, classifying obstacles as stable or unstable and localizing quickly with a single sensor, such as an omnidirectional camera, are also important for achieving smooth movement and reducing the cost of the robot. However, there are few studies on locating and classifying all obstacles around the robot while localizing its own position quickly during motion using only one omnidirectional camera. In order to locate obstacles and localize the robot, we have developed a new method that uses two kinds of points that can be detected and tracked quickly even in omnidirectional images. In the obstacle location and classification process, we use floor boundary points, whose distance from the robot can be measured with an omnidirectional camera. By tracking these points, we can classify obstacles by comparing the movement of each tracked point with odometry data. Our method adjusts the threshold used to detect the points based on the result of this comparison in order to improve classification. In the self-localization process, we use tracked scale- and rotation-invariant feature points as new landmarks that remain detectable over long periods by combining a fast tracking method with the slower Speeded-Up Robust Features (SURF) method. Once landmarks are detected, they can be tracked quickly, so fast self-localization is achieved. The classification ratio of our method is 85.0%, four times higher than that of a previous method. With our method, the robot localizes 2.9 times faster and 4.2 times more accurately than with the SURF method alone.
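The abstract only sketches the classification step at a high level. The following is a minimal Python sketch, under assumed conventions (2-D robot-frame observations, (x, y, theta) odometry poses, a placeholder residual threshold; all names are illustrative and not taken from the paper), of the core idea: a tracked floor boundary point whose apparent motion is explained by odometry alone is treated as part of a stable obstacle, otherwise as unstable.

```python
import numpy as np

def classify_boundary_point(obs_prev, obs_curr, pose_prev, pose_curr,
                            threshold=0.05):
    """Label one tracked floor boundary point as 'stable' or 'unstable'.

    obs_prev, obs_curr : (x, y) position of the point in the robot frame at
        the previous and current image, as measured from the omnidirectional
        camera (distance to floor points is recoverable from the geometry).
    pose_prev, pose_curr : (x, y, theta) robot poses from odometry.
    threshold : residual in metres above which the point is treated as
        belonging to a moving obstacle (placeholder value, not from the paper).
    """
    obs_prev = np.asarray(obs_prev, dtype=float)
    obs_curr = np.asarray(obs_curr, dtype=float)

    # Project the previous observation into the odometry (world) frame.
    x0, y0, th0 = pose_prev
    c0, s0 = np.cos(th0), np.sin(th0)
    p_world = np.array([x0 + c0 * obs_prev[0] - s0 * obs_prev[1],
                        y0 + s0 * obs_prev[0] + c0 * obs_prev[1]])

    # Predict where a world-fixed point would now appear in the robot frame.
    x1, y1, th1 = pose_curr
    c1, s1 = np.cos(th1), np.sin(th1)
    dx, dy = p_world[0] - x1, p_world[1] - y1
    predicted = np.array([c1 * dx + s1 * dy,
                          -s1 * dx + c1 * dy])

    # If odometry alone explains the apparent motion, the obstacle did not move.
    residual = np.linalg.norm(obs_curr - predicted)
    return "stable" if residual < threshold else "unstable"


# Hypothetical usage: the robot drove 0.10 m forward and the point's apparent
# motion matches that ego-motion, so it is classified as stable.
print(classify_boundary_point(obs_prev=(1.0, 0.0), obs_curr=(0.9, 0.0),
                              pose_prev=(0.0, 0.0, 0.0),
                              pose_curr=(0.1, 0.0, 0.0)))  # -> "stable"
```

Note that the paper goes further and adapts the point-detection threshold based on the outcome of this comparison, and it pairs this with SURF-based landmark tracking for self-localization; the sketch above does not attempt to reproduce either of those components.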

Original language: English
Pages (from-to): 1012-1023
Number of pages: 12
Journal: Journal of Robotics and Mechatronics
Volume: 23
Issue number: 6
Publication status: Published - 2011 Dec

Keywords

  • Mobile robot
  • Obstacle classification
  • Omnidirectional camera
  • Self-localization

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
  • Computer Science(all)

Cite this

Obstacle location classification and self-localization by using a mobile omnidirectional camera based on tracked floor boundary points and tracked scale-rotation invariant feature points. / Tasaki, Tsuyoshi; Tokura, Seiji; Sonoura, Takafumi; Ozaki, Fumio; Matsuhira, Nobuto.

In: Journal of Robotics and Mechatronics, Vol. 23, No. 6, 12.2011, p. 1012-1023.

Research output: Contribution to journal › Article

@article{be2e3bc167cc43c48d119a0ab83b700c,
title = "Obstacle location classification and self-localization by using a mobile omnidirectional camera based on tracked floor boundary points and tracked scale-rotation invariant feature points",
abstract = "For a mobile robot, self-localization and knowledge of the locations of all obstacles around it are essential. Moreover, classifying obstacles as stable or unstable and localizing quickly with a single sensor, such as an omnidirectional camera, are also important for achieving smooth movement and reducing the cost of the robot. However, there are few studies on locating and classifying all obstacles around the robot while localizing its own position quickly during motion using only one omnidirectional camera. In order to locate obstacles and localize the robot, we have developed a new method that uses two kinds of points that can be detected and tracked quickly even in omnidirectional images. In the obstacle location and classification process, we use floor boundary points, whose distance from the robot can be measured with an omnidirectional camera. By tracking these points, we can classify obstacles by comparing the movement of each tracked point with odometry data. Our method adjusts the threshold used to detect the points based on the result of this comparison in order to improve classification. In the self-localization process, we use tracked scale- and rotation-invariant feature points as new landmarks that remain detectable over long periods by combining a fast tracking method with the slower Speeded-Up Robust Features (SURF) method. Once landmarks are detected, they can be tracked quickly, so fast self-localization is achieved. The classification ratio of our method is 85.0\%, four times higher than that of a previous method. With our method, the robot localizes 2.9 times faster and 4.2 times more accurately than with the SURF method alone.",
keywords = "Mobile robot, Obstacle classification, Omnidirectional camera, Self-localization",
author = "Tsuyoshi Tasaki and Seiji Tokura and Takafumi Sonoura and Fumio Ozaki and Nobuto Matsuhira",
year = "2011",
month = "12",
language = "English",
volume = "23",
pages = "1012--1023",
journal = "Journal of Robotics and Mechatronics",
issn = "0915-3942",
publisher = "Fuji Technology Press",
number = "6",

}
