
SLAM (Simultaneous Localization and Mapping): Key Concepts

by 에아오요이가야 2024. 6. 18.

As its name suggests, SLAM is an algorithm that performs localization and mapping at the same time.

 

Key Concepts of SLAM

1. Localization : Determining the position and orientation of the robot within the map.

2. Mapping : Building a map of the environment from sensor data.

3. Sensors : Typically involves various types of sensors, such as LiDAR, cameras, IMUs (Inertial Measurement Units), and sonar, to collect data about the environment.

 

4. Algorithms : Several algorithms are used for SLAM, including:

- EKF (Extended Kalman Filter) SLAM : Uses probabilistic methods to estimate the robot's position and build the map.

- Particle Filter SLAM : Also known as Monte Carlo Localization; uses a set of particles to represent the possible locations of the robot.

- Graph-Based SLAM : Builds a graph in which nodes represent poses of the robot and edges represent constraints between poses.
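To make the particle-filter idea concrete, here is a minimal 1-D Monte Carlo Localization sketch in pure Python. It is illustrative only: the landmark position, noise levels, and motion sequence are all invented for this example, and a real SLAM system would of course work in 2-D or 3-D with many landmarks.

```python
import math
import random

def mcl_step(particles, control, measurement, landmark=5.0,
             motion_noise=0.2, sensor_noise=0.5):
    """One predict-update-resample cycle of a 1-D particle filter."""
    # 1. Predict: move every particle by the control input plus motion noise.
    moved = [p + control + random.gauss(0.0, motion_noise) for p in particles]
    # 2. Update: weight each particle by how well its predicted range to the
    #    landmark explains the measured range (Gaussian likelihood).
    weights = [math.exp(-((abs(landmark - p) - measurement) ** 2)
                        / (2 * sensor_noise ** 2)) + 1e-12 for p in moved]
    # 3. Resample: draw a new particle set in proportion to the weights.
    return random.choices(moved, weights=weights, k=len(particles))

random.seed(0)
particles = [random.uniform(0.0, 10.0) for _ in range(500)]  # unknown start
true_pos = 2.0
for _ in range(10):                      # robot moves +0.5 per step
    true_pos += 0.5
    z = abs(5.0 - true_pos)              # measured range to the landmark
    particles = mcl_step(particles, 0.5, z)

estimate = sum(particles) / len(particles)
print(f"true={true_pos:.2f}  estimate={estimate:.2f}")
```

Note how the initially uniform particle cloud collapses onto the true position as consistent motion and measurements rule out the mirror-image hypothesis on the other side of the landmark.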

 

Challenges in SLAM

1. Dynamic Environments : Moving objects and changes in the environment can complicate the mapping and localization process.

2. Sensor Noise : Sensors can be inaccurate or noisy, which degrades the quality of both the map and the localization.

3. Computational Complexity : SLAM can be computationally intensive, requiring efficient algorithms and hardware.

4. Data Association : Identifying whether a feature observed at different times corresponds to the same physical feature — e.g., recognizing that the same object seen from two viewpoints is one landmark, not two.
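A common baseline for the data-association problem is nearest-neighbour matching with a gating threshold: each observation is assigned to the closest known landmark, but only if that landmark lies within a maximum distance. The coordinates and the gate value below are made up for this sketch.

```python
import math

def associate(observations, landmarks, gate=1.0):
    """Match each observation to the nearest known landmark.

    Returns (obs_index, landmark_index) pairs; observations with no
    landmark inside the gating distance stay unmatched (new features).
    """
    matches = []
    for i, obs in enumerate(observations):
        best_j, best_d = None, gate
        for j, lm in enumerate(landmarks):
            d = math.dist(obs, lm)          # Euclidean distance
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            matches.append((i, best_j))
    return matches

landmarks = [(0.0, 0.0), (4.0, 1.0), (2.0, 3.0)]       # map features
observations = [(0.2, -0.1), (4.3, 1.2), (9.0, 9.0)]   # current sensor frame
print(associate(observations, landmarks))  # → [(0, 0), (1, 1)]
```

The third observation falls outside every gate, so a SLAM front end would treat it as a newly discovered landmark rather than forcing a wrong match.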

 

Recent Advances 

1. Visual SLAM : Uses camera data to perform SLAM, which is particularly useful for indoor environments where LiDAR may not be effective.

2. RGB-D SLAM : Uses RGB-D cameras, which provide color and depth information, to improve the accuracy of SLAM.

3. Deep Learning : Integrating deep learning techniques to improve feature extraction and data association in SLAM.
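In visual SLAM systems such as ORB-SLAM, features from consecutive frames are matched by Hamming distance between binary descriptors. The sketch below uses invented 16-bit descriptors purely for illustration (real ORB descriptors are 256-bit, and production code would use an optimized matcher rather than brute force).

```python
def hamming(a: int, b: int) -> int:
    """Number of differing bits between two binary descriptors."""
    return bin(a ^ b).count("1")

def match_descriptors(query, train, max_dist=4):
    """Brute-force match each query descriptor to its closest train descriptor."""
    matches = []
    for i, q in enumerate(query):
        j, d = min(((j, hamming(q, t)) for j, t in enumerate(train)),
                   key=lambda jd: jd[1])
        if d <= max_dist:                 # reject implausibly distant matches
            matches.append((i, j, d))
    return matches

# Invented 16-bit descriptors from two consecutive camera frames.
frame1 = [0b1010101010101010, 0b1111000011110000]
frame2 = [0b1010101010101110, 0b0000111100001111, 0b1111000011110001]
print(match_descriptors(frame1, frame2))  # → [(0, 0, 1), (1, 2, 1)]
```

Each descriptor in frame 1 finds its near-identical counterpart in frame 2 (one flipped bit each), while the unrelated descriptor in frame 2 is ignored — this matching step is exactly where deep-learned descriptors can replace hand-crafted ones.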

 

Tools and Libraries

- ROS (Robot Operating System) : Provides a flexible framework for writing robot software, including SLAM packages such as gmapping, Cartographer, and ORB-SLAM.

- OpenCV : Used in visual SLAM.

- PCL (Point Cloud Library) : Used in LiDAR-based SLAM.
