
It wasn’t until the late 1980s and the introduction of commercially viable GPS systems that Lidar data became a useful tool for providing accurate geospatial measurements. Since then, research and development have rapidly advanced and improved Lidar technology.
What is SLAM?
SLAM stands for Simultaneous Localization and Mapping (sometimes called Synchronized Localization and Mapping). It is the process of mapping an area while simultaneously keeping track of the device's location within that area. This is what makes mobile mapping possible: large areas can be digitized in far less time than with static scanning. SLAM systems simplify data collection, providing a practical way to scan both outdoor and indoor environments.
Complex Algorithms That Map An Unknown Environment
SLAM software lets a device simultaneously localize (estimate its own position within the map) and map (build a virtual map of its surroundings) from the same stream of sensor data.
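The localize-then-map loop can be sketched in a toy one-dimensional example. Everything here is illustrative: the landmark names, the 50/50 correction blend, and the range measurements are assumptions, not part of any real SLAM library.

```python
# A minimal sketch of the SLAM loop for a robot moving along one axis.
# Gains, landmark names, and measurements are illustrative assumptions.

def slam_step(pose, map_landmarks, odometry, observations):
    """One iteration: localize with odometry, then map new landmarks.

    pose          -- current 1D position estimate
    map_landmarks -- dict of landmark id -> estimated position
    odometry      -- distance the wheels report having travelled
    observations  -- dict of landmark id -> measured range to that landmark
    """
    # Localize: predict the new pose from odometry (dead reckoning).
    pose = pose + odometry

    # If a known landmark is re-observed, blend the pose toward the
    # position the landmark measurement implies (a crude correction).
    for lid, rng in observations.items():
        if lid in map_landmarks:
            pose = 0.5 * pose + 0.5 * (map_landmarks[lid] - rng)

    # Map: register any landmarks seen for the first time.
    for lid, rng in observations.items():
        if lid not in map_landmarks:
            map_landmarks[lid] = pose + rng
    return pose, map_landmarks

pose, world = 0.0, {}
pose, world = slam_step(pose, world, odometry=1.0, observations={"door": 4.0})
pose, world = slam_step(pose, world, odometry=1.0, observations={"door": 3.0})
# pose -> 2.0, world -> {"door": 5.0}
```

Real systems replace the ad-hoc blend with a probabilistic filter or graph optimization, but the alternation of a motion prediction with a map-based correction is the same.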
Basic Positional Data, Using An Inertial Measurement Unit (IMU)
Using this sensor data, the device computes a 'best estimate' of its position. New positional information is collected every few seconds; as observed features are aligned with the map, the estimate improves.
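The 'best estimate' idea can be illustrated with a short dead-reckoning sketch: accelerometer readings are integrated into a position that slowly drifts, then pulled back toward the position implied by aligned scan features. The sample values and the 80/20 blending gain are assumptions for illustration.

```python
# A hedged sketch of IMU dead reckoning and feature-based correction.
# Sample values, timestep, and the correction gain are illustrative.

def integrate_imu(pos, vel, accel_samples, dt):
    """Integrate accelerometer samples (m/s^2) over timestep dt (s)."""
    for a in accel_samples:
        vel += a * dt          # acceleration -> velocity
        pos += vel * dt        # velocity -> position
    return pos, vel

pos, vel = 0.0, 0.0
pos, vel = integrate_imu(pos, vel, [1.0, 1.0, 0.0, 0.0], dt=0.5)  # pos drifts to 1.75

# When scan features align with the map, blend the drifting IMU estimate
# toward the feature-derived position (a simple complementary filter).
feature_pos = 1.9                      # position implied by aligned features
pos = 0.8 * feature_pos + 0.2 * pos    # trust the features more than raw IMU
```

This captures why the estimate improves with each alignment: IMU integration alone accumulates error, and every successful feature match resets that drift.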
There Are Many Different Types Of Algorithms and Approaches To SLAM
Graph SLAM
EKF SLAM
Fast SLAM
Topological SLAM
Visual SLAM
2D Lidar SLAM
3D Lidar SLAM
ORB SLAM
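To make one entry from the list above concrete, here is a minimal one-dimensional EKF SLAM sketch: the state holds the robot position and a single landmark position, a prediction step applies odometry, and an update step fuses a range measurement. The motion model, noise variances, and measurement sequence are illustrative assumptions.

```python
import numpy as np

# Minimal 1D EKF SLAM sketch: state x = [robot_x, landmark_x].
# Noise values and measurements are invented for illustration.

x = np.array([0.0, 0.0])            # robot at 0, landmark position unknown
P = np.diag([0.01, 100.0])          # near-certain pose, very uncertain landmark
Q, R = 0.1, 0.05                    # motion and measurement noise variances

def predict(x, P, u):
    """Motion update: robot moves by u, landmark stays put."""
    x = x + np.array([u, 0.0])
    P = P + np.diag([Q, 0.0])       # motion Jacobian is the identity here
    return x, P

def update(x, P, z):
    """Measurement update: z is the observed range landmark_x - robot_x."""
    H = np.array([[-1.0, 1.0]])     # Jacobian of h(x) = x[1] - x[0]
    y = z - (x[1] - x[0])           # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + (K @ np.array([y])).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Drive forward 1 m per step while ranging a landmark that sits near 5 m.
for u, z in [(1.0, 4.0), (1.0, 3.0), (1.0, 2.0)]:
    x, P = predict(x, P, u)
    x, P = update(x, P, z)
# x converges toward robot ~3.0, landmark ~5.0
```

Real EKF SLAM extends the same predict/update structure to 2D or 3D poses and many landmarks; Graph SLAM and FastSLAM solve the same estimation problem with optimization and particle filters instead.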
Mobile mapping devices use Visual and Lidar SLAM to produce point clouds.
What Is Visual SLAM?
Visual SLAM calculates the position and orientation of a device with respect to its surroundings while simultaneously mapping the environment, using only visual input from a camera.
Feature-based visual SLAM typically tracks points of interest through successive camera frames, triangulating their 3D positions relative to the camera. This information is used to build a 3D map.
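The triangulation step above can be sketched with the standard linear (DLT) method: given a feature's pixel location in two frames and the camera matrices for those frames, its 3D position is the least-squares solution of a small homogeneous system. The intrinsics and camera poses below are invented for illustration.

```python
import numpy as np

# Sketch of two-view triangulation as used in feature-based visual SLAM.
# Intrinsics K and the 1 m sideways camera motion are illustrative assumptions.

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])          # pinhole camera intrinsics

def projection(R, t):
    """3x4 projection matrix P = K [R | t]."""
    return K @ np.hstack([R, t.reshape(3, 1)])

P1 = projection(np.eye(3), np.zeros(3))                  # first frame at origin
P2 = projection(np.eye(3), np.array([-1.0, 0.0, 0.0]))   # camera moved 1 m right

def project(P, X):
    """Project a homogeneous 3D point to pixel coordinates."""
    x = P @ X
    return x[:2] / x[2]

def triangulate(P1, P2, u1, u2):
    """Linear (DLT) triangulation: solve A X = 0 by SVD."""
    A = np.vstack([
        u1[0] * P1[2] - P1[0],
        u1[1] * P1[2] - P1[1],
        u2[0] * P2[2] - P2[0],
        u2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                    # null-space vector = homogeneous 3D point
    return X[:3] / X[3]

X_true = np.array([0.5, 0.2, 4.0, 1.0])     # ground-truth point (homogeneous)
u1, u2 = project(P1, X_true), project(P2, X_true)
X_hat = triangulate(P1, P2, u1, u2)         # recovers ~[0.5, 0.2, 4.0]
```

In a real pipeline the pixel locations come from matched features (e.g. ORB descriptors) rather than a known ground-truth point, and the recovered points are refined by bundle adjustment as the map grows.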
