Artificial Intelligence Assists with Reality Capture Workflows

Tony Sabat from the SSOE Group explains how artificial intelligence is the next step in improving the reality capture workflow.

If a picture is worth a thousand words, is a point cloud worth a million? And if a point cloud is worth a million, how much is a point cloud extraction? The race in reality capture hardware has taken a turn and branched in a new direction. Now that the industry understands the use cases for reality capture, the debate is no longer for or against adopting the technology, but over which hardware to use.

Once that debate is settled, the most difficult aspect of reality capture becomes extracting the data needed from the point cloud for downstream decision making. The tools used to extract that data have become almost as important as, if not more important than, the capture hardware itself in making such dense data consumable.

Extraction tools range from cross-sectioning routines to decimation features that thin the data so other programs can work with what was originally a very dense data set. With newer and better features arriving, the next step is already being explored: variations of artificial intelligence are beginning to reach the consumer market. For example, solutions now attempt to recognize point clusters and identify them as features, or at least to distinguish regions of the data based on color variation. Many of these tools fall under the machine learning category, where predetermined groups of points are identified according to the software's use case.
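As a toy illustration of the color-variation idea, here is a minimal sketch in Python with NumPy. The function name, the threshold, and the sample colors are all invented for the example; real tools use far richer color and geometric cues. It splits a colored point cloud into "vegetation-like" points and everything else by checking whether the green channel dominates:

```python
import numpy as np

def split_by_color(xyz, rgb, green_ratio=1.1):
    """Crude color-based segmentation: flag points whose green
    channel dominates red and blue (vegetation-like) vs the rest.
    `green_ratio` is an invented tuning knob for this sketch."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    veg = (g > green_ratio * r) & (g > green_ratio * b)
    return xyz[veg], xyz[~veg]

# Demo: 120 leafy-green returns and 80 grey concrete returns.
rng = np.random.default_rng(3)
xyz = rng.uniform(0.0, 1.0, (200, 3))
rgb = np.zeros((200, 3))
rgb[:120] = [60, 140, 50]    # green-dominant points
rgb[120:] = [120, 118, 115]  # neutral grey points
veg, rest = split_by_color(xyz, rgb)
print(len(veg), len(rest))   # → 120 80
```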

Some artificial intelligence tools aim primarily at the civil infrastructure use case, looking for horizontal linear features such as curb and gutter or edges of pavement. Other packages focus on vertical features, recognizing towers, poles, or tree variations. At a more macro scale, software targeting the aerial LiDAR industry applies machine learning to classify vegetation versus built structures, and so on.

Autonomous Vehicle Artificial Intelligence

So with all this expensive reality capture hardware and the accompanying expensive software, why are users looking to spend even more on solutions that take this data and essentially remove 90% or more of it? Point cloud data is heavy, and the hardware needed just to navigate such data sets has to be equally substantial. Next comes software able to consume the data on a well-equipped machine with enough storage and graphics performance to move through it without issue. Here is where some solutions have proven useful: they decimate the data either manually or adaptively, depending on the level of detail the user needs.
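A minimal sketch of what decimation can look like, assuming a simple voxel-grid approach (the function and parameters here are illustrative, not any particular vendor's implementation): each occupied voxel is replaced by the centroid of its points, so the voxel size directly controls how much detail survives.

```python
import numpy as np

def voxel_decimate(points, voxel_size):
    """Decimate a point cloud by keeping one representative
    point (the centroid) per occupied voxel."""
    # Map each point to an integer voxel index.
    idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points sharing a voxel and average them.
    _, inverse, counts = np.unique(
        idx, axis=0, return_inverse=True, return_counts=True)
    centroids = np.zeros((counts.size, 3))
    np.add.at(centroids, inverse, points)
    return centroids / counts[:, None]

# 100,000 random points in a 10 m cube, thinned to 0.5 m voxels.
rng = np.random.default_rng(0)
cloud = rng.uniform(0.0, 10.0, size=(100_000, 3))
thinned = voxel_decimate(cloud, 0.5)
print(len(cloud), "->", len(thinned))
```

Shrinking the voxel size preserves more detail at the cost of a heavier result, which is exactly the trade-off an adaptive decimator automates.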

Next comes the extraction of this data, and there are multiple techniques for doing it. The earliest extraction tools came from the hardware manufacturers and offered basic node snapping: the user would snap to specific points in the data set and manually build features for design or as-built documentation. Later iterations expanded the toolset by detecting recognizable planes and corner intersections, but the process remained manual. It can be laborious, and what these extraction tools promise is automatic snapping of points and automatic determination of which points belong to which feature.
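Plane detection of the kind described above is commonly built on sample-consensus fitting. Here is a hedged sketch of a basic RANSAC plane finder in Python/NumPy; the tolerances and the synthetic "wall" data are made up for the demo, and production tools are far more sophisticated:

```python
import numpy as np

def ransac_plane(points, n_iters=200, tol=0.02, rng=None):
    """Find the dominant plane in a point cloud with RANSAC.
    Returns (unit normal n, offset d) for the plane n.x = d,
    plus a boolean inlier mask."""
    rng = rng or np.random.default_rng(0)
    best_mask, best_model = None, None
    for _ in range(n_iters):
        # Candidate plane through 3 random points.
        a, b, c = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(b - a, c - a)
        norm = np.linalg.norm(n)
        if norm < 1e-12:              # degenerate (collinear) sample
            continue
        n /= norm
        d = n @ a
        mask = np.abs(points @ n - d) < tol
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_model = mask, (n, d)
    return best_model[0], best_model[1], best_mask

# Synthetic wall: a noisy plane z = 1 plus scattered outliers.
rng = np.random.default_rng(1)
plane = np.c_[rng.uniform(0, 5, (900, 2)),
              1 + rng.normal(0, 0.005, 900)]
noise = rng.uniform(0, 5, (100, 3))
pts = np.vstack([plane, noise])
n, d, inliers = ransac_plane(pts)
print("inliers:", inliers.sum())
```

The recovered normal points along the z axis and nearly all 900 planar points land in the inlier set, while the random outliers are rejected.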

Where these tools look to go next is toward the “automatic” classification and recognition of these features. I put the word automatic in quotes because many solutions are not fully automatic, and many of these automated processes still require some form of quality assessment and cleanup. Several different artificial intelligence approaches are aiming at the same goal. Some look for recognizable point clusters, as described earlier. A newer method applies machine learning algorithms to the photographs taken at the point of capture, whether from photogrammetry software or from the high-resolution, high-dynamic-range photos captured during terrestrial scanning. Even color differentiation has been used when combing through already processed point cloud data.
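One common way to recognize distinct point clusters is simple Euclidean clustering: points connected by hops shorter than a radius form one group. A brute-force sketch follows, with invented blob data standing in for, say, a pole and a tree; real packages use spatial indexes rather than this O(n²) search.

```python
import numpy as np

def euclidean_clusters(points, radius):
    """Group points into clusters: two points share a cluster if
    they are connected by hops of at most `radius`. Brute-force
    flood fill; fine for demo-sized clouds only."""
    n = len(points)
    labels = -np.ones(n, dtype=int)   # -1 = unassigned
    current = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        labels[seed] = current
        frontier = [seed]
        while frontier:
            i = frontier.pop()
            # Unassigned neighbors within `radius` join the cluster.
            near = np.where(
                (np.linalg.norm(points - points[i], axis=1) < radius)
                & (labels == -1))[0]
            labels[near] = current
            frontier.extend(near.tolist())
        current += 1
    return labels

# Two well-separated blobs of 50 points each.
rng = np.random.default_rng(2)
blob_a = rng.normal([0, 0, 0], 0.1, (50, 3))
blob_b = rng.normal([5, 0, 0], 0.1, (50, 3))
labels = euclidean_clusters(np.vstack([blob_a, blob_b]), radius=0.5)
print("clusters found:", labels.max() + 1)   # → clusters found: 2
```

Once clusters are isolated, a classifier (or a human) can decide whether each one is a pole, a tree, or clutter, which is the “automatic” step the article describes.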

Whether this push toward identification and classification came to the forefront because of the reality capture industry or because of interest from the automotive industry is a debate for another article, as is whether the autonomous car movement is now a subset of the reality capture industry because of this technology adoption. Autonomous vehicles have been using LiDAR sensors to classify captured data, i.e., to decide whether the object near the vehicle is a tree or a moving pedestrian, and to feed that classification back to the vehicle for a predetermined action. This sits on top of the debate over whether the data should even be captured with LiDAR or with traditional photographs, but regardless, the extraction tools for this data have proven incredibly valuable to consumers of heavy point cloud data sets.

So if a picture is worth 1,000 words and a point cloud is worth 1,000,000, how much is the data extracted from said point cloud worth? The extraction is typically only a fraction of the data, on the order of 10% and sometimes far less. This point cloud extraction could be considered the TLDR (Too Long; Didn’t Read) version of the data: it has been sifted and sorted to deliver the essential information the project needs.

Although the extracted data may be only 10% of what was captured, that does not mean it is worth 10% of the raw points. In fact, some would say it is worth even more, because it condenses a notoriously difficult data set into something easily consumed. That’s where artificial intelligence comes in. With the TLDR version of the point cloud, time is saved that would otherwise be spent working in an unedited, heavy data set, and the downstream project workflow becomes easier. In the words of Pascal, “The present letter is a very long one, simply because I had no leisure to make it shorter.” The ability of these solutions to create a concise data set proves incredibly valuable in making design decisions and keeping a project on track. Artificial intelligence is coming.

Tony Sabat, SSOE Group

