Standards are the Key to Harnessing Technology
Shane MacLaughlin, Managing Director at Atlas Computers in Ireland, provides an in-depth look into the need for open standards.
Topographic Survey Specification for Urban Projects

Shane MacLaughlin
Some years ago I chaired a working group, hosted by Dublin City Council, tasked with developing a robust specification for urban topographic survey projects. The working group came into being at the behest of the city council's QBN department, which was experiencing severe difficulties in getting consistent survey deliverables, in terms of content, quality and accuracy, across a number of survey providers in a competitive market.
The working group comprised members representing survey providers, survey consumers, survey institutions and survey software developers. The goals were to develop a specification that delivered the following:
- Consistent results, independent of survey provider and equipment used, in terms of digital structure, cartography and reporting across a range of client packages (in this case Microstation, MX Roads and AutoCAD).
- Testable and transparent quality, in terms of accuracy and completeness, for cartography, DTM and section products and all supporting reports.
- Support for multiple grids with minimal additional cost.
- Results readily achievable by the survey provider in a cost-effective manner.
After a couple of years and a number of iterations, the Topographic Survey Specification for Urban Projects[1] was published in 2009 and has been in use to good effect ever since. For the specification to work, we provided vendor-specific implementations of feature libraries and code lists for different instruments. We also created and ran training courses to bring surveyors up to speed with what was required and how to achieve it.
One of the more important aspects of this specification is the provision for independent check surveys to verify that any given survey fully complied with the specification. Previous experience showed that specifications and standards are of little value unless they’re rigorously enforced, and that independent checks remain the best mechanism to do this. While this adds a small additional cost to survey procurement without immediate visible benefit, it has proven invaluable in the longer term and was key to success in this case.
Peter Muller of Dublin City Council had check surveys carried out on work by all survey providers and returned any work that failed to meet specification for any reason. While this caused some consternation among the survey providers in the short term, it led to a significant rise in quality in the medium term, such that it became exceptional for a survey to fail a check.
OpenLSEF – New Standards
I hadn’t given much thought to the Dublin specification over the last couple of years, having become embroiled in emerging technologies in the survey industry such as scanners, UAVs and mobile LIDAR, until I recently came across OpenLSEF[2] while browsing the Laser Scanning Forum[3].
The stated purpose of the OpenLSEF initiative is “to create a common language describing how features in 3D point clouds should be defined”. Given that a large part of my work involves developing tools to automatically extract meaningful data from point clouds, I registered, was contacted by Gene Roe, and found that survey standards and QA were again becoming a hot topic.
Some of the work already developed by OpenLSEF members is invaluable for those working with this type of data, including the NCHRP Guidelines for the Use of Mobile LIDAR in Transportation Applications[4] and the development of the ASTM E57[5] format.
While the Dublin specification covers a number of current requirements, there have been major advancements in survey technology over the last decade. In the field we’re seeing large scale adoption of mass data collection systems. These include static scanning, mobile LIDAR, SLAM, UAVs, terrestrial photogrammetry and side-scan sonar, all of which can result in massive datasets of varying accuracy and content.
Data Challenges
We’ve quickly moved from taking a number of seconds to observe a point selected by the surveyor to having devices that can collect anything from tens of thousands to millions of arbitrary points per second. While this has provided some fantastic opportunities, it has also brought many attendant problems. These include dealing with new types of data such as point clouds and geo-referenced images and video, dealing with huge datasets in new formats, understanding the underlying potential quality issues with those datasets, and extracting the required model data while ensuring that required quality standards have been met. Within point clouds there are numerous potential sources of error, which vary with the equipment and techniques used. Some of these include the following:
Weak control – Like a traditional survey, many point cloud datasets, such as static scans and UAV photogrammetry, rely on ground control. While techniques such as scan-to-scan registration can provide alternatives in some scenarios, in larger linear jobs traditional ground control is still needed and any errors here will be reflected in the data. Other measurement methods such as mobile LIDAR and SLAM incorporate different mechanisms, such as IMUs and GPS, to transform and align range information onto a local grid, and these may also introduce errors. As with any survey, the data needs to be checked, and with large point cloud datasets this needs to be done before commencing time-consuming model extraction work.
Missing data – Scanners, LIDAR and photogrammetry all suffer from having scan shadows or significant gaps in the data where there is an object between the sensor and the target. For example, a car in the way of a kerb line when scanning, or an overhanging building eave in a UAV survey. Low point density where scan setups are too far apart can also be an issue.
Erroneous data – Items included in the point cloud that are not part of what is being surveyed, such as vehicles, pedestrians, and vegetation that obscures the underlying ground.
Low point accuracy – The accuracy of individual points varies significantly between devices, currently ranging from about 3mm to 30mm for survey-grade devices. Noise is obviously visible when looking at a thin slice of scanned data taken from a flat surface. It can be less obvious with UAV data, where incorrectly estimated lens parameters can lead to systematic errors. Not every device is suitable for every survey application.
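As a concrete illustration of the thin-slice test just described, the following is a minimal sketch in Python (using NumPy) that fits a least-squares plane to points taken from a nominally flat surface and reports the RMS deviation. The simulated data and the 4mm noise figure are purely illustrative, not drawn from any specification.

```python
import numpy as np

def slice_noise_rms(points: np.ndarray) -> float:
    """RMS deviation of points from a best-fit plane.

    points: (N, 3) array of x, y, z taken from a thin slice of a
    nominally flat surface (e.g. a wall or road scanned from one setup).
    """
    centroid = points.mean(axis=0)
    centred = points - centroid
    # The right singular vector with the smallest singular value is the
    # normal of the least-squares plane through the points.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    normal = vt[-1]
    residuals = centred @ normal          # signed distance to the plane
    return float(np.sqrt(np.mean(residuals ** 2)))

if __name__ == "__main__":
    # Simulated flat surface with ~4mm of range noise, for illustration.
    rng = np.random.default_rng(0)
    xy = rng.uniform(0.0, 2.0, size=(5000, 2))
    z = rng.normal(0.0, 0.004, size=5000)
    pts = np.column_stack([xy, z])
    print(f"RMS from best-fit plane: {slice_noise_rms(pts) * 1000:.1f} mm")
```

An RMS well above the instrument's quoted range noise on such a slice points to registration or calibration problems rather than pure sensor noise.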
Quality Control
These are just a small sample of the many potential problems someone working with point cloud data is liable to encounter on a regular basis, which in turn can lead to lost time and money, and a potentially substandard result passed on to the client. In order to minimise this risk, it is necessary to develop workflows and quality control checks when collecting and processing this data to ensure the results are fit for purpose.
As with any survey, one of the best and simplest methods is to collect extra redundant data for testing purposes. For mobile LIDAR, this includes multiple passes of the area being surveyed which will also fill in gaps. For static scanning, having sufficient overlap across multiple setups allows consistency to be checked between setups. Having check points collected using an independent method of measurement provides stronger QA and may also provide a route to correcting errors in the point cloud data, particularly where vegetation is an issue.
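To illustrate the check-point idea, here is a minimal sketch in Python (using NumPy and SciPy) that compares independently measured check points against nearby cloud points. The search radius and the 15mm tolerance in the usage comment are illustrative assumptions, not figures from the Dublin specification or any other standard.

```python
import numpy as np
from scipy.spatial import cKDTree

def check_point_residuals(cloud_xyz, checks_xyz, plan_radius=0.05):
    """Vertical residuals between independent check points and a point cloud.

    cloud_xyz: (N, 3) array of cloud points; checks_xyz: (M, 3) array of
    check points measured by an independent method. For each check point,
    cloud points within plan_radius metres in plan are averaged and the
    levels compared. Returns residuals (check minus cloud) in metres; NaN
    marks check points with no cloud coverage, itself a missing-data flag.
    """
    tree = cKDTree(cloud_xyz[:, :2])          # index the cloud by plan position
    residuals = np.full(len(checks_xyz), np.nan)
    for i, (x, y, z) in enumerate(checks_xyz):
        idx = tree.query_ball_point([x, y], r=plan_radius)
        if idx:
            residuals[i] = z - cloud_xyz[idx, 2].mean()
    return residuals

# Usage sketch: flag the dataset before model extraction begins if the
# RMSE exceeds tolerance (the 15mm figure here is purely illustrative).
# res = check_point_residuals(cloud, checks)
# rmse = float(np.sqrt(np.nanmean(res ** 2)))
# if rmse > 0.015:
#     print(f"Check-point RMSE {rmse * 1000:.1f} mm exceeds tolerance")
```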
In the Dublin specification we addressed similar issues by including a QA check list as one of the documents delivered as part of the survey, where each QA item on the list had to be ticked and signed off by the survey provider. We also included provision for independent check surveys and audit of the delivered data by the client with penalties for delivery of out of specification work.
Point cloud data typically looks very attractive, but this says little about its accuracy, or the accuracy of products derived from it. In a competitive environment where the lowest cost usually wins the work, there is pressure to reduce costs, which can have an adverse effect on quality. In this context rigorous and transparent quality control is a must. For the surveyor, this means building quality checks into each workflow, such that the best result is achieved without lost time or money. Initiatives such as OpenLSEF provide a fantastic resource in terms of developing these workflows and controls.

Figure: points taken on a motorway at a million points per second, leading to multi-billion-point models.
In addition to all this new survey collection technology, the range of products that the surveyor has to provide has increased as we move towards BIM for infrastructure. Following contact with Richard Groom, I gather there is some great work being carried out here by the Survey4BIM[6] group, who are in the process of establishing feature naming conventions for BIM for surveyors in the UK.
Ideally, I think it would be hugely beneficial to describe, with examples, how each feature listed is represented in the major data exchange formats needed by survey consumers to suit their preferred packages. While this was done to some extent in the Dublin specification, in terms of feature, string, layer and level names, along with cartographic representation, it was very much oriented to CAD.
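Purely by way of illustration, such a description might take the form of a feature-dictionary entry along the following lines. Every code, layer and level name here is invented for the example and is not drawn from the Dublin specification or Survey4BIM.

```python
# A hypothetical feature-dictionary entry: one surveyed feature described
# once, with its representation in each consumer package or format spelled
# out. Every code, layer and level name below is invented for illustration.
KERB_TOP = {
    "name": "Kerb (top)",
    "field_code": "KT",            # code keyed by the surveyor in the field
    "geometry": "linear",          # discrete point / linear / surface
    "autocad": {"layer": "TOPO-KERB-TOP", "colour": 3, "linetype": "Continuous"},
    "microstation": {"level": "Kerb_Top", "weight": 1},
    "landxml": {"element": "PlanFeature", "name": "KerbTop"},
    "ifc": None,   # no settled mapping yet; see the buildingSMART work below
}
```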
BIM
In the context of BIM, we additionally need descriptions of how discrete features, linear features and surfaces are represented in openly published formats such as IFC for BIM and LandXML for engineering packages. We also need metadata: who created the data, when, using which equipment, and supported by which QA documents. As an illustration of why this is needed, a question I’ve been asked on more than one occasion in recent years by SCC users is ‘how do we export our scanned road survey to REVIT?’
So I duly put together a short video and wrote an article[7] explaining how to do this, which at face value solved the problem. Except that it didn’t really: while the pictures on screen looked good, there isn’t actually a well-defined standard for representing a road, with all its attendant features and attributes, in IFC that can be taken into packages such as REVIT. This is now changing, with buildingSMART International recently starting a Road project[8] to address it, and similar projects further progressed for rail and bridges.
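Returning to the metadata requirement above: even a very simple, structured record attached to each deliverable would answer the who, when, equipment and QA questions. The sketch below (in Python, with all values hypothetical) shows one possible shape for such a record; it does not follow any published schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class SurveyMetadata:
    """Illustrative metadata record for a survey deliverable; the fields
    follow the who/when/equipment/QA questions raised above and are not
    taken from any published schema."""
    surveyor: str                  # who created the data
    organisation: str
    survey_date: date              # when it was captured
    equipment: list[str]           # which instruments were used
    coordinate_system: str
    qa_documents: list[str] = field(default_factory=list)  # supporting QA

record = SurveyMetadata(
    surveyor="J. Bloggs",                     # hypothetical
    organisation="Example Surveys Ltd",       # hypothetical
    survey_date=date(2019, 5, 14),
    equipment=["terrestrial scanner", "GNSS rover"],
    coordinate_system="Irish Transverse Mercator (ITM)",
    qa_documents=["control_report.pdf", "qa_checklist.pdf"],
)
```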
It is worth noting that many survey and engineering firms have already solved many, if not most, of these issues internally, and I’ve seen some fantastic results delivered from scan and UAV data in recent years. I’m firmly of the opinion that getting the best from this technology will come through open standards and shared experience, and I would strongly recommend participating in the groups listed below.
[1] http://www.atlas-files.com/WorkingGroup/QBN%20Spec.pdf
[2] https://beta.openlsef.org/
[3] https://laserscanningforum.com
[4] http://www.trb.org/Main/Blurbs/169111.aspx
[6] https://survey4bim.wordpress.com/
[7] https://www.linkedin.com/pulse/scan-bim-ground-models-scc-r12-shane-maclaughlin/
[8] https://www.buildingsmart.org/your-industry-needs-you/