UAV-Based Accident and Crime Scene Investigation


This experimental project was organized by the Royal Canadian Mounted Police (RCMP) and Pix4D to investigate a proposed UAV-based protocol for accident and crime scene investigations. Comparing the results with those of traditional methods (measuring tape, laser scanner) would demonstrate the accuracy and reliability of the reconstruction so that it could eventually be admitted as evidence in court.

Two data sets of a staged crime scene were acquired with quadcopters from Aeryon Labs (225 images) and Draganfly (212 images). The ground sampling distance was kept below 1 cm so that no details would be missed. The full flight, including pre-flight preparation, took less than thirty minutes. Eight yellow evidence markers were placed around the collision scene, indicating the locations where evidence was found.
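
As a rough illustration of what the sub-centimetre figure means in practice, the sketch below computes the ground sampling distance from camera parameters and flight height; the camera values are illustrative assumptions, not the specifications of the Aeryon Labs or Draganfly payloads.

```python
def ground_sampling_distance_cm(sensor_width_mm, image_width_px,
                                focal_length_mm, flight_height_m):
    """Ground sampling distance in centimetres per pixel."""
    return (sensor_width_mm * flight_height_m * 100.0) / (focal_length_mm * image_width_px)

# Example: a 13.2 mm wide sensor, 5472 px across, an 8.8 mm lens, flown at 30 m
gsd = ground_sampling_distance_cm(13.2, 5472, 8.8, 30.0)
print(f"GSD = {gsd:.2f} cm/px")  # about 0.82 cm/px, i.e. below the 1 cm target
```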

See the densified 3D point cloud in the embedded viewer below (HD version supported in Google Chrome, Mozilla Firefox and Safari).

In order to improve the global accuracy of the final results, several points were measured with kinematic GPS and a total station. These points were picked from the corners of the vehicles, distinctive feature objects, and the evidence markers. They were imported into the software and used as ground control points, manual tie points, or check points. In addition, a terrestrial laser scanner was set up at several locations to scan the entire scene; these scans were used for quality assessment of the UAV results.
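
A minimal sketch of how such check points can be used for quality assessment: compare the surveyed coordinates with the same points measured in the reconstructed model and report the residuals as an RMSE. The coordinates below are invented for illustration and are not values from this project.

```python
import numpy as np

# Invented example coordinates (metres); in the project they came from
# kinematic GPS / total station surveys and from the photogrammetric model.
surveyed = np.array([
    [1000.012, 2000.004, 99.503],
    [1003.541, 2001.212, 99.611],
    [1001.870, 2004.905, 99.498],
])
reconstructed = np.array([
    [1000.018, 2000.001, 99.509],
    [1003.535, 2001.219, 99.604],
    [1001.862, 2004.911, 99.506],
])

residuals = reconstructed - surveyed
rmse_per_axis = np.sqrt((residuals ** 2).mean(axis=0))   # X, Y, Z RMSE
rmse_3d = np.sqrt((residuals ** 2).sum(axis=1).mean())   # overall 3D RMSE
print("RMSE per axis (m):", rmse_per_axis)
print("3D RMSE (m):", float(rmse_3d))
```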

Pix4Dmapper’s total processing time was approximately two hours on a laptop with a Core i7 processor and 8 GB of RAM. A densified point cloud, a digital surface model (DSM) and an orthomosaic were generated.
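
For readers unfamiliar with these outputs, the sketch below shows one simple way a DSM can be rasterised from a point cloud, by keeping the highest point in each grid cell. It is only an illustration of the concept, not Pix4Dmapper's implementation.

```python
import numpy as np

def point_cloud_to_dsm(points_xyz, cell_size):
    """points_xyz: (N, 3) array of X, Y, Z; returns a 2D grid of maximum heights."""
    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    col = ((x - x.min()) / cell_size).astype(int)
    row = ((y - y.min()) / cell_size).astype(int)
    dsm = np.full((row.max() + 1, col.max() + 1), np.nan)
    for r, c, height in zip(row, col, z):
        if np.isnan(dsm[r, c]) or height > dsm[r, c]:
            dsm[r, c] = height          # a surface model keeps the highest return
    return dsm

# Example with 10,000 random points over a 10 m x 10 m area and 5 cm cells
pts = np.column_stack([np.random.rand(10_000) * 10,
                       np.random.rand(10_000) * 10,
                       np.random.rand(10_000)])
print(point_cloud_to_dsm(pts, 0.05).shape)
```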

The reconstructed measurements either match the traditional measurements exactly or agree with them to within one centimeter. Detailed comparisons of the results can be found in the White Paper.

The project results show that UAV-based solutions not only reduce field measuring work but also provide LiDAR-like accuracy with more visible detail, which can be admitted as evidence in court.




  • Dr. Michael Olsen

    This is a very interesting article and application. The point cloud viewer is impressive. While there is no question that UAV-based SFM (Structure from Motion) is an exciting technology that can achieve excellent results, I am very concerned with some of the points raised in the white paper referred to in this article that misrepresent the technology. I think that it is very important to raise these points so that people are not oversold on the technology’s capabilities only to be disappointed when actually using the technology should the results not be as good as advertised.
    First, the comparative images (white paper, page 5, middle) showing laser scanning/lidar side by side with the SFM model do not correctly show achievable lidar data. In the lidar image, it is very difficult to see the detail. However, the lidar unit used is capable of generating a point cloud much more similar to the SFM model (including photographic RGB values). I have 10+ years’ experience performing terrestrial lidar work, and even some of the first laser scanners would produce better point clouds than what is shown in the image.

    In the image, lidar data from the ground and the body are completely missing; the figure just appears to be a wide cross section of the data. The points on the ground are clearly present in the actual data, since the white paper later presents an image of measurements taken from the lidar data (page 5, bottom). This second image was not labeled as lidar data, but if you look closely the measurements in the image match those of the lidar data. Also, the scan pattern can be seen in the point cloud (SFM point clouds look very different from lidar scans, which are acquired at fixed angular increments).

    It is true that SFM can produce a more complete point cloud, and you would expect a better-looking point cloud. However, by not properly displaying the lidar data, it is hard to evaluate its capabilities. Craig Glennie at the University of Houston previously published an article in Lidar News on a similar application using lidar, with much more realistic results.

    Second, the measurement comparison is also misleading. The measurements for the lidar data were done from the point cloud, while the measurements from Pix4D were done from the images. If one did the measurements from the photographs co-acquired with the scans, the results would be improved for the lidar. If one did the measurements in the SFM point cloud, the measurements would be much poorer. (The viewer in the Lidar News article clearly shows a significant amount of fuzz and artifacts in the SFM point cloud.) Edges, which were used as the point of reference, are much more clearly defined in the imagery than in the point cloud. This is something worth educating readers about.

    Additionally, every measurement has some error associated with it and is not “exact” as the white paper claims (as do many in the lidar/SFM industries). Nobody wants error, but error is a reality with any sort of measurement; it should not be dismissed, but presented and addressed in a straightforward manner. Ideally, uncertainty estimates should be provided. This could be achieved by performing the measurements multiple times to see how repeatable they actually are (a minimal sketch of such a check follows this comment). From my experience evaluating results from both techniques, SFM tends to produce more complete models at higher resolutions, while lidar produces more accurate, cleaner point clouds with far fewer artifacts (particularly when the surface has less texture).

    Finally, a broader discussion point beyond the white paper is the fact that data from forensic investigations will likely be used in court, and there are legal implications associated with collecting and interpreting the data. Lidar offers the advantage of a strong theoretical model as well as rigorous calibration for measurements. There is still a lot of “magic” in SFM that could lead to some interesting discussions on the legal implications of accepting it for high-accuracy measurements. (One could do due diligence by using validation targets or similar to reduce this uncertainty.) I think with time the results will be more consistent, trustworthy, and understood. However, this is something worth thinking about and addressing.

    SFM clearly has many advantages, and I am continually amazed by the results achievable with the technology and software. (In fact, we just worked with SFM in one of my 3D laser scanning and imaging courses.) I in no way want to imply that the technology is not capable of great things, or that it is not a better choice than lidar for some applications. However, I think it is important to clearly present the capabilities of technologies when making a comparison so that it better represents what is actually achievable. I would encourage the authors of the white paper to consider these comments and address them.
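
A minimal sketch of the repeatability check suggested in the comment above, using invented distance values: measure the same distance several times and report the spread alongside the mean instead of quoting a single "exact" number.

```python
import statistics

# Invented repeated measurements of the same edge-to-edge distance (metres).
measurements_m = [4.312, 4.309, 4.315, 4.308, 4.314]

mean = statistics.mean(measurements_m)
stdev = statistics.stdev(measurements_m)  # sample standard deviation

print(f"{mean:.3f} m +/- {stdev:.3f} m (1 sigma, n={len(measurements_m)})")
```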

  • Dear Dr. Olsen,

    First of all, we would really like to thank you for the extremely valuable comments. They also make us reconsider the way we communicate.

    Concerning the data we have, the photogrammetric point cloud was generated by Pix4Dmapper, and the LiDAR point cloud was the only output we received from our project partners. We did not perform the terrestrial LiDAR scans ourselves, which is why the LiDAR measurements were not done with co-acquired images.

    Indeed, with proper set-up and operation of the scanner, much better results can be obtained; however, the main concerns in this project were the time and location limitations, as well as how extensively personnel would have to be trained to perform perfect LiDAR scans.

    Regarding the comparison, here are the points we would like to clarify:
    1. Time is severely limited in a real on-site case (sometimes the evidence gets washed away by rain quickly), so it would not always be possible to take scans from ideal spots that cover all the regions of interest.
    2. The scanner was placed completely outside the area enclosed by the evidence markers. In real cases, the evidence may be spread out, and it may be impossible to place scanners close to the focus area, even though doing so would give better results.
    3. For this experimental project, the scene was set up on flat ground, whereas in real cases many accidents occur in hilly areas where it might be nearly impossible to take a good scan, or where there would be even more occlusion of points.

    Similarly, photogrammetric technology also struggles to produce dense and accurate points in heavily forested areas, and we are still working hard to minimize the noise. In the LiDAR point cloud, the body was not completely missing (though a relatively large part of it was), but it does look worse in the low-resolution snapshot in the white paper than it does here on screen; however, so do the photogrammetric points.

    In the paper, we compared the measurement values from the LiDAR and Pix4Dmapper point clouds. It is true that, most of the time, LiDAR can be more accurate for distance measurement, which is also the reason why we chose the LiDAR points to compare against. In the white paper, we mentioned that the terrestrial LiDAR points were used for quality assessment of the UAV points, which shows how much we value their accuracy.
    Some of the comparisons may seem unfair to LiDAR specialists, since we are not as experienced in dealing with LiDAR data. When people perform measurements in a LiDAR point cloud, do they always need to annotate the co-acquired images as well? It would be great to hear more from you about processing and measurement with LiDAR data. Much of the time we compare the two point clouds based solely on the point cloud files and our software (which are the resources we have), but we would really like to know how we could get better results with LiDAR, so that we can provide more neutral comparisons in our marketing materials.

    The measurement was done only once in Pix4Dmapper, but with all related images annotated in order to get the best result. We did try to make the measurement as accurate as possible (by annotating almost all the images on which the points are visible), but we did not try to mislead our users. The measurement could have differed by one or two centimeters, but that would still be considered within an acceptable range.
    It is true that the LiDAR points look cleaner, which readers can also clearly see in the paper. The main focus of the paper was not to exclude LiDAR technology and the results it generates; it was to propose a new method that may better fit current forensic needs. We consider it a trade-off rather than a claim that one technology is better than the other in all respects. In some cases LiDAR may be more appropriate, while in others a UAV works better. The police will always have the option to choose whichever fits best in each individual case.

    The last thing we would like to mention is that, although it is still noisier than the LiDAR data, the photogrammetric point cloud displayed on the website does not fully represent the point cloud that was generated. In some cases the number of points may have been reduced, and the way the cloud is displayed can also play an important role in the visual impression.

    Please feel free to let us know if you have any questions.
    Thanks again for kindly sharing your thoughts; we are always looking for feedback that helps us improve!
