
Barry has been mapping and measuring the planet for 45 years. For over 28 years, he’s been building what we now call Visual Asset Management or Digital Twins.
Introduction
The geospatial world is undergoing a transformation. As sensors become more precise, point clouds more detailed, and 360° imagery more immersive, we find ourselves surrounded by spatial data of unprecedented richness. But amid all this technological progress lies a fundamental challenge: most of this data remains inaccessible, underused, and misunderstood by those who need it most.
In this deeply thoughtful essay, Barry Bassnett—a veteran of nearly five decades in surveying and photogrammetry—argues that findability, not just accuracy or resolution, is the true differentiator in modern geospatial workflows. He explores the limitations of conventional point clouds, the untapped power of visual twins, and the promise of open standards like USD. Most critically, he urges us to democratize spatial understanding by making our data more visual, more intuitive, and radically more accessible.
The Power of Spatial Understanding: Making Data Accessible to Everyone
“Surveying isn’t just about surveyors, right? It’s about the power of spatial understanding for everyone.” I recently shared this thought on LinkedIn, and the enthusiastic response reinforced my belief: emerging technologies are democratising 3D spatial data, making it useful far beyond geospatial specialists.
Too often, we’ve reduced rich, multi-dimensional data to flat CAD lines. While accurate, these formats can be inaccessible to stakeholders like project managers, planners, environmental consultants, conservators, or even the general public—people who need spatial understanding but not complex software. So why aren’t we delivering intuitive, visual interfaces that make sense to them?
This is where modern visual platforms stand out.
The Atlas Problem: Why Spatial Data Gets Lost
I’ve often used this analogy: “A point cloud is like a beautifully bound atlas. It looks great, but without an index, it remains largely useless.” Similarly, a spatial data platform that lacks the ability to tag and categorise information is equally limited. That limitation is precisely the challenge of findability.

This is my core frustration with so much of the fantastic data we capture. We get these incredibly detailed point clouds, a visual feast! But then you’re left staring at millions of points, trying to work out where the information you need is located.
The problem, in my view, is twofold, and it plagues even the experts:
- Raw point clouds are hard to navigate. Identifying a single pipe or beam is time-consuming, even for specialists.
- Meshes are harder to tag. They look great but form a continuous surface, making it difficult to segment and label distinct components.
The solution? Tagging systems that empower users to create custom queries and context-aware indexes.
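To make the idea concrete, here is a minimal sketch of such a tagging index in Python. It is purely illustrative: the `TaggedRegion` and `TagIndex` names are my own inventions, not any platform’s real API, and a production system would back this with a spatial database rather than an in-memory list.

```python
from dataclasses import dataclass

@dataclass
class TaggedRegion:
    """An illustrative tag attached to a region of a point cloud or mesh."""
    tag: str              # e.g. "Pipe - Storm Drain"
    centroid: tuple       # (x, y, z) in project coordinates
    point_indices: list   # indices into the raw point cloud

class TagIndex:
    """A minimal 'index for the atlas': tags mapped to spatial regions."""

    def __init__(self):
        self._entries = []

    def add(self, region: TaggedRegion) -> None:
        self._entries.append(region)

    def find(self, query: str) -> list:
        """Case-insensitive substring query over tag names."""
        q = query.lower()
        return [e for e in self._entries if q in e.tag.lower()]

# Tag once, then any stakeholder can query without touching raw points.
index = TagIndex()
index.add(TaggedRegion("Pipe - Storm Drain", (12.4, 3.1, -1.8), [10452, 10453]))
index.add(TaggedRegion("Fire Safety Point", (5.0, 2.2, 1.1), [99120]))
print([e.centroid for e in index.find("storm drain")])
```

The point of the sketch is the separation of concerns: the millions of raw points stay untouched, while the lightweight index is what users actually search.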
Beyond Points: Visual Twins for Real-World Insight
This leads us to a fundamental question: Why do you need a point cloud for every application? Are we designing essential workflows, or are we simply defaulting to the highest precision because it’s what we’re accustomed to producing?
Many applications don’t need centimetre-level accuracy. Visual context, such as 360° imagery or Gaussian splats, can be far more useful and powerful.
Point clouds provide geometry. Visual data adds usability. Combine both, and you get the holistic visual twin.
Scan a building with mobile lidar, capture high-resolution 360° panoramas, and link the two: click a 3D point and jump straight into the corresponding 360° image. This fusion of accuracy and immersion supports BIM, facilities management, environmental monitoring, and heritage conservation alike.
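As a sketch of the geometry behind that “click a point, jump into the image” link: assuming a level equirectangular 360° panorama (a simplification; real rigs also need the camera’s recorded orientation), the mapping from a 3D point to a panorama pixel is just two angles.

```python
import math

def point_to_panorama_pixel(point, cam_pos, width, height):
    """Map a 3D point to (u, v) pixel coords in an equirectangular 360 image.

    Assumes the panorama was captured level (no roll or pitch) with its seam
    aligned to the project +X axis; real rigs need the camera's orientation too.
    """
    dx = point[0] - cam_pos[0]
    dy = point[1] - cam_pos[1]
    dz = point[2] - cam_pos[2]
    r = math.sqrt(dx * dx + dy * dy + dz * dz)
    yaw = math.atan2(dy, dx)      # angle around the vertical axis
    pitch = math.asin(dz / r)     # elevation above the horizon
    u = (yaw / (2 * math.pi) + 0.5) * width
    v = (0.5 - pitch / math.pi) * height
    return u, v

# A clicked 3D point, seen from a panorama captured at (12.0, 5.0, 1.6):
print(point_to_panorama_pixel((14.0, 6.5, 2.1), (12.0, 5.0, 1.6), 8192, 4096))
```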
The required accuracy depends on the application. A facilities manager might care more about visual context and maintenance logs, while a conservator needs both precision and visual history. Intelligent visual twins allow for diverse data captures, each “fit-for-purpose.”
A truly comprehensive visual twin becomes a central hub for all relevant project information. This means linking in:
- CAD drawings: The original design specifications.
- GIS layers: Land ownership, environmental data, utility networks.
- Legacy plans and blueprints: Historical context for older structures.
- Maintenance logs and reports: Records of past repairs, scheduled inspections, and asset warranties.
- IoT sensor data: Real-time information on temperature, humidity, and structural stress.
- Inspection notes and photos: Detailed records from field visits.
- Condition records or conservation reports: For heritage sites, crucial documents that record findings or interventions.
- Geotagged imagery: Photos or videos embedded with precise location (where), capture time (when), and descriptive information (what) about the scene or objects within it. A geotagged photo can pinpoint the exact position of an anomaly, show its condition at a specific moment, and carry notes or audio descriptions from the inspector. This elevates simple media into highly valuable, searchable, spatially referenced data points within the visual twin, dramatically boosting findability.
The goal is to move beyond simply generating data to creating a live, interactive data ecosystem. The “digital twin” isn’t just a static 3D model; it’s a dynamic, searchable resource where every piece of information is spatially referenced and immediately accessible through an intuitive interface, ensuring high findability.
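A minimal sketch of what one node in such a hub might look like, assuming a simple Python data model (the `AssetRecord` name and its fields are illustrative, not any product’s real schema):

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class AssetRecord:
    """One node in a hypothetical visual-twin hub: a spatially referenced
    asset with heterogeneous attachments, each findable from the 3D scene."""
    asset_id: str
    location: tuple          # (x, y, z) in project coordinates
    attachments: dict = field(default_factory=dict)

    def attach(self, kind: str, item: Any) -> None:
        self.attachments.setdefault(kind, []).append(item)

# All of these invented records hang off one spatial anchor point.
pump = AssetRecord("pump-07", (18.2, 4.4, 0.0))
pump.attach("maintenance_log", {"date": "2024-11-03", "work": "Seal replaced"})
pump.attach("iot_feed", {"sensor": "vibration", "unit": "mm/s"})
pump.attach("geotagged_photo", {"file": "IMG_0042.jpg",
                                "when": "2025-02-14T10:31:00Z",
                                "what": "Corrosion on inlet flange"})
```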
Building the Index: Tags Across Data Types
This is where my idea of user-defined indexing shines, especially with integrated data. It fundamentally shifts the power of querying and finding information directly into the hands of the user, whether they are a seasoned surveyor or someone new to spatial data, making the data inherently more findable.
User-defined tags are a game-changer. With modern platforms, you can:
- Tag “Fire Safety Point” or “Pipe – Storm Drain” in both the point cloud and the 360° imagery
- Link a 360° viewpoint to a pump’s maintenance log
- Track decay on a stone arch over time using geotagged imagery
Rich metadata elevates photos and videos into spatially referenced, searchable assets. The old file-folder system is obsolete.
This empowers users to build custom queries and role-specific views derived from a shared spatial canvas. Tags let each stakeholder extract just the information relevant to them.
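Here is an illustrative sketch of how role-specific views can fall straight out of a shared tag set; the roles and tags below are hypothetical examples, not a fixed vocabulary:

```python
# Hypothetical role-based views over one shared set of tagged features.
ROLE_TAGS = {
    "fire_officer": {"Fire Safety Point", "Fire Door", "Dry Riser"},
    "drainage_engineer": {"Pipe - Storm Drain", "Manhole Cover"},
}

features = [
    {"tag": "Fire Safety Point", "pos": (5.0, 2.2, 1.1)},
    {"tag": "Pipe - Storm Drain", "pos": (12.4, 3.1, -1.8)},
    {"tag": "Fire Door", "pos": (7.7, 0.0, 1.0)},
]

def view_for(role: str) -> list:
    """Return only the features relevant to one stakeholder's role."""
    wanted = ROLE_TAGS.get(role, set())
    return [f for f in features if f["tag"] in wanted]

print([f["tag"] for f in view_for("fire_officer")])  # the fire crew's layer
```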
The Golden Thread: A Critical Application
After tragedies like the Grenfell Tower fire, it’s clear: we need a “Golden Thread of Information”: a continuous, accessible record of design, maintenance, and materials. It’s about ensuring absolute findability of vital safety data.
A visual twin, with tags and timestamps, gives fire crews and building managers immediate access to critical data:
- Where a material is
- When it was installed
- What it’s made of and how it’s performing
This isn’t just efficient—it saves lives.
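As an illustration of how tagged, timestamped records reduce that where/when/what question to a single query (the records below are invented examples, not real building data):

```python
from datetime import date

# Hypothetical golden-thread records: every material spatially referenced
# and timestamped, so "where / when / what" is one query, not a file hunt.
materials = [
    {"what": "Cladding panel", "where": "elevation N, bay 3",
     "installed": date(2014, 6, 12), "fire_rating": "unclassified"},
    {"what": "Mineral wool insulation", "where": "elevation S, bay 1",
     "installed": date(2019, 3, 2), "fire_rating": "A1"},
]

def find_material(name_fragment: str) -> list:
    """Find every record whose description contains the given fragment."""
    frag = name_fragment.lower()
    return [m for m in materials if frag in m["what"].lower()]

for m in find_material("cladding"):
    print(m["where"], m["installed"], m["fire_rating"])
```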
“Wait a minute, couldn’t we just use GIS for that?”
That’s a question I often hear, and it’s a fair one. GIS is powerful, but here’s the rub: the very power of traditional GIS often creates what I call a “skill silo.” Most stakeholders don’t have GIS training. They need intuitive visual tools, not complex analysis suites.
This is where the new generation of platforms differentiates itself. Next-gen platforms build on GIS power but present it through user-friendly, visual, and searchable experiences. They make spatial data universally accessible, driving towards optimal findability.
The Role of AI: Breaking Down Barriers of Taxonomy and Language
AI is rapidly evolving to automatically identify, categorise, and classify features within vast point clouds and continuous streams of 360° imagery. This means AI can help create a foundational taxonomy that isn’t reliant on manual tagging of every single element, significantly enhancing findability.
What one local council calls a “manhole cover,” another might refer to as a “utility access point,” and a foreign contractor might use a completely different term. AI, once trained, can recognise these variations and unify them under a consistent, understandable tag, creating the vital “index” for our digital atlases and thereby improving their findability.
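In practice an AI model would learn those mappings from each organisation’s own vocabulary; the hard-coded synonym table below is only a sketch of the unification step itself:

```python
# A hypothetical synonym table standing in for a trained classifier.
CANONICAL = {
    "manhole cover": "Utility Access Point",
    "utility access point": "Utility Access Point",
    "inspection chamber lid": "Utility Access Point",
}

def unify_tag(raw_label: str) -> str:
    """Map a locally used term onto one consistent, searchable tag."""
    return CANONICAL.get(raw_label.strip().lower(), raw_label)

print(unify_tag("Manhole Cover"))           # -> "Utility Access Point"
print(unify_tag("Inspection chamber lid"))  # -> "Utility Access Point"
```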
OpenUSD: The “Fair Wind” That Could Change Everything
This brings me to Universal Scene Description (USD). If we’re talking about a genuinely open, interconnected, and accessible future for visual twins, then USD, given a fair wind of adoption and development, could fundamentally change everything.
What is USD? It’s an open-source, extensible framework developed by Pixar for robustly describing, composing, simulating, and collaborating on large-scale 3D scenes. Think of it as a universal language for 3D data, designed to facilitate efficient collaboration and interoperability across different software applications and pipelines.
Here’s why I believe it’s such a game-changer for our spatial data aspirations:
- True Interoperability: Today, transferring data from a LiDAR scanner into a 3D modelling package, then into a visualisation engine, and finally onto a web platform can be a complex and lossy process. USD aims to be the neutral ground, allowing data created in one system to be accurately read, composed, and even edited in another, breaking down proprietary silos. This inherently improves data findability across disparate systems.
- Compositional Power: USD isn’t just a file format; it’s a scene description language. This means it can compose multiple layers of data from different sources (point clouds, meshes, 360 images, CAD, GIS attributes, sensor data, AI-generated tags, and crucially, the geospatial and temporal metadata from geotagged imagery) into a single, cohesive virtual environment. Teams can work on different aspects of the visual twin simultaneously without overwriting each other’s work, all contributing to a centralised and more findable information hub.
- Scalability: It’s built to handle immense datasets, crucial for the massive point clouds and high-resolution imagery we generate today.
- Extensibility: This is key to our “tagging” vision. USD allows custom metadata and attributes to be attached to any object within the scene. Those user-defined tags – whether ‘Fire Safety Point’ or ‘Roman Arch Damage’ – can be embedded directly within the universal format itself, making them portable and searchable across any USD-compatible platform, along with the precise “where, when, and what” information from any linked photographs or video (see the sketch below), significantly boosting their findability.
- Streaming and Real-time Capabilities: For accessible web-based visual twins, efficient streaming and real-time rendering are paramount. USD is designed with these considerations in mind, making it easier to serve complex 3D environments to web browsers or mobile devices, further facilitating the findability of information.
In essence, USD and its shareable USDZ package format provide the foundational “glue” that enables our diverse data, AI-powered classifications and, crucially, our user-defined tags to coalesce into truly accessible, intelligent, and interactive visual twins. Together they make that “atlas with an index” not just innovative but universal, collaborative, and future-proof, all of which contributes to superior findability.
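For the technically curious, here is a small sketch using USD’s Python bindings (the pxr module, installable via the usd-core package) showing how a user-defined tag can be written as a custom attribute. The userProperties: namespace is a common convention rather than a requirement, and the prim path and attribute names are invented for illustration:

```python
from pxr import Usd, UsdGeom, Sdf

# Build a tiny stage and attach a user-defined tag as a custom attribute.
stage = Usd.Stage.CreateNew("visual_twin.usda")
asset = UsdGeom.Xform.Define(stage, "/Site/FireSafetyPoint_01")
prim = asset.GetPrim()

# Custom attributes travel with the prim to any USD-compatible tool.
tag_attr = prim.CreateAttribute("userProperties:tag", Sdf.ValueTypeNames.String)
tag_attr.Set("Fire Safety Point")
prim.CreateAttribute("userProperties:capturedOn",
                     Sdf.ValueTypeNames.String).Set("2025-02-14T10:31:00Z")

stage.GetRootLayer().Save()
```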
The Future of Spatial Data: Universal Access, Universal Tags – and Ultimate Findability
The most valuable geospatial data is the most usable: its worth lies in how accessible, visual, and genuinely useful we make it to everyone. As I always say, “One man’s heritage is another man’s asset management issue.” The detailed 3D capture required for a historic building is precisely the level of detail a facilities manager needs to run that building efficiently today, including tagging critical maintenance points and leveraging geotagged imagery for a complete, verifiable record. Ensuring a Golden Thread of information throughout a building’s life cycle is a paramount application of these technologies, enhancing safety and accountability through unprecedented findability.

The Future Belongs to Those Who Master Usability and Findability
As I reflect on the journey of spatial data – from raw points in analogue notebooks to comprehensive visual twins – a clear truth emerges: the future of our industry isn’t just about how we capture data, but how effectively we make it usable and accessible to everyone, ensuring robust findability. The days of complex, siloed data understood only by a few specialists are, and must be, drawing to a close.
This is why integrating diverse data (point clouds, 360° imagery, legacy documents) with AI-driven classification and intelligent tagging is so powerful.
Here’s the bottom line: the people and organisations who can truly master this skill – the art of transforming raw spatial data into intuitive, actionable, and universally accessible visual intelligence, with a focus on findability – are the ones who will thrive and truly lead in the years to come. To quote a well-worn phrase, it’s about working smarter, not just harder.
Does this mean that the key skill every surveyor will need to master in the future is photography? Not just taking pictures, but understanding how to use the camera sensor to capture rich, spatially relevant visual data that is inherently findable, in both stills and video?
I think it has to be. That, to me, is where the real profit and progress lie.
About the Author
Barry Bassnett, Liverpool, UK, June 2025
My passion now lies in coaching, teaching, and writing about photogrammetry and spatial imaging. My upcoming book, Pixel to Place – Knowledge and Image Engineering, is planned for launch at InterGeo 2025—if I can stop adding to it long enough to publish!
Images courtesy of Soarvo and RICHPiX.