Turning Hidden Risk into Visible Data: Mapping Underground Infrastructure for Cities

Inspired by lessons from New York City’s 9/11 recovery, the MUDDI standard developed through OGC collaboration is helping cities better map and manage underground infrastructure.

When the World Trade Center collapsed on September 11, 2001, the devastation above ground was visible to all, but an equally perilous crisis was unfolding beneath the surface. Fires raged deep underground, threatening critical utilities and transportation tunnels, and making it dangerous for firefighters and recovery crews to even approach. Hidden below the debris lay a 200,000-pound tank of liquified freon, part of the towers’ air conditioning system. If the heat reached it, the gas could have exploded or released deadly phosgene fumes. Yet no one had a complete, accurate picture of what lay beneath the site. Each agency and utility held fragments of incompatible data, and it took more than ten days to piece together a coherent underground map of the damaged infrastructure. When the freon tank was finally located, firefighters were able to douse the surrounding area, averting another catastrophe in a city already in crisis.

The experience also exposed a fundamental challenge that cities around the world continue to face: fragmented and incomplete information about the infrastructure hidden beneath their streets.

That harrowing crisis revealed a truth that still resonates: cities cannot plan, build, or recover effectively if they lack a clear understanding of their subsurface environment. For urban planners, the networks of pipes, cables, tunnels, and geological layers hidden below ground are as vital to resilience and sustainability as the infrastructure visible above it.

From crisis to collaboration

In the mid-1990s, New York City began developing a photogrammetric basemap that would serve as the foundation for all municipal geospatial data. The Department of Environmental Protection and other agencies used it to build compatible layers for water, sewer, and surface infrastructure. When 9/11 occurred, the city had a partially functioning enterprise GIS, but much of its underground infrastructure was still unmapped.

In the years that followed, planners, engineers, and geospatial experts worked intensively to close the data gaps that had hindered the city’s emergency response and complicated construction, excavation, and maintenance. While the city’s enterprise GIS expanded to include more than 1,000 layers, progress on underground utilities lagged. Private utilities digitized their networks independently, without adopting the city’s basemap or shared standards. When Hurricane Sandy hit in 2012, storm surges flooded coastal areas and caused tens of billions of dollars in damage—some of which might have been mitigated with better underground data and coordination.

Building the framework: MUDDI

More than two decades later, the lessons from New York have helped shape a new generation of geospatial data standards applied worldwide, from U.S. cities to national initiatives. These include the American Society of Civil Engineers (ASCE) standards for underground infrastructure (ASCE 38 and 75); the European INSPIRE data models, which serve as the basis for underground utility mapping systems in Flanders, Denmark, Scotland, and the Netherlands; and the Open Geospatial Consortium (OGC) Model for Underground Data Definition and Integration (MUDDI), which serves as the basis for the United Kingdom’s recently initiated National Underground Asset Register (NUAR).

Conceived by experts from New York City and the U.S., the U.K., Singapore, Belgium, Canada, and Denmark, MUDDI builds on best practices from established and proven geospatial utility data models, including ASCE 38 (Subsurface Utility Engineering – SUE), ASCE 75 (As-Built), the European Commission’s INSPIRE Utility and Government Services data specification, as well as the Network Utility Application Domain Extension for OGC’s CityGML standard. Together, these provide a foundation for accurate 3D underground mapping, enabling the geometry, attributes, and relationships between subsurface elements to be integrated across utility networks.

The goal of MUDDI is to create a common language and structure for underground data—making it possible to align utilities, geology, and surface infrastructure across jurisdictions. Over time, it has evolved into an overarching framework capable of improving compatibility among national and regional models and supporting a wide variety of use cases, from construction coordination to emergency management.
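To make “a common language and structure” concrete, here is a deliberately simplified sketch of what integrating geometry, attributes, and provenance implies. Every class and field name below is a hypothetical illustration, not the normative MUDDI schema (see OGC 23-024 for that):

```python
from dataclasses import dataclass

# Illustrative sketch only -- hypothetical field names, not the MUDDI schema.
@dataclass
class UndergroundFeature:
    feature_id: str
    network: str                # "water", "electric", "telecom", ...
    owner: str                  # provenance: who supplied/maintains the record
    geometry_wkt: str           # shared 3D geometry, e.g. "LINESTRING Z (...)"
    vertical_accuracy_m: float  # quality metadata travels with the feature

# Records from different utilities can live in one structure because they
# share geometry, attribute, and provenance conventions.
features = [
    UndergroundFeature("w-001", "water", "City DEP",
                       "LINESTRING Z (0 0 -2, 5 0 -2)", 0.3),
    UndergroundFeature("e-417", "electric", "Utility Co",
                       "LINESTRING Z (0 1 -1, 5 1 -1)", 0.5),
]
by_network: dict[str, list[UndergroundFeature]] = {}
for f in features:
    by_network.setdefault(f.network, []).append(f)
```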

New York City’s next chapter

Even after 9/11 and Hurricane Sandy, New York City hesitated to launch a city-wide integration program. Concerns over data security, costs, liability, and loss of control slowed collaboration between private utilities and city agencies. But global progress, including the launch of the UK’s NUAR, helped demonstrate what was possible.

In November 2025, New York City announced the 3D Underground (3DU) initiative—a $10 million program funded through U.S. Department of Housing and Urban Development disaster recovery grants—to develop a secure, shared 3D model of the city’s underground utilities and geology. The project brings together city agencies, utilities, and the State Public Service Commission, with Columbia University digitizing more than 20,000 boring records to model the city’s geology, recognizing that underground utilities and geology are deeply intertwined.

This marks a significant shift for the city, moving from fragmented utility datasets toward a shared, interoperable framework inspired by global best practices.

Why underground data matters for planning

For planners, MUDDI opens new ways to manage the “city beneath the city.” Traditionally, each utility or public agency maintained its own underground records, often incomplete, outdated, and incompatible. This lack of coordination led to costly excavation strikes, construction delays, and dangerous uncertainty in emergencies.

By providing a shared framework, MUDDI allows underground data to be assembled and visualized in 2D as well as 3D across systems. For planners, this means they can:

  • Coordinate development by understanding where existing infrastructure lies and where capacity exists for growth.
  • Improve capital project design by identifying potential utility conflicts before construction begins.
  • Reduce accidental utility strikes, especially those involving fuel or electric transmission lines that can cause fires or explosions.
  • Support disaster planning and response by ensuring emergency managers have rapid access to accurate, consolidated underground data.
  • Link with digital twins and Building Information Modelling (BIM) to create an integrated view of the built and natural environment.

For example, a city planning a new transit line or rezoning a densely built-up corridor can use MUDDI-based data to assess the entire subsurface before breaking ground—preventing a multimillion-dollar construction project from being halted by an unmapped fiber-optic cable.

More comprehensive digital maps also enable AI-powered analysis to identify maintenance needs and optimize construction coordination. Members of the OGC MUDDI Standards Working Group estimate that improved underground data could reduce capital and maintenance expenditures by at least five percent—saving billions across U.S. cities.

The MUDDI Environmental Subcommittee is exploring how surface flooding interacts with underground systems. By mapping how stormwater enters and damages basements, tunnels, and utility pipes and conduits, planners can better anticipate risks and design more resilient infrastructure—whether in New York, or in flood-prone regions like Asheville, North Carolina, and Kerr County, Texas.

New York and beyond

While the U.S. does not yet have a comprehensive underground utility mapping plan, the UK’s NUAR offers an important global case study. Led by the Department for Science, Innovation and Technology, and operated as a service for public and private sector users by Ordnance Survey (Great Britain), NUAR applies MUDDI’s principles at national scale, aggregating data from hundreds of Underground Asset Owners to create a secure, standardized digital map of the UK’s buried infrastructure. The UK estimates that NUAR will save $4.5 billion over a decade through reduced utility strikes and improved coordination, while also providing instant access to data and reducing the number of organizations that must be contacted for utility locates. NUAR has cut the average time needed to obtain underground asset records from six days to six seconds. It shows how standards like MUDDI can move from concept to operation, translating interoperability and transparency into measurable public value.

One standard, many benefits

MUDDI’s strength lies in its flexibility and in its ambition to harmonize underground data models into a family of compatible standards. It does not replace local or national systems—it connects them. A city like New York can build a detailed map of utilities within its borders, while a program like NUAR can operate at a national scale, with both able to exchange their required data seamlessly.

As compatible layers are built across jurisdictions, U.S. regions will eventually be able to link underground networks across shared city and state boundaries. The process may take time, but we’ll get there one city, one county, one state, and one tribal nation at a time.

The way forward

For urban planners, the ground beneath our feet represents both a challenge and an opportunity. Managing it well requires seeing the city in full, above and below the surface. The MUDDI model, and projects like NUAR that build on it, demonstrate the value of shared data standards in creating safer, more resilient, and more sustainable cities.

As underground mapping expands across the U.S., a broader vision comes into view. The same standards that connect pipes and cables below ground can also link to features above it—streets, trees, traffic systems, and buildings—offering planners a unified picture of how the built and natural environments interact. This integration, supported by advances in AI and digital twins, is opening new frontiers in planning analysis and decision-making.

As populations grow and infrastructure ages, cities that can understand and manage their underground assets will be better equipped to plan confidently for the future. MUDDI’s vision, born from the lessons of 9/11 and carried forward through global collaboration, shows that even the parts of a city we cannot see can be planned, managed, and protected.

About the authors

Alan Leidner is a geospatial consultant, NYC GISMO President Emeritus, and OGC Liaison. He holds an MS in Urban Planning from Pratt Institute and worked for the NYC Department of City Planning for ten years. He led New York City’s emergency mapping efforts following 9/11 and currently serves on the U.S. National Geospatial Advisory Committee.

Carsten Rönsdorf is the Product Manager for the National Underground Asset Register at Ordnance Survey, where he has led international collaborations on underground data management and serves as co-chair of OGC’s MUDDI Standards Working Group.

Acknowledgment

The authors acknowledge the contributions of the OGC MUDDI Standard Working Group and the editors of the MUDDI Standard (OGC 23-024) including: Alan Leidner (NYC GISMO), Wendy Dorf (NYC GISMO), Andrew Hughes (British Geological Survey), Carsten Rönsdorf (Ordnance Survey Great Britain), Neil Brammall (UK Government Digital Services), Phil Meis (UMS), Dan Colby (UMS), Liesbeth Rombouts (Flemish Information Agency), Dean Hintz (Safe Software), and Joshua Lieberman (Open Geospatial Consortium). The development of MUDDI was supported by an international network of experts from New York City, the United Kingdom, Singapore, Belgium, Canada, Denmark, and other participating nations. Special thanks to Mark Reichardt, former CEO of OGC, and George Percivall, former CTO of OGC, who were instrumental in initiating the MUDDI project. Also, thanks to Mary McCormick and the Fund for the City of New York for providing initial funding for the MUDDI initiative. Additionally, thanks to the American Society of Civil Engineers (ASCE) for the development of the SUE and As-Built standards.

For more information

See the OGC MUDDI Standard (Document 23-024):
https://docs.ogc.org/is/23-024/23-024.html

Five Signs You’re Ready to Play a Bigger Role in the Geospatial Community

Most geospatial professionals begin their careers focused on technology—learning tools, building applications, and solving technical problems. Over time, however, something begins to shift. The questions we ask become broader. Instead of focusing only on individual tools or datasets, we begin noticing how systems connect, how ideas move across communities, and how the broader geospatial ecosystem evolves.

Early in my career, I was focused mainly on the technologies and projects directly in front of me. But after working with different communities across regions and sectors, I began noticing a pattern: the most meaningful progress rarely came from isolated efforts. It happened when diverse groups came together to solve shared challenges. For many professionals, that moment marks the start of a new phase in their careers.

In my experience, there are a few signs that this shift is beginning to happen. If you recognize some of these in your own career, you may already be moving toward a broader role in the geospatial community:

1. You Start Looking Beyond Your Own Tools and Projects

Early in a geospatial career, mastering tools and workflows feels like the primary challenge. With experience, curiosity tends to expand. You start exploring how different technologies, datasets, APIs, and data models interact across systems.

Many of the most interesting challenges in geospatial systems do not sit within a single tool or application. They appear where data, infrastructure, and organizations intersect. And when that realization happens, technical skills remain important, but understanding how things connect across the ecosystem becomes just as valuable.

2. You Want to Learn from Peers Outside Your Immediate Circle

Many practitioners begin their work within a specific organization, project, or technology ecosystem. Over time, however, the value of exchanging ideas with peers from other domains becomes clearer.

Conversations with developers, engineers, researchers, public sector practitioners, and industry leaders often reveal perspectives that do not emerge within a single organizational context. Many professionals discover that some of their most valuable insights, and often their next opportunities, come through these interactions. Engaging with a wider network of peers helps expand how you see the field and how different communities approach shared challenges.

3. You Become More Skeptical of Technology Hype

Like much of the technology sector, the geospatial field regularly experiences waves of new tools, platforms, and terminology. Some developments prove transformative, while others fade over time.

With experience, many professionals begin to look beyond the excitement around the latest technology and focus on what will actually last. Conversations that explore how systems evolve over time, how data infrastructures are designed, how standards enable interoperability, and how solutions work in practice become more valuable than following the newest trend.

4. You Start Paying Closer Attention to Real-World Users

Geospatial technologies increasingly support decisions that affect society—from climate monitoring and environmental management to infrastructure planning and disaster response. As professionals gain experience, many begin to look beyond the technical design of systems and focus more on how these technologies are actually used in operational environments.

Questions about real-world needs start to matter more. Who is using the data? How reliable does the system need to be? How do tools perform under real constraints? Understanding these realities helps professionals design solutions that work not just in theory, but in practice.

5. You Want Your Ideas to Influence the Broader Ecosystem

At some point, contributing to individual projects may no longer feel sufficient. Many professionals begin looking for opportunities to participate in conversations that shape how the geospatial ecosystem evolves.

This often means engaging with communities where developers, researchers, companies, and public-sector organizations work together to solve shared challenges.

Organizations such as the Open Geospatial Consortium (OGC) help create neutral spaces for collaboration where professionals can exchange ideas, learn from peers across sectors, and collectively shape how geospatial technologies evolve.

For many professionals, engaging with such communities becomes a natural next step—from simply using geospatial technology to helping shape how it evolves.

OGC Individual Membership Is Now Live

We’re pleased to share that OGC Individual Membership is officially live.

This new pathway opens direct participation in OGC’s working groups, code sprints, meetings, and collaboration inside Agora — our exclusive member platform. It’s designed for engineers, developers, systems engineers, researchers, consultants, professionals, and students who want to engage personally in the work shaping interoperable systems worldwide.

To encourage early participation, we’re offering 25% off through April 30 with promo code OGC25.

Learn more about why we launched Individual Membership — and what it means for the community — in Richard Estephan’s blog.

 

Join here: https://www.ogc.org/membership/individual/

 

We look forward to welcoming new voices and perspectives into the work.

At the Philadelphia Member Meeting, We Opened the Door Wider

Today, at the Philadelphia Member Meeting, we officially launched OGC Individual Membership.

I’ve truly been looking forward to this moment.

OGC has always been powered by remarkable professionals — engineers, system architects, researchers, and developers from around the world who participate through our organizational members. The depth of expertise and global character of this community are among our greatest strengths.

Individual Membership builds on that foundation.

It creates a more direct pathway for practitioners — including independent developers, consultants, professionals, and students — to participate directly in working groups, code sprints, testbeds, and ongoing collaboration inside Agora, OGC’s exclusive member collaboration platform.

For me, this is about connection as much as participation. Interoperability improves when more implementers share what they’re seeing, more builders surface edge cases, and more perspectives help shape the direction forward.

It also gives us an opportunity to strengthen our geographic reach.

OGC has long been global, but participation is stronger in some regions than others. For example, there is tremendous talent across Latin America, Africa, and Southeast Asia, and we want more of those voices shaping this community. Broader geographic participation makes the work more resilient and more relevant.

Alongside Individual Membership, we introduced the Industry Builders Sponsorship, providing organizations with a simple way to sponsor developers or university cohorts and help widen access even further.

This launch isn’t about changing who we are. It’s about widening the circle — technically and geographically. More practitioners. More perspectives. More places represented in the room.

If you’re ready to join as an Individual Member, you can learn more and sign up here.

To encourage early participation, we are offering 25% off Individual Membership through April 30 using promo code OGC25.

If your organization would like to sponsor developers or university cohorts through the Industry Builders Sponsorship, please contact me at re*******@*gc.org to discuss how to get involved.

I’m excited to see who joins us next — and from where.

From chat to map: How DGGS + Agentic AI turn geodata into verifiable decision-making bases

Context: Why “GeoAI” fails without standards

When we say “AI + geospatial” today, many people first think of a chatbot that can “describe” maps. In practice, however, real-world applications rarely fail due to a lack of models – but rather due to a lack of interoperability, traceability, and machine readability:

  • Data is distributed across different services and formats.
  • Queries are often not “AI-enabled” (too little metadata, too few guardrails, unclear costs/granularity).
  • Results are difficult to reproduce or audit – especially in crisis situations where trust, provenance, access control, and context are crucial.

The OGC AI-DGGS pilot project for disaster management addressed precisely this issue: not “AI as a demo,” but AI as an orchestrator for interoperable geoservices – with a clear focus on standards, implementability, and real-world frictions.

What we built and demonstrated in the pilot

In the pilot, we used DGGS (Discrete Global Grid Systems) as a common “spatial language” to consistently reference and analyze heterogeneous disaster data.

Core idea

DGGS cells are to geospatial AI what tokens are to language: stable, hierarchical, machine-readable. This facilitates aggregation, multi-resolution analyses, and the interaction of different data sources.
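To make the “cells as tokens” analogy concrete, here is a minimal sketch using the h3-py (v4) bindings – an illustration under that assumption, not code from the pilot:

```python
# pip install h3   (this sketch assumes the h3-py v4 API)
import h3

# Index a point at resolution 9: a stable, machine-readable identifier.
cell = h3.latlng_to_cell(50.94, 6.96, 9)

# The hierarchy gives the same location coarser and finer addresses, which
# is what makes multi-resolution aggregation across sources straightforward.
parent = h3.cell_to_parent(cell, 5)        # coarse zone for aggregation
children = h3.cell_to_children(cell, 10)   # 7 finer cells (aperture 7)

print(cell, parent, len(children))
```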

 

[Figure: Hierarchical grid for Earth data analysis]

 

Architecture in a sentence

We linked several independent DGGS data servers with several independent AI clients in such a way that, together, they behave like a single interoperable analysis engine – supplemented by a Common Operating Picture (COP) layer for context, trust, and sharing of “what applies, when, to whom.”

What ran together interoperably (high level)

  • Multiple DGGS/DGGRS implementations (including H3, A5, and various ISEA variants)
  • Multiple server implementations (different stacks/providers)
  • Multiple AI clients/agent-based workflows (tool-based queries instead of “hallucinated coordinates”)
  • Further development of COP/OWS Context as a transfer mechanism for situation assessment plus security/trust/provenance

Why is this important?

Because it shows that “AI reasoning” in geospatial does not primarily scale through model training – but through standardized, tool-like interfaces, machine-readable metadata, and reproducible service chains.

The most important findings: Four frictions that we need to address specifically

  1. Geometric alignment friction: datum/model differences (H3 authalic sphere vs. WGS84 ellipsoid)
    A key practical problem was the geometric alignment between widely used systems, especially when a DGGS world built on authalic/spherical assumptions meets WGS84/ellipsoidal expectations. In practice, this means that if server responses do not clearly describe the underlying parameters, clients have to “calculate backwards” – and that is where it gets dangerous.
    Takeaway: We need clear best practices and unambiguous parameterization so that “equally named” really means “equal.”
  2. Performance/sparsity friction: High-resolution EO data is often “sparse”
    Disaster workflows often use high-resolution sensor data – and this data is sparse: large areas without measurements, gaps in space and time, different pass geometries. This is not an edge case; it is the norm. In practice, the picture looks like this:

    An agentic workflow does not execute just a single query; it works iteratively. It first requests flood extent data, then population data, then infrastructure data, then higher-resolution data, and finally data from several different points in time. This quickly adds up to hundreds or thousands of API calls, often in small tiles. Latencies accumulate, rate limits kick in, and retries pile up until the agent gets stuck in a “fetch → wait → retry” loop and can no longer reach a stable result.

    In addition, too much data is often transferred when the client cannot precisely specify the required resolution, the exact area of investigation, or the appropriate attributes. Instead of aggregated statistics, unnecessarily large amounts of raw data are then loaded, making network and storage costs the dominant factor.

    The typical data gaps in Earth observation and disaster data complicate matters further: clouds, satellite orbits, or missing timestamps create “data holes.” If the workflow does not recognize these gaps, it queries empty areas, misinterprets missing data as “no event,” or retries with different parameters. This leads to additional I/O, poorer result quality, and less usable signal overall.

    In the pilot, we therefore specifically discussed and experimented with optimized encodings (e.g., compact cell representations, efficient backends, Parquet paths for sparse data). A minimal sketch of gap-aware, rate-limited fetching appears after this list.

    Takeaway: AI workflows do not die because of the model, but because of I/O. Standardized responses must be sparsity-compatible and bandwidth-conscious. In other words, the bottleneck is often not model inference, but a workflow that becomes too slow, too expensive, or too unstable because it moves too much or too large data over the network/storage – and constantly runs into missing or inconsistent coverage (sparsity, “data holes”).
  3. “Stacking paradox” / topology friction: sub-zones and overlaps at certain apertures
    In DGGS/DGGRS systems, it is often implicitly assumed that a zone at level L is completely and disjointly partitioned by its subzones at level L+1 (e.g., “aperture-7 ⇒ 7 children”). In practice, however, this assumption can break down: depending on the grid design, parameterization, and geometric edge cases, “subzones” may be defined as a topological cover (cells that intersect/overlap the parent zone) rather than as an exact, non-overlapping decomposition. This leads to situations where significantly more candidates than expected are returned, and these candidates overlap spatially.
    Implementer recommendation: clients should not derive subzones via fixed cardinalities or pure ID arithmetic, but via clearly defined operations and semantics (see the aggregation sketch after this list):

    • Explicitly distinguish between partition (disjoint, exact) and cover (possible overlaps).
    • Perform aggregations in such a way that overlaps do not lead to double counting (e.g., via defined weighting/intersection rules or server-side aggregation endpoints).
    • Use or supplement conformance tests that reveal precisely these edge cases (overlaps, boundary conditions).

    Takeaway: True interoperability requires not only IDs, but also topology and alignment rules that are implementable and testable.

  4. Registry/metadata friction: Same label, different parameters
    A recurring pattern: labels may appear similar across libraries, but differ in details (parameters, date assumptions, ID variants/indexing schemes). In addition, zone IDs are sometimes different even though they refer to the “same cells.”Takeaway: We need an authoritative OGC DGGS/DGGRS registry that clearly describes parameters, clearly separates variants, and provides cross-references. 

AI readiness: What OGC standards must now deliver

The pilot has shown that “AI-ready” does not mean “chat interface,” but rather an interface and ecosystem that reliably enable agentic use—with clear semantics, machine-readable constraints, and reproducible results.

AI readiness requires:

  1. Tool-ability: Endpoints must be described in a way that allows agents to use them robustly.
  2. Machine-readable metadata: queryables, limits, cost indicators, uncertainties, resolution/granularity (see the sketch after this list).
  3. Guardrails: Protection against overfetching, incorrect resolution selection, and uncontrolled costs.
  4. Reproducibility: Queries and results must be reproducible – especially for situation assessments.
  5. Trust & security: Context, identity, provenance, access policies.
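As a purely illustrative picture of requirements 1–3 – every field name below is an assumption made for this sketch, not part of any OGC schema – a DGGS endpoint described as an agent “tool” might look like this:

```python
# Hypothetical machine-readable "tool" description for one DGGS endpoint.
# All field names are illustrative assumptions, not drawn from an OGC standard.
flood_depth_tool = {
    "name": "get_flood_depth_by_zone",
    "endpoint": "https://example.org/dggs/collections/flood/dggs/zones/{zoneId}/data",
    "queryables": ["depth_m", "observed_at"],    # what an agent may filter on
    "resolution": {"min": 3, "max": 11},         # guards the granularity choice
    "limits": {
        "max_zones_per_call": 256,               # guards against overfetching
        "rate_per_minute": 120,                  # lets the agent pace itself
    },
    "cost_hint_bytes_per_zone": 40,              # lets the agent budget I/O
    "uncertainty": "depth_m accurate to roughly +/-0.15 m",
    "provenance": "derived from SAR imagery, model run 2026-02-14T06:00Z",
}
```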

The discussion on the further development of OWS Context was particularly relevant here: OWS Context can serve as a basis for transferring a common operating picture between organizations, but it must be updated to meet today’s requirements (services/workflows, dynamic events, security/classification, AI-RAG/agent pipelines).

Benchmarks & implementability: Why “DGGRS choice” is not neutral

During implementation of the various DGGRS, I highlighted an important practical point: implementations do not behave identically in terms of performance and efficiency. Benchmarks and optimizations (including format/compression) were discussed in the pilot; among other things, it was noted that individual systems can be significantly slower or faster in certain operations.

Takeaway: Interoperability does not mean that everything is equally fast – but standards should make capabilities, expected costs, and suitable options transparent.

Roadmap: Six concrete steps we derive from the pilot

Very specific standardization and community tasks can be derived from the pilot:

  1. OGC DGGS/DGGRS Registry: parameterization, datum references, indexing schemes, cross-references.
  2. H3 Best Practice: clear guidance on model/data interpretation and alignment expectations.
  3. Better query mechanics: clearer queryables, more robust patterns for bounding/selection/aggregation; optional “on-ramp” for non-DGGS clients (e.g., a geometry-first request that resolves to DGGS on the server side; see the sketch after this list).
  4. Temporal gridding as a first-class topic: DGGS is not just “space”; disaster is always space+time.
  5. Operationalize COP + Trust/Provenance (IPT): Further develop OWS Context into a machine-readable situation picture container including security/policy/access/provenance.
  6. Analytical Extensions: clear catalog of which analytics should be available in a standardized “cell-wise” manner (aggregation, zonal stats, indices, etc.).
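For roadmap item 3, the “geometry-first” on-ramp could look like the sketch below on the server side – assuming the h3-py v4 API, whose geo_to_cells() fills a GeoJSON-like polygon with cells at a given resolution:

```python
import h3  # assumption: h3-py v4 and its geo_to_cells() helper

def resolve_geometry(geojson_polygon: dict, res: int) -> list[str]:
    """Server-side on-ramp: a non-DGGS client submits plain GeoJSON, and the
    server resolves it to DGGS zone IDs, so the client never does cell math."""
    return h3.geo_to_cells(geojson_polygon, res)

# Example request body a client might send (lng/lat order, per GeoJSON).
aoi = {
    "type": "Polygon",
    "coordinates": [[[6.90, 50.92], [7.02, 50.92], [7.02, 50.98],
                     [6.90, 50.98], [6.90, 50.92]]],
}
zone_ids = resolve_geometry(aoi, res=8)  # the response speaks in zones, not pixels
```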

Invitation: How you can contribute as an implementer or member

We want to bring the pilot results to the community and turn them into prioritized, implementable building blocks.

If you are an OGC member:

  • Get involved in the Agora discussion on registry/best practice/queryables.
  • Share real-world “frictions” from your implementation (including screenshots/benchmarks, if possible).

If you are an implementer:

  • Check your parameterization against other libraries/servers.
  • Provide feedback on: “What metadata does an agent really need?”

If you help shape standards:

  • Help define conformance tests that reveal precisely these frictions.

Conclusion: “Real Friction, Real Fix”

The pilot has shown that we are close to bringing “AI for geospatial data” from the demo level to reliable, interoperable practice—but only if we standardize the real frictions: registry, alignment, sparsity-compatible encodings, machine-readable metadata, trust/context.

This is the real opportunity: standards make AI accountable.

If you want to work on the registry/best practices, COP/OWS Context evolution, or AI-ready metadata: get in touch.

Appendix A

DGGS vs DGGRS

A DGGRS (Discrete Global Grid Reference System) is a complete, operational spatial reference system combining three components:

  1. DGGH (Discrete Global Grid Hierarchy): The hierarchical tessellation of Earth’s surface into zones at successive refinement levels
  2. ZIRS (Zone Identifier Reference System): A scheme for uniquely naming and addressing each zone
  3. Deterministic sub-zone ordering: A standardized sequence for organizing child zones within parent zones, enabling optimized data encoding

In essence, a DGGRS is a ready-to-use system for referencing and organizing geospatial data on a global grid, whereas a DGGS is the broader integrated software framework that may implement one or more DGGRS alongside quantization functions, query capabilities, and interoperability tools.
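To illustrate why the third component matters: with a deterministic sub-zone ordering, a server can ship values as a bare positional array, with no zone IDs on the wire. A minimal sketch, assuming h3-py v4 and that cell_to_children() enumerates children in a stable order:

```python
import h3  # assumption: h3-py v4; cell_to_children() ordering assumed stable

NODATA = float("nan")  # explicit marker for data holes

def pack(parent: str, values: dict[str, float], res: int) -> list[float]:
    """Encode child-cell values positionally: because both sides agree on the
    sub-zone ordering, the cell IDs never need to be transmitted."""
    return [values.get(c, NODATA) for c in h3.cell_to_children(parent, res)]

def unpack(parent: str, packed: list[float], res: int) -> dict[str, float]:
    """Reverse the encoding using the same deterministic ordering."""
    return dict(zip(h3.cell_to_children(parent, res), packed))
```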

 

The World Runs on Location: Ed Parsons on Scale, Standards, and Seeing from a Distance

Ed Parsons has spent his career working where technology, geography, and real-world decision-making intersect, often behind the scenes of systems that operate on a global scale. Best known for his work across digital mapping platforms and for his long engagement with open geospatial standards, he brings a rare mix of platform experience and standards governance to his role as Chair of the Open Geospatial Consortium (OGC) Board of Directors. At a moment when geospatial data is increasingly entwined with AI, real-time systems, and climate risk, Parsons reflects on what it takes to build technology that works across cultures, institutions, and everyday life.

Most people use digital maps and location-based services every day without giving a second thought to the systems running behind them. Through your work, you have helped shape how billions of people navigate the world. When did you first recognize the significance of influencing something so essential, yet largely invisible?

I am not sure I agree with the idea of responsibility in that sense. Throughout my career, I have mostly been doing things I enjoyed or found interesting. I was always a keen geographer from school onwards, and I often say, somewhat tongue in cheek, that geography is the one true science because it tries to explain the world around us, both physically and socially.

I was fortunate to study geography at a time when computers were coming to the fore and digital geography, including GIS and remote sensing, was beginning to take shape. Like many people of my generation in this industry, the first half of our careers was quite frustrating. We had powerful tools, but we lacked data. Capturing data on a global scale was complex, expensive, and difficult.

That changed with the arrival of Google Maps and Google Earth around 2007. Suddenly, data became widely available on a single platform, and it became possible to roll this technology out to the mainstream. It really felt like a “before Google” and “after Google” moment for the industry.

Today, we all use this technology day-to-day on our phones and through the services we rely on. That’s wonderful, and it’s delivering on the promise that was always there. We may not always be recognized as an industry, but I am comfortable with that. We play a small but important role in many different activities. I don’t feel a burden of responsibility so much as pride that we finally got to where we wanted to be.

Working at a global scale brings constant trade-offs: speed versus accuracy, innovation versus stability, openness versus control. How does operating at that scale shape the way you make decisions as a technologist and leader?

That’s where experience really matters. You often begin with an optimistic view that you will be able to roll something out globally and that it will work everywhere in the same way. A good example is Street View. When Street View was first launched in North America, it was very well received. We thought we’d worked out how to capture the data economically and that we could roll it out everywhere. But when we introduced it in Europe, particularly in Germany, we ran into serious issues around privacy, reflecting the fact that expectations of privacy differ across cultures.

That was a real learning moment. Despite having an obvious technical solution, once it hits people, you have to adapt. Technology that impacts people isn’t really a technology problem. It is a sociology problem. Users always have a vote, and you have to accept that and change accordingly.

Standards often fade into the background when everything works smoothly. Can you recall a moment in your career when the value of interoperability, or the cost of its absence, became impossible to ignore?

I have always recognized the importance of interoperability, but yes, one early experience stands out. At the time, I was working at Autodesk on one of the early web-mapping tools, Autodesk MapGuide. From a technical perspective, it was a very good product, allowing interactive vector mapping in early web browsers through a plug-in.

However, we realized that the real challenge was adoption, especially by governments. They were concerned about the reliance on browser plug-ins and the additional complexity this introduced. Around the same time, the Open GIS Consortium was developing the Web Map Service standard. From a purely technical standpoint, it wasn’t the best solution. It was slower and less capable, but it offered interoperability.

We decided to adopt it, and that decision significantly expanded our market. It taught me a fundamental lesson: interoperability often involves compromise. You may give up technical elegance, but in return you reach more people and achieve broader adoption. That commercial reality is a powerful driver for interoperability.

You describe yourself as a “lapsed aviator,” having learned to fly before English weather and the price of Avgas intervened. You now spend your time photographing aircraft. Does that way of observing from a distance influence how you think about maps, geography, or geospatial systems?

That’s a very perceptive question. Becoming a pilot was my midlife crisis. Many people buy a motorcycle, but I learned to fly. I have always loved aviation. In fact, one of my earliest memories is seeing the prototype Concorde fly over our house in South London.

[Photo: Ed Parsons]

However, in the UK, flying privately is difficult. The weather is unreliable, fuel is expensive, and unless you fly under instruments, which isn’t much fun, it’s hard to do regularly. Eventually, I decided that instead of spending money on flying, I would buy cameras and photograph aircraft.

There is an artistic element to photography, especially now with digital processing. But there is also something deeper: seeing things from a distance and capturing a specific moment in time and space. I store my photos geographically, not chronologically. That connection between place, time, and memory really matters to me. Photography will not disappear, even with generative AI, because it is about remembering being there, that exact moment, in that exact place.

[Photo: Ed’s favorite picture, taken at an Airshow in Belgium during a thunderstorm]

 

You have been part of the OGC community for many years. What made this feel like the right moment to step into the role of Chair, and what felt personally important about taking it on?

Well, in many ways, it felt natural. OGC has been a hugely important part of my career and my life, and I see this role as a way of giving something back.

The Board’s role is to ensure the organization is healthy, financially sustainable, and relevant. The real work is done by the community, which includes the Technical Committee, Planning Committee, and volunteers who contribute their time and expertise. My role is to make sure that this ecosystem continues to thrive.

OGC brings people together in a way that is quite special. People leave their employers at the door and work toward what is best for the industry. Compromise is central to that process, and while it takes time, it is also what makes standards meaningful.

OGC brings together governments, industry, researchers, and developers from very different contexts. From your experience, what makes collaboration across those differences genuinely work, and where does it most often struggle?

The biggest struggle lies in developing a shared understanding of the problem. Governments, software vendors, and academics often see the same issue very differently. OGC’s domain working groups exist to explore those problem spaces by helping participants understand the scope of an issue, agree on terminology, and assess feasibility before solutions are proposed.

Sometimes things fail. We may misunderstand the problem, or a solution might not work in practice. That’s not a weakness. It is part of learning. We could probably do a better job documenting those failures so future efforts can learn from them.

This kind of iterative process where problems are understood, solutions are tested, and approaches are revised, is fundamental. And it only works if the community is broad and inclusive, with perspectives from different regions and contexts.

AI is rapidly reshaping many fields, including geospatial technology. Where do you see the most significant challenges and opportunities emerging as AI becomes more deeply embedded in geospatial systems?

AI presents enormous opportunities, but also challenges. In geospatial, we have used machine learning for decades, particularly in Earth observation. Recently, we have seen global building datasets created using satellite imagery. That’s an incredible achievement.

Where it becomes harder is inference and insight. Geospatial data is not yet well structured for AI training in the way text data is. We need better semantic richness and better data models. Geography also matters since models trained in one region don’t necessarily work elsewhere.

We must not forget geography when applying AI. Things closer together are more similar than things farther apart, and that principle is not well represented in many AI models today.

Looking ahead, when you reflect on your time as Chair, what would meaningful success look like, not just for OGC, but for how geospatial information shows up in everyday life?

From an OGC perspective, success means growing the community and bringing in people beyond the technical domain, including those focused on ethics and data policy.

More broadly, success means geospatial technology continuing to embed itself into solutions that improve everyday life. A good example is ride-sharing, an industry that was fundamentally changed by real-time location. Similar impacts are possible in healthcare, forestry, agriculture, and public safety.

Geospatial technology does not need to be the headline. It’s often most successful as a contributing component that makes systems work better. Our role is to make it simple, accessible, and easy to integrate.

Is there a particular global challenge where you feel geospatial technology can make the most tangible difference for individuals?

For me, that would be public safety, as it is very close to my heart. I was involved in developing standards that allow a mobile phone’s precise location to be shared automatically with emergency services when you make an emergency call, and that has a direct impact on individuals.

I am also involved with a startup working on safer pedestrian routing at night, prioritizing well-lit routes and avoiding known risk areas. These are examples where geospatial technology operates behind the scenes but has a very real, personal impact.

That has always been the most appealing aspect for me: technology that quietly helps people when it matters most.

Testbed Europe: Shaping the Future of Geospatial Innovation Together

Overview

Testbed Europe is an emerging Open Geospatial Consortium (OGC) initiative exploring how Europe’s geospatial ecosystem can evolve to meet rapidly changing technical, policy, and operational demands. Building on OGC’s proven testbed model, it provides a neutral, standards-based environment where public authorities, industry, and stakeholders can jointly explore new approaches before they become operational commitments. The initiative is being shaped collaboratively with National Mapping Agencies (NMAs), European institutions, industry, and security stakeholders, including NATO.

The Challenge

Europe’s authoritative geospatial infrastructures face mounting pressures: requirements for timeliness and cross-border consistency are increasing; defense and dual-use needs are intersecting more strongly with civil systems; cloud computing, APIs, and AI are transforming data production and access; and NMAs face capacity constraints while maintaining quality, governance, and public trust. Testbed Europe addresses these challenges pragmatically and collectively, offering a safe environment to test new technologies while managing risk and maintaining sovereignty.

What Makes Testbed Europe Different?

Testbed Europe is not a procurement program or policy instrument. It is exploratory, focused on learning through experimentation rather than prescribing outcomes. It is policy-aware, aligned with EU frameworks including INSPIRE, the Data Governance Act, and the EU AI Act. It is neutral, convened by OGC using open standards and transparent processes that reduce vendor lock-in and keep NMAs in control. It is incremental, testing ideas at a manageable scale, and it respects sovereignty, strengthening national authorities rather than bypassing them.

Eight Thematic Focus Areas

Current discussions span eight interconnected areas, which may be addressed individually or in combination:

  1. Automated Pan-European Map Production: Exploring how increased automation and modern architectures can support faster, more consistent cross-border map production while preserving authoritative quality.
  2. Hybrid Cloud Architectures: Examining patterns that combine on-premise control with cloud scalability, in line with sovereignty, security, and regulatory requirements.
  3. Evolving Defense Requirements and Dual-Use Integration: Addressing the growing overlap between civil and defense use cases, including multi-sensor integration and time-sensitive data flows.
  4. Transition from Legacy Web Services to Modern APIs: Supporting an evolutionary shift toward API-centric access while maintaining compatibility with existing OGC services.
  5. Sustainable Access Models for Authoritative Data: Exploring approaches inspired by the Wikimedia Enterprise model to balance open access with high-performance industrial use and long-term sustainability.
  6. Advanced Interoperability and Semantic Consistency: Applying OGC’s interoperability methodology to address semantic consistency across domains and systems.
  7. Artificial Intelligence and AI-Ready Data: Focusing on data quality, documentation, provenance, and governance as prerequisites for responsible and reproducible AI use.
  8. Human Resources and Capacity Constraints: Leveraging the broader OGC ecosystem to complement NMA capabilities without requiring immediate organizational change.

Alignment with EU Policy

Testbed Europe aligns closely with EU goals for a secure, interoperable Single Market for data and digital services. It supports the European Strategy for Data, the Data Governance Act, the Open Data Directive, and INSPIRE requirements for cross-border spatial data infrastructure. The initiative’s focus on hybrid cloud adoption, defense synergies, API modernization, and AI governance directly supports EU priorities for digital sovereignty, resilient infrastructure, and responsible innovation.

Current Status and Next Steps

Testbed Europe was discussed intensively at iDays in Bad Nauheim (December 2025) with representatives from NMAs across Germany, France, Spain, the Netherlands, Norway, Iceland, Finland, and other countries, plus NATO headquarters and European institutions. These discussions confirmed strong interest in Testbed Europe as a safe space to explore cloud architectures, APIs, AI, and dual-use requirements.

The initiative now moves forward with two key milestones: a February 2026 follow-up meeting (coordinated with EuroGeographics) to deepen discussions and scope pilot activities, and the OGC Member Meeting in Helsinki (week of June 1st, 2026) to finalize priorities and establish participation models.

Who Should Engage?

Testbed Europe is relevant to:

  • National Mapping and Cadastre Authorities
  • European institutions and agencies
  • Defense and security stakeholders with civil–military interfaces
  • Industry and technology providers
  • Research organizations and standards experts

Participation offers an opportunity to shape the conversation early without implying endorsement of specific outcomes.

Join the Conversation

Input and engagement from across the geospatial community are essential to ensuring Testbed Europe remains relevant, balanced, and practically useful. To contribute or learn more, please contact Muthu Kumar at mk****@*gc.org.

Tonya Wilkerson Joins the OGC Board of Directors

The Open Geospatial Consortium (OGC) announces that Tonya P. Wilkerson has joined the OGC Board of Directors.

 “Tonya Wilkerson brings to the OGC Board of Directors deep experience from across the U.S. Intelligence Community, including senior leadership at the National Geospatial-Intelligence Agency and within defense and national security organizations,” said Peter Rabley, Chief Executive Officer of OGC. “She is highly respected for her work in GEOINT and satellite operations, and her experience adds an important perspective to the Board’s governance.” 

Wilkerson has more than three decades of distinguished service across the U.S. Intelligence Community. Most recently, she served as the Deputy Director of the National Geospatial-Intelligence Agency (NGA) from 2021 to 2024, where she led global geospatial intelligence (GEOINT) efforts to support U.S. national security. In 2024, she was the Presidential nominee for Under Secretary of Defense for Intelligence and Security. 

Wilkerson’s executive leadership experience includes serving as Associate Deputy Director for Science and Technology at the Central Intelligence Agency (CIA), where she oversaw directorate strategy and talent management, and leading the Mission Operations Directorate at the National Reconnaissance Office (NRO), managing the satellite operations enterprise. 

An electrical engineer by training, Wilkerson began her career as a project management engineer at the NRO, focusing on maturing advanced technologies for system integration. She is a dedicated mentor and has professional expertise in satellite operations, research and development, and technology integration. 

Wilkerson holds a Bachelor of Science in Electrical Engineering from Virginia Tech and a Master’s degree in Engineering Management from George Washington University. 

Announcement Regarding OGC Japan Forum

The Open Geospatial Consortium (OGC) wishes to clarify the operational structure and status of the OGC Japan Forum.

The OGC is a U.S.-incorporated non-profit organization dedicated to advancing geospatial standards, with over 350 members worldwide, including more than ten in Japan.

OGC members participate in the Technical Committee, which oversees all OGC publications, and may join Working Groups and Regional Forums. There are currently 13 Regional Forums, including the OGC Japan Forum.

The OGC Japan Forum, established in December 2014 in Tokyo, is a subgroup of the Technical Committee. It serves as a local platform for members to collaborate and discuss relevant topics. The Forum has no legal status, cannot enter into contracts, and does not conduct business activities. Its activities are limited to organizing meetings and informal cooperation with other organizations.

The Forum includes members from Japan and other countries, is managed by OGC staff, and is currently electing chairs to guide its agenda. Only the OGC, as a legal entity, may represent the Forum.

Should you need further information, please email in**@*gc.org.

 

OGC Code Sprint: GEOINT Imagery Media for Intelligence, Surveillance, and Reconnaissance (GIMI) standard

This OGC Code Sprint is a collaborative and inclusive event designed to advance the development and implementation of OGC standards, with a primary focus on the GEOINT Imagery Media for Intelligence, Surveillance, and Reconnaissance (GIMI) standard. In addition to GIMI, participants will also work on OGC APIs and selected Integrity, Provenance, and Trust (IPT) topics. The Sprint is supported by the OGC Testbed-21 initiative, as well as by the Open Source Geospatial Foundation (OSGeo) and the Apache Software Foundation (ASF), and will therefore include software implementations from those and other organisations. We encourage participants to bring their implementations to the code sprint.

Several geospatial standards will be featured in this code sprint. Everyone is welcome to participate and work on their preferred standard during the code sprint.

The Sprint is open to participants from across the geospatial ecosystem.

Registration is now live at https://events.ogc.org/OGC-Code-Sprint-GIMI-Focus

The in-person portion of the Code Sprint will be hosted by the USGS at the Fort Collins Science Center in Fort Collins, Colorado, USA (2150 Centre Avenue, Building C, Fort Collins, CO 80526). A virtual participation option will also be available via the OGC-Events Discord server: https://discord.gg/3uyaZZuXr3

OGC Code Sprints provide a practical environment to explore emerging ideas, improve interoperability across existing standards, and test new extensions or profiles in real-world contexts. Participation is not limited to coding; testing, documentation, design discussions, and issue reporting are all welcome. A mentor stream will be available to support participants who are new to OGC standards.

About OGC

The Open Geospatial Consortium (OGC) is a collective problem-solving community of experts from more than 360 businesses, government agencies, research organizations, and universities driven to make geospatial (location) information and services FAIR – Findable, Accessible, Interoperable, and Reusable. The global OGC Community engages in a mix of activities related to location-based technologies: developing consensus-based open standards and best-practice; collaborating on agile innovation initiatives; engaging in community meetings, events, and workshops; and more. Learn more at https://www.ogc.org/.
