Niantic Large Geospatial Model

freemium

Discover Niantic's Large Geospatial Model (LGM), a foundation model trained on billions of geolocated images and built on 50M+ local neural networks with a combined 150+ trillion parameters, powering spatial intelligence for AR, robotics, and autonomous systems.

About

Niantic's Large Geospatial Model (LGM) represents a new frontier in artificial intelligence: spatial intelligence. Where Large Language Models (LLMs) are trained on internet-scale text to understand language, the LGM is trained on billions of geolocated images of the real world to understand physical space at a global scale. Analogous to how humans intuitively fill in the unseen angles of a familiar place, the LGM enables machines to infer, navigate, and reason about 3D environments by connecting a local scene to millions of similar scenes worldwide.

The LGM builds on Niantic's Visual Positioning System (VPS), which already powers AR experiences for games like Pokémon GO and Ingress. As part of the VPS, Niantic has trained over 50 million local neural networks, with a combined 150+ trillion parameters, covering more than one million real-world locations. Unlike typical 3D generative models that produce arbitrary, unscaled assets, the LGM is bound to metric space, ensuring scale-precise spatial representations that function as next-generation maps.

The LGM is intended to be a foundational infrastructure layer for a wide range of downstream applications: augmented reality glasses that seamlessly overlay digital content onto the physical world, robotics systems that navigate real environments with precision, autonomous vehicles that understand their surroundings, and spatial content creation tools grounded in actual geography. As wearable AR technology matures, Niantic positions the LGM as the world's future spatial operating system, making it highly relevant for developers, enterprises, and researchers working at the intersection of AI and the physical world.
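The cross-scene idea described above can be illustrated with a toy sketch: embed a partially observed local scene and retrieve the most similar known scenes by cosine similarity, whose geometry could then inform the unseen angles. This is an illustrative assumption, not Niantic's published architecture; the scene names and hand-written embedding values below are hypothetical.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical scene embeddings: in a real system these would come from a
# learned visual encoder over geolocated scans, not hand-written numbers.
scene_index = {
    "trevi_fountain_front": [0.9, 0.1, 0.3],
    "trevi_fountain_side":  [0.8, 0.2, 0.4],
    "random_parking_lot":   [0.1, 0.9, 0.2],
}

def most_similar_scenes(query, index, k=2):
    """Rank known scenes by embedding similarity to a query scene."""
    ranked = sorted(index.items(),
                    key=lambda kv: cosine(query, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

# A partial scan of the fountain retrieves the other fountain scans,
# not the unrelated parking lot.
print(most_similar_scenes([0.85, 0.15, 0.35], scene_index))
```

The point of the sketch is only the shape of the operation: local observation in, globally similar scenes out, with shared structure then usable for inference.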

Key Features

  • Global Scene Understanding: Connects individual local scenes to millions of other real-world locations globally, enabling machines to infer spatial details from unseen angles using shared geographic knowledge.
  • Massive-Scale Neural Architecture: Trained with over 50 million neural networks and a combined 150+ trillion parameters across one million-plus locations, making it one of the largest spatially grounded AI systems ever built.
  • Metric-Precise 3D Spatial Modeling: Unlike conventional 3D generative models, the LGM operates in real-world metric space, producing scale-precise spatial representations that function as next-generation geographic maps.
  • Visual Positioning System (VPS) Integration: Built on Niantic's production-grade VPS, which already powers AR experiences at real-world landmarks, providing a proven foundation for location-based spatial AI.
  • Cross-Scene Spatial Inference: Infers geometry, appearance, and structure for partially or never-scanned locations by leveraging learned patterns from millions of similar scanned scenes worldwide.
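One concrete sense in which a map can be "bound to metric space" is that geodetic coordinates (latitude, longitude, altitude) convert deterministically into local Cartesian coordinates measured in metres. The sketch below shows the standard WGS84 geodetic to ECEF to local East-North-Up conversion; it is generic geodesy used here to illustrate metric grounding, not Niantic code.

```python
import math

# WGS84 ellipsoid constants
A = 6378137.0              # semi-major axis (m)
F = 1 / 298.257223563      # flattening
E2 = F * (2 - F)           # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Convert geodetic coordinates to Earth-Centred Earth-Fixed metres."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)  # prime vertical radius
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + h) * math.sin(lat)
    return x, y, z

def ecef_to_enu(x, y, z, lat0_deg, lon0_deg, h0):
    """Express an ECEF point in East-North-Up metres about a reference point."""
    lat0, lon0 = math.radians(lat0_deg), math.radians(lon0_deg)
    x0, y0, z0 = geodetic_to_ecef(lat0_deg, lon0_deg, h0)
    dx, dy, dz = x - x0, y - y0, z - z0
    east = -math.sin(lon0) * dx + math.cos(lon0) * dy
    north = (-math.sin(lat0) * math.cos(lon0) * dx
             - math.sin(lat0) * math.sin(lon0) * dy
             + math.cos(lat0) * dz)
    up = (math.cos(lat0) * math.cos(lon0) * dx
          + math.cos(lat0) * math.sin(lon0) * dy
          + math.sin(lat0) * dz)
    return east, north, up

# A point 0.001 degrees north of an equatorial reference sits roughly
# 110.6 m north of it in local metric coordinates.
e, n, u = ecef_to_enu(*geodetic_to_ecef(0.001, 0.0, 0.0), 0.0, 0.0, 0.0)
```

Scale-precise representations matter because distances and sizes computed in such a frame are directly usable by AR renderers and robot planners, unlike the arbitrary units of a generic 3D asset.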

Use Cases

  • Building AR glasses applications that seamlessly recognize and interact with real-world landmarks and environments.
  • Enabling robotics systems to navigate complex physical environments using metric-precise spatial understanding.
  • Powering autonomous vehicle perception by connecting local scene understanding to a global spatial knowledge base.
  • Creating location-grounded 3D content for immersive AR experiences tied to specific real-world geographic coordinates.
  • Developing research tools for geospatial AI, urban modeling, and large-scale 3D scene reconstruction and inference.

Pros

  • Unprecedented Spatial Scale: With a combined 150+ trillion parameters across 50M+ neural networks, the LGM is one of the most ambitious spatially grounded AI systems, enabling global-scale location reasoning.
  • Real-World Metric Grounding: Operates in precise real-world metric units rather than arbitrary unscaled 3D space, making outputs directly usable for AR, robotics, and autonomous navigation applications.
  • Battle-Tested VPS Foundation: Builds on Niantic's production VPS that has already powered millions of AR interactions globally, giving the LGM a proven, real-world validated data and infrastructure backbone.

Cons

  • Early Research Stage: The LGM is currently a research initiative and concept announcement, with limited availability as a developer-accessible standalone API or commercial product.
  • Privacy Considerations: Training relies on optional player-contributed scans of public real-world locations, which may raise questions around data consent and geospatial privacy for some users.
  • Narrow Domain Applicability: Primarily relevant to AR, robotics, autonomous systems, and geospatial fields — organizations outside these verticals may find limited immediate use cases.

Frequently Asked Questions

What is a Large Geospatial Model (LGM)?

A Large Geospatial Model is an AI system trained on billions of geolocated real-world images to understand physical spaces and how they relate to one another globally — analogous to how Large Language Models are trained on text to understand language.

How does the LGM differ from traditional 3D vision models?

Unlike typical 3D vision models that produce unscaled, arbitrary 3D assets, the LGM is bound to metric geographic space. It understands how a scene relates to millions of other real-world scenes and produces scale-precise spatial representations that act as next-generation maps.

What data is used to train the LGM?

The LGM is trained on billions of images anchored to precise real-world locations, including optional player-contributed scans of publicly accessible locations collected through Niantic's games and apps. Merely playing Niantic games does not contribute to training.

What applications can the LGM power?

The LGM is designed to be a foundational layer for AR glasses, robotics navigation, autonomous vehicles, location-based content creation, and any system that needs to perceive, understand, or interact with the physical world at scale.

Is the Large Geospatial Model available for developers today?

The LGM is currently in a research and development phase. Developers can access Niantic's existing Lightship VPS platform, which underpins the LGM, to build location-aware AR experiences while the broader LGM capabilities continue to mature.
