Google dropped S2Vec: a geospatial embedding model that predicts neighborhood income levels purely from map features (coffee shops, transit stops, building density), with zero manual labeling.

The breakthrough: it learned urban spatial grammar without labels. Feed it OpenStreetMap data plus satellite imagery, and it automatically extracts socioeconomic patterns by analyzing Point-of-Interest (POI) distributions and building morphology.

Technical approach:

- Uses S2 geometry library for hierarchical spatial indexing (cells at multiple zoom levels)

- Self-supervised contrastive learning on spatial relationships between map features

- Encodes both POI semantics (what's there) and spatial topology (how things are arranged)
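The key property of hierarchical spatial indexing is that a coarse cell's ID is a prefix of every finer cell nested inside it, so one embedding table can serve multiple zoom levels. Here's a minimal pure-Python sketch using web-mercator quadkeys as a stand-in; the real S2 library instead projects points onto cube faces and orders cells along a Hilbert curve, but the prefix/nesting idea is the same:

```python
import math

def latlng_to_quadkey(lat: float, lng: float, level: int) -> str:
    """Hierarchical cell ID for a point (quadkey-style toy stand-in
    for S2 cell IDs: coarse IDs are prefixes of finer ones)."""
    lat = max(min(lat, 85.05112878), -85.05112878)  # mercator clamp
    x = (lng + 180.0) / 360.0
    s = math.sin(math.radians(lat))
    y = 0.5 - math.log((1 + s) / (1 - s)) / (4 * math.pi)
    n = 1 << level
    tx, ty = min(n - 1, int(x * n)), min(n - 1, int(y * n))
    digits = []
    for i in range(level, 0, -1):  # emit one base-4 digit per level
        mask = 1 << (i - 1)
        digits.append(str((1 if tx & mask else 0) + (2 if ty & mask else 0)))
    return "".join(digits)

# Coarser cells are prefixes of finer ones -> cheap multi-resolution lookups
coarse = latlng_to_quadkey(40.7128, -74.0060, 8)
fine = latlng_to_quadkey(40.7128, -74.0060, 12)
assert fine.startswith(coarse)
```

In a model like this, that nesting is what lets features learned on fine cells roll up into coarser neighborhood-level representations.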
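Encoding "what's there" plus "how things are arranged" can be sketched as a per-cell feature vector: POI category counts inside the cell, concatenated with counts aggregated from adjacent cells. The category taxonomy and helper below are hypothetical, not from the paper:

```python
from collections import Counter

# Toy POI taxonomy (assumption; real OSM tags are far richer)
CATEGORIES = ["cafe", "transit_stop", "school", "supermarket"]

def cell_features(pois: list[str], neighbor_pois: list[list[str]]) -> list[int]:
    """POI semantics = counts in this cell; spatial topology (crudely)
    = counts aggregated over adjacent cells."""
    own = Counter(pois)
    ctx = Counter(p for neighbor in neighbor_pois for p in neighbor)
    return [own[c] for c in CATEGORIES] + [ctx[c] for c in CATEGORIES]

features = cell_features(["cafe", "cafe", "school"],
                         [["transit_stop"], ["cafe"]])
# First half: this cell's counts; second half: neighborhood context
assert features == [2, 0, 1, 0, 1, 1, 0, 0]
```

Raw count vectors like this would be the input the embedding model compresses; the actual model presumably uses a much richer featurization.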
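The contrastive idea can be illustrated with a toy InfoNCE loss: pull a cell's embedding toward a spatially adjacent cell (the positive) and away from distant cells (negatives). This is a generic sketch of contrastive learning over spatial neighbors, not Google's actual training objective or code:

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE: cross-entropy of picking the positive out of a lineup
    of negatives, under cosine similarity."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return float(-np.log(probs[0]))

rng = np.random.default_rng(0)
cell = rng.normal(size=16)
neighbor = cell + 0.05 * rng.normal(size=16)   # adjacent cell: similar features
far = [rng.normal(size=16) for _ in range(8)]  # distant, unrelated cells
easy = info_nce(cell, neighbor, far)
hard = info_nce(cell, far[0], far[1:] + [neighbor])
assert easy < hard  # loss is low when the positive really is the neighbor
```

Minimizing this across many (cell, neighbor) pairs is what forces embeddings of nearby, functionally similar areas to cluster, without any income labels.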

Why it matters for devs:

1. Zero-shot transfer to any city with OSM coverage: no region-specific training needed

2. Opens door for privacy-preserving demographic inference (aggregate patterns, not individual tracking)

3. Potential for urban planning APIs, real estate valuation models, and logistics optimization

The model essentially reverse-engineered how urban infrastructure correlates with economic activity: classic unsupervised feature learning, applied to geographic space instead of image pixels.