Research

Geographic public data,
cross-referenced and ready.

Disasters, demographics, economics, climate, and risk normalized to the same geographic schema. Source-documented, versioned, and exportable. From query to Parquet without building the pipeline first.

One geographic key across all packs

Earthquakes, UN development indicators, FEMA risk scores, economic data. All normalized to the same loc_id schema. Ask across domains without writing custom join logic.
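In practice, a shared key means a cross-domain question reduces to a single merge. A minimal sketch, assuming hypothetical pack extracts where each row already carries a loc_id column (the pack names, keys, and figures below are illustrative, not the real schema):

```python
import pandas as pd

# Two hypothetical pack extracts, already keyed by the same loc_id
quakes = pd.DataFrame({
    "loc_id": ["JP", "CL", "US"],
    "quake_count": [120, 85, 40],
})
econ = pd.DataFrame({
    "loc_id": ["JP", "CL", "US"],
    "gdp_per_capita": [33800, 15300, 76300],
})

# One merge on the shared key replaces any custom join logic
combined = quakes.merge(econ, on="loc_id", how="inner")
```

The same pattern extends to any pair of packs: as long as both sides carry loc_id, no reconciliation of geographic identifiers is needed.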

Source attribution in every pack

Coverage scope, QA state, source URL, and update timestamps travel with every pack. The data trail you need for citations and reproducibility is part of the release, not an afterthought.
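The four facts named above can be pictured as a small metadata record attached to each release. The field names and values here are assumptions for illustration, not the published metadata format:

```python
# Illustrative shape of per-pack release metadata (hypothetical fields)
pack_meta = {
    "pack": "disasters/earthquakes",
    "coverage": "global, 2150 BC to present",   # coverage scope
    "qa_state": "passed",                       # QA state at release
    "source_url": "https://example.org/source-catalog",
    "updated_at": "2025-01-01T00:00:00Z",       # update timestamp
    "version": "1.0.0",
}

# A citation line can be assembled directly from the release metadata
citation = (
    f"{pack_meta['pack']} v{pack_meta['version']}, "
    f"retrieved from {pack_meta['source_url']} ({pack_meta['updated_at']})"
)
```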

Longitudinal data at millennium scale

1M+ earthquake events back to 2150 BC. 13,000+ storms since 1842. 45,000+ landslide events since 1760. Longitudinal depth for serious research.

The maintained layer

The data preparation is done

The most expensive part of geographic research is usually not the analysis. It is collecting datasets from separate portals, reconciling different geographic identifiers, normalizing conflicting schemas, and QA-testing joins before any actual analysis can begin.

DaedalMap's maintained packs represent that work already completed. Disasters, demographics, economics, climate, and risk are all normalized to a shared geographic schema, QA-gated before release, versioned, and documented. Cross-domain queries do not require a custom join.

  • Disasters: earthquakes (1M+ events, 2150 BC to present), tropical storms (13,000+, 1842 to present), tsunamis, volcanoes, wildfires, tornadoes, floods, landslides
  • Demographics and development: UN SDG indicators for 200+ countries, WHO health data for 198 countries, Eurostat for 37 European countries
  • Economics: IMF balance of payments for 195 countries, OWID CO2 data for 217 countries (1750 to 2024)
  • Risk and vulnerability: FEMA National Risk Index at county and tract scale, social vulnerability and community resilience metrics

Collaboration

Bring your own data into the same system

The pack schema is public. If you have your own datasets (survey results, field measurements, custom raster aggregations, institutional data), they normalize to the same loc_id schema and work alongside the maintained packs in the same Research workspace.
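Normalizing your own data typically amounts to mapping a local identifier onto the shared key. A sketch under assumptions: the loc_id convention shown (a prefixed county FIPS code) is invented for illustration; the published pack schema guide defines the real format:

```python
import pandas as pd

# Your field measurements, keyed by a local identifier (county FIPS)
field = pd.DataFrame({
    "county_fips": ["51059", "51013"],
    "lst_mean_c": [31.2, 29.8],
})

# Map the local key onto a loc_id column (hypothetical format)
field["loc_id"] = "US-" + field["county_fips"]
normalized = field[["loc_id", "lst_mean_c"]]
```

Once keyed this way, the local table joins against maintained packs exactly like any other pack extract.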

A climate researcher who brings land surface temperature rasters for a county study can cross-reference them against FEMA risk scores, NLCD impervious surface, and building footprint data in the same query session. The local data travels with the same geographic key as the maintained packs. Other researchers working with the same geography can query across both.

The result is interconnected work: your research data becomes queryable in context, not an isolated file that has to be re-explained every time someone else picks it up.

Read the pack schema guide

Example queries

What you can ask

  • How does earthquake frequency in Pacific Rim countries correlate with GDP per capita over the past 30 years?
  • Which US census tracts have the highest combined FEMA risk score and social vulnerability?
  • How did UN SDG 3 health indicators change across Southeast Asia in the decade following major tsunami events?
  • Which Fairfax County block groups have the highest impervious surface fraction and the highest observed land surface temperature?
  • What is the historical storm frequency trend for Atlantic basin hurricanes above Category 3 since 1950?
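The second question above (combined FEMA risk and social vulnerability by tract) can be sketched in a few lines against hypothetical pack extracts. The tract IDs, scores, and the product-based combination are all made up for illustration:

```python
import pandas as pd

# Hypothetical tract-level extracts from two packs
risk = pd.DataFrame({
    "loc_id": ["T001", "T002", "T003"],
    "fema_risk_score": [92.1, 55.4, 78.9],
})
svi = pd.DataFrame({
    "loc_id": ["T001", "T002", "T003"],
    "social_vulnerability": [0.81, 0.42, 0.90],
})

# Join on the shared key, combine, and rank
tracts = risk.merge(svi, on="loc_id")
tracts["combined"] = tracts["fema_risk_score"] * tracts["social_vulnerability"]
top = tracts.sort_values("combined", ascending=False)
```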

What stays public

  • Open runtime engine
  • Open schema and data model
  • Public pack documentation
  • Bring-your-own-data compatibility
  • Self-hosted deployment path

What the pack operating layer covers

  • Source discovery and converter maintenance
  • Schema normalization and loc_id alignment
  • QA-gated releases with coverage metadata
  • Freshness, versioning, and update cadence
  • Hosted access and runtime convenience

Access paths

Hosted, local, or self-hosted

The hosted app is the fastest path in. For research involving sensitive or proprietary data, the same engine runs locally with no cloud dependency. Export to CSV or Parquet for use in R, Python, Stata, or QGIS. Pack metadata and source attribution export alongside the data.
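The export-and-reload loop looks like this in Python. A minimal sketch with invented file and column names; the same file opens in R, Stata, or QGIS, and `.to_parquet` substitutes for `.to_csv` given a pyarrow install:

```python
import os
import tempfile

import pandas as pd

# A hypothetical exported extract
extract = pd.DataFrame({
    "loc_id": ["US-51059"],
    "fema_risk_score": [78.9],
})

# Write to CSV (swap in .to_parquet(...) for a Parquet export)
path = os.path.join(tempfile.mkdtemp(), "fairfax_risk.csv")
extract.to_csv(path, index=False)

# Reload for analysis; here, back into pandas
df = pd.read_csv(path)
```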