AtlasPI

Frequently Asked Questions

Is AtlasPI free?

Yes — the REST API, MCP server, and dataset are all under Apache License 2.0. Free for commercial and non-commercial use. No subscription, no API key, no registration.

Do I need an API key or to log in?

No. Every endpoint is publicly accessible without authentication:

curl https://atlaspi.cra-srl.com/v1/entities | head
# No auth headers needed.

CORS is enabled so you can call it from browser JavaScript directly.
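
The same keyless call works from any HTTP client. A minimal Python sketch (the /v1/entities path is from this FAQ; the limit query parameter is an assumption, not documented here):

```python
import json
import urllib.parse
import urllib.request

BASE_URL = "https://atlaspi.cra-srl.com"

def entities_url(limit=5):
    """Build a /v1/entities URL; the "limit" parameter is illustrative."""
    query = urllib.parse.urlencode({"limit": limit})
    return f"{BASE_URL}/v1/entities?{query}"

def fetch_entities(limit=5):
    """Plain GET with no Authorization header; no API key exists to send."""
    with urllib.request.urlopen(entities_url(limit)) as resp:
        return json.load(resp)
```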

Is this an internal company portal?

No. AtlasPI is a public open-source project. The domain cra-srl.com belongs to the project sponsor (CRA S.r.l., Italy), but the data, code, and infrastructure are Apache 2.0 — public.

Source: github.com/Soil911/AtlasPI

How is this different from Wikidata or Natural Earth?

What's included in the dataset?

What's an MCP server?

Model Context Protocol (MCP) is Anthropic's open standard for exposing tools to AI assistants like Claude. AtlasPI's MCP server provides 34 pre-built tools so Claude Desktop, Claude Code, or any MCP-compatible client can query historical data directly.

pip install atlaspi-mcp

Tools include: search_entities, snapshot_at_year, find_similar_entities, on_this_day, list_historical_periods, and 29 more.
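
As a sketch, MCP clients such as Claude Desktop register servers in a JSON config under the standard mcpServers key. The command name atlaspi-mcp is an assumption about what the pip package installs; check the project README for the exact invocation:

```json
{
  "mcpServers": {
    "atlaspi": {
      "command": "atlaspi-mcp"
    }
  }
}
```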

How accurate is the data?

Every record carries a confidence score and boundary provenance metadata.

61% of entities have confidence ≥ 0.6. Boundary provenance is transparent — boundary_source tells you whether a polygon comes from Natural Earth, aourednik historical maps, or is an approximate generated circle around the capital.
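
As a sketch of using these fields downstream, only the field names confidence and boundary_source come from this FAQ; the record shape and source labels are assumptions:

```python
def reliable(records, min_confidence=0.6):
    """Keep records at or above a confidence threshold."""
    return [r for r in records if r.get("confidence", 0.0) >= min_confidence]

def by_boundary_source(records):
    """Group record ids by where their boundary polygon came from."""
    groups = {}
    for r in records:
        groups.setdefault(r.get("boundary_source", "unknown"), []).append(r["id"])
    return groups

# Illustrative records, not real dataset entries.
sample = [
    {"id": "e1", "confidence": 0.9, "boundary_source": "natural_earth"},
    {"id": "e2", "confidence": 0.4, "boundary_source": "generated_circle"},
    {"id": "e3", "confidence": 0.7, "boundary_source": "aourednik"},
]
```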

Why does the data use native names instead of English?

Because historical accuracy matters. The primary name for the Aztec polity is Mēxihcah, the self-designation in Classical Nahuatl. "Aztec" is a 19th-century English term. Similarly, Tawantinsuyu ("four parts together") is what the Inca Empire called itself in Quechua. Using native-language primary names (ETHICS-001) gives downstream AI agents and applications the option to preserve these distinctions.

If you want English versions, use name_variants (a list of alternative names across languages and scripts).
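
A fallback helper might look like this sketch. The FAQ only says name_variants lists alternative names across languages and scripts; the per-variant {"lang": ..., "name": ...} shape is an assumption:

```python
def english_name(entity):
    """Return an English variant if one exists, else the native primary name."""
    for variant in entity.get("name_variants", []):
        if variant.get("lang") == "en":
            return variant["name"]
    return entity["name"]

# Illustrative entity, shape assumed.
aztec = {
    "name": "Mēxihcah",
    "name_variants": [{"lang": "en", "name": "Aztec Empire"}],
}
```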

Can I contribute entities, events, or corrections?

Yes — open a pull request on GitHub. Entities live in data/entities/batch_*.json, events in data/events/, periods in data/periods/, chains in data/chains/.

Pipeline requirements:

Is there a rate limit?

There's a soft rate limit (60 requests/minute per IP) to prevent abuse, but no hard cap for reasonable use. If you need higher volume for a research project, contact us via GitHub issues.
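
To stay under the soft limit, a client can pace itself to one request per second (60/minute). A minimal sketch, not an official client:

```python
import time

# 60 requests per minute => at least 1 second between requests.
MIN_INTERVAL = 60.0 / 60

class Pacer:
    """Sleep just enough before each call to stay at or below the limit."""

    def __init__(self, interval=MIN_INTERVAL, clock=time.monotonic, sleep=time.sleep):
        self.interval = interval
        self.clock = clock    # injectable for testing
        self.sleep = sleep
        self.last = None

    def wait(self):
        """Call before each request; blocks until the interval has elapsed."""
        now = self.clock()
        if self.last is not None:
            remaining = self.interval - (now - self.last)
            if remaining > 0:
                self.sleep(remaining)
                now = self.clock()
        self.last = now
```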

Where does the "PI" in AtlasPI come from?

"PI" stands for Programmatic Interface — AtlasPI is designed to be called programmatically by software, including AI agents. Not a Raspberry Pi, not an internal product code — just a reminder that this is an API-first historical atlas.

How do AI agents like Claude or ChatGPT use this?

Three ways:

  1. Via the REST API — any agent with HTTP tool-calling can query /v1/* endpoints directly.
  2. Via the MCP server — Claude Desktop, Claude Code, Continue.dev, and other MCP clients can install atlaspi-mcp and get 34 ready-made tools.
  3. Via the OpenAI plugin spec: /.well-known/ai-plugin.json describes the plugin for ChatGPT-compatible platforms.

The /llms.txt file is the canonical site-map for AI agents consuming this API.

Is the data biased? What about colonialism?

All history writing has biases. AtlasPI addresses this explicitly via 10 documented ETHICS principles (see /about). One example: ETHICS-001 makes native-language names the primary names rather than colonial-era English exonyms.

The AI analysis pipeline also flags geographic and temporal coverage gaps to counter cultural-dominance bias.

How do I report an error?

Open an issue on GitHub with the entity/event ID, the field in error, and ideally a source citation for the correct value.