Frequently Asked Questions
Is AtlasPI free?
Yes — the REST API, MCP server, and dataset are all under Apache License 2.0. Free for commercial and non-commercial use. No subscription, no API key, no registration.
Do I need an API key or to log in?
No. Every endpoint is publicly accessible without authentication:
curl https://atlaspi.cra-srl.com/v1/entities | head
# No auth headers needed.
CORS is enabled so you can call it from browser JavaScript directly.
Is this an internal company portal?
No. AtlasPI is a public open-source project. The domain
cra-srl.com belongs to the project sponsor (CRA S.r.l., Italy),
but the data, code, and infrastructure are Apache 2.0 — public.
Source: github.com/Soil911/AtlasPI
How is this different from Wikidata or Natural Earth?
- Wikidata: encyclopedic but unstructured for geographic queries. No consistent historical boundaries, no cross-referenced dynasty chains, requires SPARQL.
- Natural Earth / OSM: modern-only. No temporal dimension for history.
- AtlasPI: real historical boundaries (from aourednik & Natural Earth), a temporal range 4500 BCE → 2024 CE, explicit ethical framings, and AI-agent-optimized endpoints like /v1/snapshot/year/{year} for single-call world views.
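As a sketch of what a single-call world view looks like, the snippet below builds a snapshot URL and reads a response. The response shape shown here is an assumption for illustration, not the documented schema:

```python
import json

BASE = "https://atlaspi.cra-srl.com/v1"

def snapshot_url(year: int) -> str:
    """Build the /v1/snapshot/year/{year} URL for a given year."""
    return f"{BASE}/snapshot/year/{year}"

# A hypothetical response: entities that existed in the requested year.
sample = json.loads('{"year": 1200, "entities": [{"name": "Mongol Empire"}]}')

print(snapshot_url(1200))       # https://atlaspi.cra-srl.com/v1/snapshot/year/1200
print(len(sample["entities"]))  # 1
```

No API key or auth header is needed, so a plain GET to that URL is the whole integration.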
What's included in the dataset?
- 862 historical entities — empires, kingdoms, sultanates, republics, dynasties, chiefdoms, confederations
- 490 events — battles, treaties, genocides, epidemics, revolutions, natural disasters (all with main_actor and location)
- 48 historical periods — Bronze Age, Classical Antiquity, Edo Period, Cold War, etc. (with historiographic_note documenting scholarly debates)
- 110 cities — capitals, trade hubs, religious centers, ports, academic centers
- 41 trade routes — Silk Road, Trans-Saharan, Indian Ocean, etc. (with involves_slavery flag where applicable)
- 94 dynasty chains — Chinese dynasties, Roman→Byzantine, Persian succession, Native American confederacies, etc.
- 2,400+ academic sources — Cambridge Histories, Oxford Handbooks, regional specialist works
What's an MCP server?
Model Context Protocol (MCP) is Anthropic's open standard for exposing tools to AI assistants like Claude. AtlasPI's MCP server provides 34 pre-built tools so Claude Desktop, Claude Code, or any MCP-compatible client can query historical data directly.
pip install atlaspi-mcp
Tools include: search_entities, snapshot_at_year,
find_similar_entities, on_this_day,
list_historical_periods, and 29 more.
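For Claude Desktop, registering the server typically means adding an entry to the mcpServers section of its config file. The command name below is an assumption based on the package name — check the atlaspi-mcp README for the exact invocation:

```json
{
  "mcpServers": {
    "atlaspi": {
      "command": "atlaspi-mcp"
    }
  }
}
```

After a restart, the 34 tools appear in the client's tool list automatically.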
How accurate is the data?
Every record carries:
- confidence_score (0.0-1.0)
- status (confirmed / uncertain / disputed)
- sources (list with bibliographic citations)
- ethical_notes when relevant (contested framings)
61% of entities have confidence ≥ 0.6. Boundary provenance is
transparent — boundary_source tells you whether a polygon
comes from Natural Earth, from aourednik historical maps, or is an
approximate circle generated around the entity's capital.
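These per-record quality fields make it easy to filter for only the most reliable data. A minimal sketch, assuming illustrative record shapes rather than the actual schema:

```python
# Sample records carrying the quality fields described above.
records = [
    {"name": "Tawantinsuyu", "confidence_score": 0.8, "status": "confirmed",
     "boundary_source": "aourednik"},
    {"name": "Example Chiefdom", "confidence_score": 0.4, "status": "uncertain",
     "boundary_source": "generated_circle"},
]

# Keep only confirmed records with confidence >= 0.6 and a real polygon
# (i.e. exclude approximate generated circles).
reliable = [r for r in records
            if r["status"] == "confirmed"
            and r["confidence_score"] >= 0.6
            and r["boundary_source"] != "generated_circle"]

print([r["name"] for r in reliable])  # ['Tawantinsuyu']
```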
Why does the data use native names instead of English?
Because historical accuracy matters. The primary name for the Aztec polity is Mēxihcah, the self-designation in Classical Nahuatl. "Aztec" is a 19th-century English term. Similarly, Tawantinsuyu ("four parts together") is what the Inca Empire called itself in Quechua. Using native-language primary names (ETHICS-001) gives downstream AI agents and applications the option to preserve these distinctions.
If you want English versions, use name_variants (a list
of alternative names across languages and scripts).
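In practice that means a small fallback lookup: prefer a variant in the reader's language, otherwise keep the native primary name. The variant structure below is an assumed shape for illustration:

```python
entity = {
    "name": "Mēxihcah",
    "name_variants": [
        {"language": "en", "value": "Aztec Empire"},
        {"language": "es", "value": "Imperio azteca"},
    ],
}

def display_name(entity: dict, lang: str = "en") -> str:
    """Return a variant in the requested language, else the native name."""
    for variant in entity.get("name_variants", []):
        if variant["language"] == lang:
            return variant["value"]
    return entity["name"]

print(display_name(entity))        # Aztec Empire
print(display_name(entity, "fr"))  # Mēxihcah (falls back to the native name)
```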
Can I contribute entities, events, or corrections?
Yes — open a pull request on
GitHub. Entities live in
data/entities/batch_*.json, events in data/events/,
periods in data/periods/, chains in data/chains/.
Pipeline requirements:
- Primary name in native language/script
- At least one academic source citation
- Explicit status (confirmed / uncertain / disputed)
- Ethical notes for contested or colonial framings
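Before opening a pull request, you can sanity-check a draft entity against these requirements. A sketch only — the real CI checks may differ:

```python
VALID_STATUSES = {"confirmed", "uncertain", "disputed"}

def check_entity(entity: dict) -> list[str]:
    """Return a list of problems; an empty list means the draft passes."""
    problems = []
    if not entity.get("name"):
        problems.append("missing primary (native-language) name")
    if not entity.get("sources"):
        problems.append("needs at least one academic source")
    if entity.get("status") not in VALID_STATUSES:
        problems.append("status must be confirmed/uncertain/disputed")
    return problems

draft = {"name": "Tawantinsuyu", "status": "confirmed", "sources": []}
print(check_entity(draft))  # ['needs at least one academic source']
```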
Is there a rate limit?
There's a soft rate limit (60 requests/minute per IP) to prevent abuse, but no hard cap for reasonable use. If you need higher volume for a research project, contact us via GitHub issues.
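If you are scripting against the API, simple client-side pacing keeps you under the 60 requests/minute soft limit. One way to do it (a sketch, not part of any AtlasPI client library) is a sliding window of request timestamps:

```python
from collections import deque

class Pacer:
    """Sliding-window pacer: at most `limit` requests per `window` seconds."""

    def __init__(self, limit: int = 60, window: float = 60.0):
        self.limit, self.window = limit, window
        self.sent: deque[float] = deque()

    def wait_time(self, now: float) -> float:
        """Seconds to wait before the next request is allowed."""
        # Drop timestamps that have aged out of the window.
        while self.sent and now - self.sent[0] >= self.window:
            self.sent.popleft()
        if len(self.sent) < self.limit:
            return 0.0
        return self.window - (now - self.sent[0])

    def record(self, now: float) -> None:
        self.sent.append(now)

p = Pacer(limit=2, window=60.0)
p.record(0.0); p.record(1.0)
print(p.wait_time(2.0))   # 58.0, the third request must wait
print(p.wait_time(61.0))  # 0.0, the oldest request left the window
```

Call wait_time() before each request and sleep for the returned duration, then record() the send time.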
Where does the "PI" in AtlasPI come from?
"PI" stands for Programmatic Interface — AtlasPI is designed to be called programmatically by software, including AI agents. Not a Raspberry Pi, not an internal product code — just a reminder that this is an API-first historical atlas.
How do AI agents like Claude or ChatGPT use this?
Three ways:
- Via the REST API — any agent with HTTP tool-calling can query /v1/* endpoints directly.
- Via the MCP server — Claude Desktop, Claude Code, Continue.dev, and other MCP clients can install atlaspi-mcp and get 34 ready-made tools.
- Via the OpenAI plugin spec — /.well-known/ai-plugin.json describes the plugin for ChatGPT-compatible platforms.
The /llms.txt file is the canonical site-map for AI agents
consuming this API.
Is the data biased? What about colonialism?
All history writing has biases. AtlasPI addresses this explicitly via 10 documented ETHICS principles (see /about). Examples:
- Conquests are labeled as CONQUEST, not "succession".
- Genocides use the academic term GENOCIDE, not euphemisms.
- Colonial renamings are documented (Constantinople/Istanbul, Königsberg/Kaliningrad).
- Trade routes involving slavery are flagged (involves_slavery=true).
- The periodization "Pre-Columbian Era" includes a historiographic_note acknowledging its Eurocentric framing.
The AI analysis pipeline also flags geographic and temporal coverage gaps to counter cultural-dominance bias.
How do I report an error?
Open an issue on GitHub with the entity/event ID, the field in error, and ideally a source citation for the correct value.