Open beta · v0.9

Give your AI a map of your codebase.

…and your team the same map — across 100+ services.

Atlas indexes services, contracts, and dependencies into a single queryable graph. Any MCP client can navigate it. Your team can too.

200+ services · 1.4M LOC · one graph
selda.start_session · ok · 42ms
workspace: selda-ecosystem
services: 214
languages: c#, ts, vue, rust
entry_points: 37
aggregates:
├ identity.sso
├ atlas.graphhub
├ forge.worker
└ +34 more
for ai-native devs

Your AI stops guessing and starts navigating.

The MCP server exposes Atlas to Claude Code, Cursor, and any MCP-compatible tool. Tools return compact, structured context instead of raw source dumps.

  • 01 · Field-level tracing across services
  • 02 · Compact output (~92% fewer tokens than a raw file read)
  • 03 · No raw source in MCP responses
selda.callers ref=field:userId profile=data depth=5 · ok · 58ms
identity.sso → User.cs:34
└─ kafka:user-events → 3 consumers
├ billing.worker (subscription mapping)
├ atlas.graphhub (tenant binding)
└ notifications.api (channel lookup)
live playground

Try it. Right here.

Four canned prompts, real MCP tool output. No auth, no setup.

>
mcp.dispatch · ok · streaming…
→ selda.callers(ref="field:User.Email", profile=impact, depth=4)
→ selda.endpoints(transport="http,kafka", filter="User")
● impact summary for field:User.Email
─────────────────────────────────────
direct callers: 24 (across 7 services)
transitive (d=3): 142
breaking surfaces:
├ identity.sso — User.cs:34, AuthController.cs:128
├ billing.api — SubscriptionMapper.cs:67
├ notifications.api — EmailResolver.cs:19
└ 4 more services
contracts affected:
! kafka:user-events (field `email` in payload)
! http:GET /api/users/{id} (response schema)
recommendation: coordinate rename across 3 bounded contexts;
introduce `UserProfile.Email` alias, deprecate in 2 releases.
for teams

Ship changes with fewer surprises.

Impact analysis, structural health, path queries — the same graph, applied to team-facing workflows.

selda.callers UserService.UpdateEmail · warn · 68ms
direct callers: 12 (4 services)
transitive (d=3): 87
! crosses 2 bounded contexts
! touches outbox `public.domain_events`
! invalidates 3 cached projections
PR review — know the blast radius before merge
selda.path POST /api/orders → public.ledger · ok · 91ms
path 1 (6 hops) order.api → order.domain
→ kafka:orders → billing.worker → ledger
path 2 (8 hops) via audit.worker
path 3 (9 hops) via fallback dunning.worker
Onboarding — walk a new hire from endpoint to table
selda.health · warn · 3 warnings
shared db: billing.api ↔ billing.worker
kafka orphans: `user-events` — 1 prod, 0 consumers
version drift: Selda.Shared.Messaging 1.2.4 … 1.5.8
Architecture review — surface drift, orphans, and cross-boundary writes
integration surface

All your integrations in one surface.

HTTP, Kafka, gRPC, SignalR, DB, Redis, RabbitMQ, background workers — one query, one view.

Currently supported: C#, PostgreSQL, MSSQL, Redis, Elasticsearch. TypeScript, Vue, Rust in beta.

selda.endpoints service=order.api · ok · 34ms
http IN 6 endpoints
POST /api/orders → Create
GET /api/orders/{id} → GetById
PUT /api/orders/{id} → Update
POST /api/orders/{id}/cancel → Cancel
GET /api/orders?status=... → List
POST /api/orders/{id}/refund → Refund
kafka OUT 2 topics
order-placed (produced)
order-paid (produced)
kafka IN 1 topic
payment-completed (consumed)
how it works

How it works

Sync repositories

Use the TUI client or a CI integration to push source to Atlas

Automatic analysis

Atlas builds the cross-source graph and detects structural patterns

Query from anywhere

MCP from Claude Code/Cursor, web UI, or HTTP API

pricing

Pricing

SELECT plan, quota, storage, limits
FROM selda.plans
WHERE billing = 'yearly'
AND audience = 'individual';
Free
Try before you commit
Free
free forever
×0.2
limits · relative to Pro
20 MB of source
  • Public graphs only
  • MCP server (anon)
  • Community support
Start free
recommended
Pro
For working devs
$15/mo
$180/year · billed yearly
×1
baseline limits
200 MB of source
  • Private repos
  • MCP server (personal key)
  • Full property tracing
  • Health reports
  • Email support
Start Pro
Max
For heavy analysis
$39/mo
$468/year · billed yearly
×10
limits · vs Pro
2 GB of source
  • Everything in Pro
  • Priority analyzer queue
  • Extended retention
  • Priority email support
Start Max
changelog

Latest releases

* a4f2c1e · 2026-04-18 · feat(mcp): add path tool — top-3 shortest paths
Adds a new `path` MCP tool that returns the K shortest structural paths between any two graph nodes. Default K=3. Respects bounded-context edges.
  • New tool: path(from, to, k)
  • Token-efficient compact output
  • Works across HTTP, kafka, gRPC, DB edges
* 9b1e88a · 2026-04-10 · feat(sync): chunked snapshots per commit
Large repositories now upload as chunked snapshots, one per commit range.
  • Sync throughput up 4.2×
  • Resumable on failure
  • Per-commit delta graph
* 3e7d4c2 · 2026-04-03 · fix(analyzer): queue backpressure
Fixes deadlock in analyzer queue under sustained high volume (>2k files/min).
blog

From the blog

The Atlas blog now lives on selda.tech — alongside notes from the wider team.

faq

FAQ

How do plan limits work?
Every plan has a daily analyzer quota (resets 00:00 UTC) and a weekly safety net. We don't show raw numbers — plans are expressed relative to Pro (×0.2 Free, ×1 Pro, ×10 Max). Storage is the max compressed source size Atlas will keep in the graph.
What happens if I hit the limit?
Two options, configurable per workspace. Hard cap — analyzer pauses until the next daily reset (default for Free / Pro). Soft overage — metered $0.004 per analyzer-second past the quota (opt-in for Pro / Max / Team, billed monthly). Enterprise has no quotas on self-hosted deployments.
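For the soft-overage path, the billing is simple metering. A minimal sketch in Python — the $0.004 per analyzer-second rate comes from the answer above, but the quota values are hypothetical, since plans don't publish raw numbers:

```python
RATE_PER_SECOND = 0.004  # soft-overage rate from the FAQ answer above

def overage_cost(used_seconds, quota_seconds, rate=RATE_PER_SECOND):
    """Metered cost for analyzer time past the daily quota.
    Only the seconds beyond the quota are billed."""
    over = max(0, used_seconds - quota_seconds)
    return round(over * rate, 2)

# 1,000 analyzer-seconds past a (hypothetical) 5,000-second quota
print(overage_cost(used_seconds=6_000, quota_seconds=5_000))  # → 4.0 ($4.00)
```

Under the hard-cap option there is no such charge — the analyzer simply pauses until the next daily reset.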
What does Atlas analyze?
Source code across languages, plus infra manifests (kafka topics, HTTP routes, DB schemas, gRPC protos, Redis keys) to build a unified cross-service graph.
How does Atlas connect to my repositories?
Via the Atlas TUI client or CI integration — source is uploaded to Atlas, analyzed in our secure workers, and stored encrypted (AES-256 at rest). If you need source to never leave your infra, run Atlas self-hosted on the Enterprise plan.
What is Property Tracing?
Following a single field (like `User.Email`) across all services, contracts, and persistence layers that touch it.
What is Impact Analysis?
A precomputed transitive closure over call and data edges — it tells you exactly what breaks if you change X.
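In graph terms, that closure is a bounded walk forward along the edges. A minimal sketch of the idea in Python, using a toy graph with made-up node names — not real Atlas identifiers or APIs:

```python
from collections import deque

# Toy dependency graph: an edge A -> B means "B calls or reads A",
# so walking forward from a node yields everything affected by changing it.
edges = {
    "User.Email": ["identity.sso", "billing.api"],
    "identity.sso": ["notifications.api"],
    "billing.api": ["billing.worker"],
    "notifications.api": [],
    "billing.worker": [],
}

def impact(graph, start, max_depth):
    """Depth-bounded BFS over call/data edges - the shape of a
    transitive impact query like `selda.callers depth=N`."""
    seen = {start}
    affected = []
    queue = deque([(start, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_depth:
            continue  # don't expand past the requested depth
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                affected.append(nxt)
                queue.append((nxt, depth + 1))
    return affected

print(impact(edges, "User.Email", 2))
# → ['identity.sso', 'billing.api', 'notifications.api', 'billing.worker']
```

The "precomputed" part is the difference in practice: Atlas materializes these reachability sets ahead of time, so the query returns in milliseconds instead of walking the graph on demand.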
Can Atlas detect architectural problems automatically?
Yes — shared databases, kafka orphans, version drift, cycle violations in bounded contexts, and more.
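Checks like these fall out of simple queries over the graph. A minimal sketch of two of them — kafka orphans and shared databases — on a toy inventory; all service, topic, and database names are illustrative, not real Atlas output:

```python
# Toy integration inventory (hypothetical names)
produces = {"identity.sso": ["user-events"], "order.api": ["order-placed"]}
consumes = {"billing.worker": ["order-placed"]}
db_writers = {"billing_db": ["billing.api", "billing.worker"]}

def kafka_orphans(produces, consumes):
    """Topics with at least one producer but no consumer."""
    consumed = {t for topics in consumes.values() for t in topics}
    return sorted(t for topics in produces.values() for t in topics
                  if t not in consumed)

def shared_databases(db_writers):
    """Databases written to by more than one service."""
    return sorted(db for db, svcs in db_writers.items() if len(svcs) > 1)

print(kafka_orphans(produces, consumes))  # → ['user-events']
print(shared_databases(db_writers))       # → ['billing_db']
```

This mirrors the warnings shown in the `selda.health` example above: a produced-but-unconsumed topic and two services writing to the same database.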
What is the MCP Server integration?
A Model Context Protocol server that exposes Atlas tools to Claude Code, Cursor, and any MCP-compatible client.
Is my code stored on your servers?
Yes — on our managed plans (Free / Pro / Max / Team) your source is stored encrypted alongside the graph, so Atlas can re-analyze on every push without re-uploading everything. This stored size is what the plan's storage quota counts. If you need code to never leave your network, that's what Enterprise self-hosted is for: both the graph and the source stay entirely on your infra.
What languages and technologies are supported?
C#, PostgreSQL, MSSQL, Redis, Elasticsearch today. TypeScript, Vue, Rust in beta.