AI agents are being trained to look for llms.txt files. It's the agent-native equivalent of robots.txt — a file at your domain root that tells agents how to discover and use your content.
If your product doesn't have one, agents can't find it. If your competitor's does, theirs gets discovered first.
Proposed by Jeremy Howard (Answer.AI / fast.ai) and specified at llmstxt.org. Two files:
| File | Purpose |
|---|---|
| /llms.txt | Concise index — what this site is, what's available, where to find it |
| /llms-full.txt | Complete documentation in one file — agents ingest this |
The concise version is the discovery layer. The full version is the consumption layer. Together they replace "agent tries to browse your docs site and gets lost."
llms.txt is minimal. It just tells agents what's available and where:
```markdown
# Stripe API
> Payment infrastructure for the internet.
> API: https://api.stripe.com

## API Reference
- [Authentication](https://docs.stripe.com/api/authentication.md)
- [Charges](https://docs.stripe.com/api/charges.md)
- [Webhooks](https://docs.stripe.com/api/webhooks.md)

## Guides
- [Quickstart](https://docs.stripe.com/quickstart.md)
- [Testing](https://docs.stripe.com/testing.md)

## SDKs
- [Python](https://docs.stripe.com/sdks/python.md)
- [Node.js](https://docs.stripe.com/sdks/node.md)
```
That's it. No styling. No navigation. Just links to Markdown documents. Agents parse this and know exactly what's on your site.
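To see why that structure matters, here's a minimal sketch of what an agent does with the file: fetch /llms.txt and pull out each section heading and its Markdown links. The parsing is deliberately naive, and the domain in the usage note is a placeholder.

```python
import re
import urllib.request

def discover(domain: str) -> dict[str, list[tuple[str, str]]]:
    """Fetch /llms.txt and map each section heading to its (title, url) links."""
    with urllib.request.urlopen(f"https://{domain}/llms.txt") as resp:
        text = resp.read().decode("utf-8")

    sections: dict[str, list[tuple[str, str]]] = {}
    current = "(top)"
    for line in text.splitlines():
        if line.startswith("## "):
            current = line[3:].strip()
        # Markdown links look like [title](url)
        for title, url in re.findall(r"\[([^\]]+)\]\(([^)]+)\)", line):
            sections.setdefault(current, []).append((title, url))
    return sections

# Hypothetical usage; "example.com" is a placeholder domain:
# discover("example.com")
# -> {"API Reference": [("Authentication", "https://.../authentication.md"), ...]}
```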
llms-full.txt is what agents actually read: all your documentation in one file, with a table of contents at the top:
```markdown
# Stripe API — Full Reference

## Quickstart
[Full quickstart content...]

## Authentication
API key format, scopes, rotation...

## Endpoints

### Charges
POST /v1/charges — request/response examples...

### Customers
[Full customer API docs...]

## Error Handling
Error codes, retry patterns...

## SDK Examples
Python, Node, Ruby...
```
Critical rule: The full file must be under ~50K characters. Agents have context limits. If it's too long, they'll truncate or ignore it.
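A quick way to enforce that budget is to check the assembled file before publishing it. A minimal sketch, assuming your source pages live as Markdown files in a docs/ directory; both the directory name and the 50K figure are this post's conventions, not part of the spec:

```python
from pathlib import Path

CHAR_BUDGET = 50_000  # the ~50K ceiling suggested above, not a spec requirement

def build_llms_full(docs_dir: str = "docs", out: str = "llms-full.txt") -> None:
    """Concatenate every Markdown page into one file, table of contents first."""
    pages = sorted(Path(docs_dir).glob("*.md"))
    toc = "\n".join(f"- {page.stem}" for page in pages)
    body = "\n\n".join(page.read_text(encoding="utf-8") for page in pages)
    full = f"# Full Reference\n\n## Contents\n{toc}\n\n{body}\n"

    if len(full) > CHAR_BUDGET:
        # Over budget means agents truncate or skip the file, so fail loudly.
        raise SystemExit(f"llms-full.txt is {len(full):,} chars; trim below {CHAR_BUDGET:,}")
    Path(out).write_text(full, encoding="utf-8")

if __name__ == "__main__":
    build_llms_full()
```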
Two paths:

1. Static: drop llms.txt and llms-full.txt in your site root. They're plain Markdown files. Serve them with Content-Type: text/markdown.
2. Dynamic: generate both files from your API docs programmatically, and return Markdown to any request that sends an Accept: text/markdown header so agents can ask for the format they need (sketched below).
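Here's a minimal sketch of the dynamic path using Flask. The framework choice and the DOCS dictionary are assumptions for illustration; the post doesn't prescribe either.

```python
from flask import Flask, Response, request

app = Flask(__name__)

# Stand-ins for programmatically generated Markdown (assumptions for this sketch).
DOCS = {
    "llms.txt": "# Example API\n> Placeholder index.\n",
    "llms-full.txt": "# Example API Full Reference\n\nPlaceholder docs.\n",
}

@app.route("/llms.txt")
@app.route("/llms-full.txt")
def llms_files():
    # Serve both files from the site root with the Markdown content type.
    return Response(DOCS[request.path.lstrip("/")], mimetype="text/markdown")

@app.route("/docs/<page>")
def docs_page(page: str):
    # Content negotiation: agents sending Accept: text/markdown get raw
    # Markdown; browsers fall through to the normal HTML rendering.
    markdown = f"# {page}\n\nPlaceholder page body.\n"  # hypothetical doc lookup
    if "text/markdown" in request.headers.get("Accept", ""):
        return Response(markdown, mimetype="text/markdown")
    return f"<h1>{page}</h1><p>Placeholder page body.</p>"
```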
Google, OpenAI, Anthropic, and others are training their agents to recognize llms.txt as a discovery mechanism. It's the robots.txt moment for AI agents — early adopters get indexed first.
Three things happen when you add llms.txt: agents can find your content instead of missing it, they can ingest your docs in one pass instead of getting lost browsing your site, and you get indexed ahead of competitors who haven't added one.
This is part of a larger idea: what if there were a certification for being agent-compatible?
The first tier is free and self-serve — just add the files. The second tier is verified by our infrastructure.
But that's a bigger conversation. For now: write your llms.txt. It takes 20 minutes. Your future AI agent users will thank you.
I built the Works With Agents infrastructure — FactBase, Skill Registry, Pitfall Registry — with llms.txt as the primary discovery mechanism. Every domain (workswithagents.com, .dev, .io) serves both llms.txt and llms-full.txt. If you're building agent-facing tools, do the same.
Originally published on dev.to. More posts at workswithagents.dev/blog.