What is agent.json? — The Standard for AI Agent API Discovery (2026)

agent.json is a machine-readable specification that describes APIs for AI agents. Learn the format, how it works, why every API needs one, and how to implement it.

Definition

agent.json is a machine-readable JSON file that describes an API's capabilities, actions, inputs, and outputs in a format AI agents can parse and execute. It is served at a well-known URL — typically /.well-known/agent.json — and acts as the primary entry point for any AI agent attempting to discover, understand, and use your API programmatically.

Where traditional API documentation is written for human developers — with prose explanations, code samples, and getting-started guides — agent.json is written exclusively for machines. Every field is structured, typed, and designed to be consumed by an AI model or agent framework without human intervention.

An agent.json file typically contains: the API's name and description, a base URL, authentication requirements, and an array of actions. Each action defines a specific operation the API can perform, including its HTTP method, path, typed input and output schemas, a natural-language description, and — critically — reasoning documentation that tells the agent when and why to use that action.

Why agent.json exists

The core problem agent.json solves is API invisibility. As of 2026, there are millions of APIs available on the internet, but AI agents have no reliable way to find them, understand what they do, or figure out how to call them correctly. Without structured metadata, APIs are effectively invisible to autonomous agents.

Human developers solve this problem by reading documentation, following tutorials, and writing integration code. But AI agents cannot browse documentation sites the way humans do. They need a single, parseable file that answers three questions: What can this API do?, How do I call each action?, and When should I use each action?

Before agent.json, every agent framework had its own approach to tool definitions. OpenAI used function calling schemas. LangChain had its own tool format. Anthropic introduced MCP. This fragmentation meant API providers had to create multiple descriptions of their API for different ecosystems. agent.json provides a single, framework-agnostic specification that any agent can consume.

Think of it as what robots.txt did for web crawlers or sitemap.xml did for search engines — agent.json creates a universal discovery point for AI agents. For a deep dive into why this matters, see API Discovery for AI Agents.

The agent.json format

An agent.json file follows a consistent structure. At the top level, it includes metadata about the API (name, description, version, base URL, authentication). The core of the file is the actions array, which contains one entry for each operation the agent can perform.

Here is a complete, realistic example for a project management API:

.well-known/agent.json
{
  "schema": "https://useelba.com/schemas/agent.json/v1",
  "name": "ProjectBoard API",
  "description": "Project management API with tasks, boards, and team collaboration.",
  "version": "2.1.0",
  "baseUrl": "https://api.projectboard.io/v2",
  "auth": {
    "type": "bearer",
    "header": "Authorization"
  },
  "actions": [
    {
      "name": "createTask",
      "description": "Create a new task in a project board.",
      "reasoning": "Use when the user wants to add a new task or work item. Requires a valid boardId. Returns the created task with its assigned ID and status.",
      "method": "POST",
      "path": "/tasks",
      "input": {
        "type": "object",
        "required": ["boardId", "title"],
        "properties": {
          "boardId": {
            "type": "string",
            "description": "ID of the board to add the task to"
          },
          "title": {
            "type": "string",
            "description": "Task title (max 200 characters)"
          },
          "assignee": {
            "type": "string",
            "description": "User ID of the assignee (optional)"
          },
          "priority": {
            "type": "string",
            "enum": ["low", "medium", "high", "urgent"]
          }
        }
      },
      "output": {
        "type": "object",
        "properties": {
          "id": { "type": "string" },
          "title": { "type": "string" },
          "status": { "type": "string" },
          "createdAt": { "type": "string", "format": "date-time" }
        }
      },
      "example_prompts": [
        "Create a task called 'Fix login bug' on the Engineering board",
        "Add a high-priority task for the Q3 release",
        "Make a new task and assign it to Sarah"
      ]
    },
    {
      "name": "listTasks",
      "description": "List all tasks in a board with optional filtering.",
      "reasoning": "Use to retrieve existing tasks. Supports filtering by status and pagination via limit/offset. Use boardId to scope results to a specific board.",
      "method": "GET",
      "path": "/boards/{boardId}/tasks",
      "input": {
        "type": "object",
        "required": ["boardId"],
        "properties": {
          "boardId": { "type": "string" },
          "status": {
            "type": "string",
            "enum": ["open", "in_progress", "done"]
          },
          "limit": { "type": "number", "default": 20 },
          "offset": { "type": "number", "default": 0 }
        }
      },
      "output": {
        "type": "object",
        "properties": {
          "tasks": { "type": "array" },
          "total": { "type": "number" },
          "hasMore": { "type": "boolean" }
        }
      },
      "example_prompts": [
        "Show me all open tasks on the Engineering board",
        "What tasks are in progress?",
        "List the last 10 completed tasks"
      ]
    },
    {
      "name": "updateTask",
      "description": "Update an existing task's properties.",
      "reasoning": "Use when the user wants to change a task's title, status, assignee, or priority. Requires the task ID. Only include fields that are changing.",
      "method": "PATCH",
      "path": "/tasks/{taskId}",
      "input": {
        "type": "object",
        "required": ["taskId"],
        "properties": {
          "taskId": { "type": "string" },
          "title": { "type": "string" },
          "status": {
            "type": "string",
            "enum": ["open", "in_progress", "done"]
          },
          "assignee": { "type": "string" },
          "priority": {
            "type": "string",
            "enum": ["low", "medium", "high", "urgent"]
          }
        }
      },
      "output": {
        "type": "object",
        "properties": {
          "id": { "type": "string" },
          "title": { "type": "string" },
          "status": { "type": "string" },
          "updatedAt": { "type": "string", "format": "date-time" }
        }
      },
      "example_prompts": [
        "Mark task-123 as done",
        "Change the priority of the login bug to urgent",
        "Reassign task-456 to Mike"
      ]
    }
  ]
}

Key fields to notice in this example:

  • reasoning — tells the agent when to use each action and provides contextual hints. This is the field that distinguishes agent.json from traditional API specs.
  • example_prompts — natural-language examples of user requests that should trigger this action. Helps the agent match user intent to the correct API call.
  • typed input/output schemas — full JSON Schema definitions that let the agent construct valid requests and parse responses without guesswork.
  • auth block — standardized authentication description so the agent knows how to authenticate before making any calls.
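The action shape above maps cleanly onto the function-calling tool formats most agent frameworks expect. As a hedged sketch (the agent.json fields follow the example spec above; the `{"name", "description", "parameters"}` target shape is a common convention, not part of any one vendor's API):

```python
# Sketch: adapt an agent.json action into a generic function-calling
# style tool definition. The action dict mirrors the createTask example
# above; the output shape follows the widely used
# {"name", "description", "parameters"} convention.

def action_to_tool(action: dict) -> dict:
    """Map one agent.json action onto a generic tool definition."""
    return {
        "name": action["name"],
        # Fold the reasoning hints into the description so the model
        # sees when to use the action, not just what it does.
        "description": f'{action["description"]} {action.get("reasoning", "")}'.strip(),
        "parameters": action["input"],  # already a JSON Schema object
    }

create_task = {
    "name": "createTask",
    "description": "Create a new task in a project board.",
    "reasoning": "Use when the user wants to add a new task or work item.",
    "input": {
        "type": "object",
        "required": ["boardId", "title"],
        "properties": {"boardId": {"type": "string"}, "title": {"type": "string"}},
    },
}

tool = action_to_tool(create_task)
print(tool["name"])  # createTask
```

Because the input schema is already JSON Schema, it passes through unchanged; only the reasoning field needs folding into the description.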

How AI agents use agent.json

The agent.json discovery flow follows a predictable sequence that mirrors how agents reason about tool use:

1. Discovery

The agent finds the API, either by querying a registry (like the Elba Agent Registry), receiving a URL from the user, or crawling well-known paths on a domain. It fetches /.well-known/agent.json.

2. Parsing

The agent parses the JSON structure, extracting the list of available actions, their input requirements, and reasoning documentation. This information is loaded into the agent's tool context.

3. Action selection

When the agent receives a user request, it evaluates the available actions using the description, reasoning, and example_prompts fields. The agent selects the action that best matches the user's intent.
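In practice the selection step is done by the LLM itself. To make the matching concrete anyway, here is a deliberately naive stand-in that scores word overlap between the user request and each action's description, reasoning, and example_prompts (action shapes follow the example spec above):

```python
# Sketch: naive action selection. Real agents hand this decision to an
# LLM; this stand-in scores word overlap between the user request and
# each action's description + reasoning + example_prompts, just to show
# which fields the selection step draws on.

def select_action(request: str, actions: list[dict]) -> dict:
    def score(action: dict) -> int:
        corpus = " ".join(
            [action["description"], action.get("reasoning", "")]
            + action.get("example_prompts", [])
        ).lower()
        words = set(corpus.split())
        return sum(1 for w in request.lower().split() if w in words)
    return max(actions, key=score)

actions = [
    {"name": "createTask", "description": "Create a new task in a project board.",
     "example_prompts": ["Create a task called 'Fix login bug'"]},
    {"name": "listTasks", "description": "List all tasks in a board.",
     "example_prompts": ["Show me all open tasks"]},
]

print(select_action("show me the open tasks on Engineering", actions)["name"])
# listTasks
```

The point is not the scoring function but the inputs: description, reasoning, and example_prompts are exactly what the agent weighs when mapping intent to an action.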

4. Request construction

Using the typed input schema, the agent constructs a valid request with the correct parameters, types, and required fields. The schema eliminates the guesswork that causes API call hallucinations.
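A pre-flight check against the input schema is cheap insurance. The sketch below enforces only `required`, primitive `type`, and `enum` — a real implementation would use a full JSON Schema validator — but even this catches the most common hallucinated-parameter errors before a request leaves the agent:

```python
# Sketch: minimal pre-flight validation of constructed parameters
# against an action's input schema. Only `required`, primitive `type`,
# and `enum` are checked; production code would use a complete
# JSON Schema validator.

TYPES = {"string": str, "number": (int, float), "integer": int,
         "boolean": bool, "object": dict, "array": list}

def check_input(schema: dict, params: dict) -> list[str]:
    errors = []
    for field in schema.get("required", []):
        if field not in params:
            errors.append(f"missing required field: {field}")
    for field, value in params.items():
        spec = schema.get("properties", {}).get(field, {})
        expected = TYPES.get(spec.get("type"))
        if expected and not isinstance(value, expected):
            errors.append(f"{field}: expected {spec['type']}")
        if "enum" in spec and value not in spec["enum"]:
            errors.append(f"{field}: must be one of {spec['enum']}")
    return errors

schema = {
    "type": "object",
    "required": ["boardId", "title"],
    "properties": {
        "boardId": {"type": "string"},
        "title": {"type": "string"},
        "priority": {"type": "string", "enum": ["low", "medium", "high", "urgent"]},
    },
}

print(check_input(schema, {"boardId": "b1", "priority": "urgent"}))
# ['missing required field: title']
```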

5. Execution and response parsing

The agent makes the HTTP request using the base URL, path, method, and auth from the spec. The output schema tells the agent what to expect in the response and how to interpret the results for the user.
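Putting the execution step together: given a parsed spec, the agent resolves the action's path template, splits parameters between the URL and the body, and attaches the auth header. A minimal sketch (fetching is stubbed out so the example stays self-contained; swap in urllib.request or an HTTP client to pull a live /.well-known/agent.json — field names follow the example spec above):

```python
# Sketch: build an executable HTTP request from a parsed agent.json
# spec. Path placeholders like {boardId} are substituted from params;
# leftover params go to the query string for GET/DELETE or the JSON
# body otherwise. Auth follows the spec's bearer-token description.
import json
from urllib.parse import urlencode

def build_request(spec: dict, action_name: str, params: dict, token: str):
    action = next(a for a in spec["actions"] if a["name"] == action_name)
    path, remaining = action["path"], dict(params)
    for key in list(remaining):
        if "{" + key + "}" in path:
            path = path.replace("{" + key + "}", str(remaining.pop(key)))
    url = spec["baseUrl"] + path
    headers = {spec["auth"]["header"]: f"Bearer {token}"}
    if action["method"] in ("GET", "DELETE"):
        if remaining:
            url += "?" + urlencode(remaining)
        return action["method"], url, headers, None
    headers["Content-Type"] = "application/json"
    return action["method"], url, headers, json.dumps(remaining)

spec = {
    "baseUrl": "https://api.projectboard.io/v2",
    "auth": {"type": "bearer", "header": "Authorization"},
    "actions": [
        {"name": "listTasks", "method": "GET", "path": "/boards/{boardId}/tasks"},
        {"name": "createTask", "method": "POST", "path": "/tasks"},
    ],
}

method, url, headers, body = build_request(
    spec, "listTasks", {"boardId": "b-42", "status": "open"}, "tok")
print(method, url)
# GET https://api.projectboard.io/v2/boards/b-42/tasks?status=open
```

Everything the function needs — base URL, path, method, auth header name — comes from the spec, which is the point: the agent writes no integration code.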

agent.json vs OpenAPI/Swagger

OpenAPI (formerly Swagger) has been the industry standard for describing REST APIs since 2015. It is an excellent format for generating client SDKs, API reference documentation, and testing tools. But it was designed for human developers, not AI agents.

The fundamental difference is orientation. OpenAPI is resource-oriented: it organizes everything around URL paths, HTTP methods, request bodies, and response codes. This mirrors how developers think about REST — “I need to POST to /tasks with this JSON body.”

agent.json is action-oriented: it organizes everything around what an agent can do. Instead of describing endpoints, it describes capabilities — “createTask: create a new task, use when the user needs to add work items.” This maps directly to how AI agents reason about tool use: they think in terms of actions and goals, not HTTP verbs and URL paths.

Aspect          | OpenAPI              | agent.json
----------------|----------------------|---------------------------
Orientation     | Resource/path-based  | Action/capability-based
Audience        | Human developers     | AI agents
Reasoning docs  | No                   | Yes (reasoning field)
Example prompts | No                   | Yes
Discovery       | No standard location | /.well-known/agent.json
Primary use     | SDK generation, docs | Agent tool use, discovery

The two formats are complementary. OpenAPI remains essential for developer tooling, while agent.json serves the emerging AI agent ecosystem. For more on this paradigm shift, read Structured Actions vs REST Endpoints.
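Because the formats overlap, much of an agent.json file can be derived mechanically from an OpenAPI spec. A hedged sketch of that mapping for a single operation (field names on the OpenAPI side follow OpenAPI 3.x conventions; the reasoning field cannot be derived automatically and is left for a human, or an LLM, to fill in):

```python
# Sketch: derive an agent.json action from one OpenAPI operation.
# Reads operationId, summary, and the application/json request body
# schema; `reasoning` is left as a TODO because OpenAPI has no
# equivalent field.

def operation_to_action(path: str, method: str, op: dict) -> dict:
    body_schema = (
        op.get("requestBody", {})
        .get("content", {})
        .get("application/json", {})
        .get("schema", {"type": "object", "properties": {}})
    )
    return {
        "name": op["operationId"],
        "description": op.get("summary", ""),
        "reasoning": "TODO: describe when an agent should use this action",
        "method": method.upper(),
        "path": path,
        "input": body_schema,
    }

op = {
    "operationId": "createTask",
    "summary": "Create a new task in a project board.",
    "requestBody": {"content": {"application/json": {"schema": {
        "type": "object", "required": ["boardId", "title"],
        "properties": {"boardId": {"type": "string"}, "title": {"type": "string"}},
    }}}},
}

action = operation_to_action("/tasks", "post", op)
print(action["name"], action["method"])  # createTask POST
```

The TODO is the gap between the two formats in miniature: schemas transfer, but the when-to-use guidance that agents rely on has to be written.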

agent.json vs MCP

MCP (Model Context Protocol) is a standard created by Anthropic for connecting AI models to external tools and data sources. It defines how an agent communicates with a tool server at runtime — the protocol for sending requests and receiving responses.

agent.json and MCP are complementary, not competing. agent.json is the specification — it describes what your API can do, how to authenticate, and when each action should be used. MCP is the protocol — it defines the runtime communication layer between an agent and your API.

In practice, an agent might discover your API through agent.json, then connect to it via an MCP server. Elba generates both formats from a single source of truth, so your API is discoverable (agent.json) and connectable (MCP) simultaneously.

How to implement agent.json

Implementing agent.json for your API involves four steps:

Step 1: Define your actions

Map your API endpoints to actions. Think about what a user would ask an agent to do, not what HTTP methods your API exposes. Group related endpoints into logical actions. For example, “POST /messages” and “POST /messages/bulk” might both map to a single “sendMessage” action with different input options.
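To make that grouping concrete, here is a sketch of a single agent-facing action covering the two hypothetical messaging endpoints from the paragraph above, with a thin adapter that routes to the right endpoint:

```python
# Sketch: one agent-facing "sendMessage" action in front of the two
# hypothetical endpoints mentioned above. The agent always calls
# sendMessage; a thin adapter picks /messages or /messages/bulk based
# on the recipient count.

send_message = {
    "name": "sendMessage",
    "description": "Send a message to one or more recipients.",
    "reasoning": "Use for any outbound message. Pass one recipient or many; "
                 "batching is handled for you.",
    "method": "POST",
    "path": "/messages",
    "input": {
        "type": "object",
        "required": ["recipients", "body"],
        "properties": {
            "recipients": {"type": "array", "items": {"type": "string"}},
            "body": {"type": "string"},
        },
    },
}

def route(params: dict) -> str:
    # Adapter logic: use the bulk endpoint only for multiple recipients.
    return "/messages/bulk" if len(params["recipients"]) > 1 else "/messages"

print(route({"recipients": ["u1", "u2"], "body": "hi"}))  # /messages/bulk
```

The agent never sees the bulk/single distinction; it reasons about one capability, which is exactly the action-oriented framing agent.json encourages.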

Step 2: Write reasoning documentation

For each action, write a clear reasoning field that explains when to use it. This is the most important field for agent performance. Include: the user intent that should trigger this action, prerequisites (e.g., “requires a valid boardId”), and how this action relates to others. Also add 2-3 example_prompts showing natural language requests that map to this action.

Step 3: Serve at /.well-known/agent.json

Host the file at https://yourdomain.com/.well-known/agent.json. Serve it with Content-Type: application/json and ensure it is publicly accessible (no authentication required to fetch the spec itself). Consider adding CORS headers so browser-based agents can fetch it.
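Any web framework can serve the file; the only requirements are the path, the Content-Type, and (optionally) CORS. A minimal standard-library sketch, with AGENT_JSON standing in for your real spec:

```python
# Sketch: a minimal WSGI app serving the spec at the well-known path
# with the Content-Type and CORS headers described above. AGENT_JSON
# is a placeholder for your real spec file.
import json

AGENT_JSON = json.dumps({"name": "ProjectBoard API", "actions": []})

def app(environ, start_response):
    if environ["PATH_INFO"] == "/.well-known/agent.json":
        body = AGENT_JSON.encode()
        start_response("200 OK", [
            ("Content-Type", "application/json"),
            ("Access-Control-Allow-Origin", "*"),   # for browser-based agents
            ("Cache-Control", "public, max-age=3600"),
        ])
        return [body]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]

# Run with any WSGI server, e.g. python -m wsgiref.simple_server.
```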

Step 4: Index for discovery

Register your agent.json with discovery platforms like the Elba Agent Registry so agents can find your API when searching for relevant capabilities. Discovery registries crawl and index agent.json files, making your API searchable by capability, category, and keyword.

Or skip the manual work entirely — import your OpenAPI spec into Elba and get a complete agent.json generated automatically, along with MCP configs and llms.txt. See our guide to the best API documentation for AI agents for a full comparison of tools.

agent.json and llms.txt

llms.txt is a complementary format that provides a plain-text, high-level overview of your product or API for consumption by large language models. While agent.json provides the detailed, structured data agents need to execute API calls, llms.txt provides the broader context an LLM needs to understand what your product is and how it fits into a user's workflow.

Think of llms.txt as the introduction and agent.json as the technical reference. An LLM might read your llms.txt to decide whether your API is relevant, then fetch your agent.json to get the detailed action definitions needed for actual tool use.

In 2026, publishing both files is becoming standard practice for agent-ready APIs. Together with MCP configs, they form a complete API SEO stack for AI agents — making your API discoverable, understandable, and usable by the growing ecosystem of autonomous AI agents.

Frequently asked questions

Is agent.json an official standard?

agent.json is an emerging community-driven specification for describing APIs to AI agents. While it is not yet governed by a formal standards body like the IETF or W3C, it is gaining rapid adoption as the de facto format for AI agent API discovery. The specification is open and evolving, with contributions from API providers, agent framework developers, and the broader AI tooling ecosystem.

Where should I host my agent.json file?

The standard location is /.well-known/agent.json on your domain. This follows the well-known URI convention (RFC 8615) used by other discovery files such as security.txt; robots.txt plays a similar role from the site root. Agents and discovery crawlers look for this path by default. You can also serve agent.json from a custom URL and reference it via a Link header or meta tag, but the well-known path ensures maximum discoverability.

Can agent.json work with existing APIs?

Yes. agent.json is a descriptive layer — it does not require changes to your actual API endpoints, authentication, or response formats. You can write an agent.json file that describes your existing REST, GraphQL, or RPC API without modifying a single line of backend code. The file simply provides a machine-readable map of what your API can do, which agents use to construct valid requests.

What's the difference between agent.json and a Swagger/OpenAPI spec?

OpenAPI (Swagger) is resource-oriented: it describes paths, HTTP methods, request bodies, and response schemas for human developers building integrations. agent.json is action-oriented: it describes what an agent can do, when it should do it, and how to reason about each action. agent.json includes reasoning documentation and example prompts that help AI agents select the right action for a task. The two formats are complementary — OpenAPI for SDK generation and developer tooling, agent.json for AI agent consumption.

Further reading

For a more in-depth exploration of agent.json with additional examples and use cases, read our blog post What is agent.json and Why It Matters for AI Agents. For the complete picture of how to make your API visible and usable by AI agents, see our comprehensive guide: Best API Documentation for AI Agents.

Related topics: What is MCP? · What is llms.txt? · How to Make Your API Usable by AI Agents

Generate your agent.json with Elba
Import your API and get a complete agent.json, MCP config, and llms.txt in minutes.
