How to make a website agent ready in 2026 (technical guide)

In brief

TL;DR: an agent ready website implements five technical layers (Link headers RFC 8288, llms.txt, Agent Skills Discovery, WebMCP, Content-Signal) that allow AI agents like Claude in Chrome or Comet to discover, read and execute actions on the site. Full implementation takes around 60 minutes via Cloudflare Transform Rules, a Cloudflare Worker, and Webflow's Custom Code Footer. It's currently a rare competitive edge in 2026, and will be a baseline requirement by 2027.

Yesterday morning, I ran an audit of mekaa.co on isitagentready.com. Score: 4 out of 12. Pretty embarrassing for an agency that sells GEO. One hour later, I had fixed everything. And along the way, I understood why 99% of websites are about to become completely invisible to AI agents over the next 18 months.

The problem isn't the content. It's that websites have been built for humans who click and read. Not for agents that execute actions. When Claude, ChatGPT Agent or Comet visit a page, they aren't looking for a menu. They're looking for exposed tools, declared skills, and HTTP headers that tell them where to go.

This article is the exact playbook of what I implemented on mekaa.co, with the code, terminal commands and pitfalls to avoid. If you're reading this in 2026 and haven't implemented anything yet, you still have a window to be ahead. By 2027, this will just be the bare minimum.

What agent ready actually means

An agent ready website isn't a site with great SEO content. It's a site that implements a precise technical stack so that an AI agent can:

  • Discover what the site offers without crawling it page by page
  • Read structured instructions on how to use it
  • Execute concrete actions through tools declared in JavaScript

Concretely, this translates into five stacked technical layers:

Layer | Standard | Role
Passive discovery (HTTP) | Link headers RFC 8288 | Announce available resources in HTTP headers
Passive discovery (files) | llms.txt, robots.txt, sitemap.xml | Provide text files agents know how to read
Agent skills (read) | Agent Skills Discovery RFC v0.2.0 | Publish a JSON index of skills exposed by the site
Agent tools (execute) | WebMCP API | Expose JavaScript functions the agent can run
AI content control | Content-Signal | Tell AI engines what they can do with your content

Key takeaway: an agent ready website combines five standards (Link headers RFC 8288, llms.txt, Agent Skills Discovery, WebMCP, Content-Signal) that let AI agents discover, read and execute actions on the site without human intervention.

Most sites already have a sitemap.xml and a robots.txt. That's the bare minimum. The real delta is climbing higher in the stack: Link headers, Agent Skills, WebMCP. That's where it gets interesting, and that's where almost nobody is positioned yet.

Step 1: add link headers RFC 8288

When an AI agent makes an HTTP request to your homepage, the first thing it inspects is the response headers. The Link header (defined by RFC 8288) lets you announce related resources without even returning HTML. It's the fastest discovery layer for a crawler.

Here's what it looks like once implemented:

</llms.txt>; rel="describedby"; type="text/plain"
</sitemap.xml>; rel="sitemap"; type="application/xml"
</robots.txt>; rel="describedby"; type="text/plain"
</.well-known/agent-skills/index.json>; rel="agent-skills-index"; type="application/json"
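
The first entry points at /llms.txt, which this article doesn't detail further. For reference, a minimal llms.txt under the llmstxt.org convention looks roughly like this (a sketch only; the section name and link label are placeholders to adapt to your own pages):

# Mekaa
> French digital agency specialized in Generative Engine Optimization (GEO).

## Key pages
- [GEO services](https://www.mekaa.co/): what the agency offers and how to request a GEO audit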

Implementation via cloudflare transform rules

If your site is behind Cloudflare (which I recommend for 100% of serious Webflow projects), you don't need to touch your hosting. Everything happens in Rules then Transform Rules then Modify Response Header.

Here's the exact procedure:

  1. Select your domain in the Cloudflare dashboard
  2. Go to Rules then Transform Rules then the Modify Response Header tab
  3. Click Create rule
  4. Name the rule Add Link headers for agent discovery
  5. As a filter, choose Custom filter expression and paste:
(http.host eq "www.mekaa.co" and http.request.uri.path eq "/") or (http.host eq "mekaa.co" and http.request.uri.path eq "/")
  6. As an action, select Set static, then for the Link header, paste this value:
</llms.txt>; rel="describedby"; type="text/plain", </sitemap.xml>; rel="sitemap"; type="application/xml", </robots.txt>; rel="describedby"; type="text/plain"
  7. Deploy. The rule is active immediately. If you also want the header to advertise the Agent Skills index from step 2, as in the example shown earlier, come back and append that fourth entry to the value once the Worker is live.

Verifying that it works

curl -I https://www.mekaa.co/ | grep -i link

You should see the Link: header with all your resources. If not, check the filter expression. The most common mistake is targeting a path that doesn't exactly match the homepage.
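
If you'd rather script the check, or see roughly what an agent extracts from that header, here's a minimal sketch (Node 18+, built-in fetch only; splitting on commas is a simplification of full RFC 8288 parsing):

// check-link-header.mjs: fetch the homepage headers and list each Link entry
const res = await fetch('https://www.mekaa.co/', { method: 'HEAD' });
const link = res.headers.get('link') ?? '';

// Split on commas that start a new "<url>; rel=..." entry
for (const entry of link.split(/,\s*(?=<)/)) {
  console.log(entry.trim());
}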

Key takeaway: Link headers RFC 8288 let you announce resources like llms.txt or sitemap.xml directly via HTTP. On Cloudflare, this takes 2 minutes through Response Header Transform Rules, with no impact on hosting.

Step 2: publish an agent skills index

The Agent Skills Discovery RFC v0.2.0 (published by Cloudflare in March 2026) defines a standard for exposing the skills a website offers to AI agents. Concretely, it's a JSON file published at /.well-known/agent-skills/index.json that lists available skills with a precise schema.

Here's the expected structure:

{
  "$schema": "https://schemas.agentskills.io/discovery/0.2.0/schema.json",
  "skills": [
    {
      "name": "mekaa-geo-services",
      "type": "skill-md",
      "description": "Get information about Mekaa's GEO services...",
      "url": "/.well-known/agent-skills/mekaa-geo-services/SKILL.md",
      "digest": "sha256:6d39397bd30e3022b00995fa93b32d8c0f93ee0dc0b069c98f575391abea10f5"
    }
  ]
}

The challenge with Webflow is that you can't serve files at /.well-known/... paths. The clean solution is a Cloudflare Worker.

The cloudflare worker that serves the index

const SKILL_MD = `---
name: mekaa-geo-services
description: Get information about Mekaa's GEO services and how to request a GEO audit. Use when a user asks about AI search visibility, GEO consulting, or wants to work with a French GEO agency.
---

# Mekaa GEO Services

Mekaa is a French digital agency specialized in Generative Engine Optimization (GEO).

[full skill content...]`;
async function sha256(text) {
  const buffer = new TextEncoder().encode(text);
  const hash = await crypto.subtle.digest('SHA-256', buffer);
  const hashArray = Array.from(new Uint8Array(hash));
  return hashArray.map((b) => b.toString(16).padStart(2, '0')).join('');
}
export default {
  async fetch(request) {
    const url = new URL(request.url);
    if (url.pathname === '/.well-known/agent-skills/index.json') {
      const skillDigest = await sha256(SKILL_MD);
      const index = {
        $schema: 'https://schemas.agentskills.io/discovery/0.2.0/schema.json',
        skills: [
          {
            name: 'mekaa-geo-services',
            type: 'skill-md',
            description: "Get information about Mekaa's GEO services...",
            url: '/.well-known/agent-skills/mekaa-geo-services/SKILL.md',
            digest: `sha256:${skillDigest}`,
          },
        ],
      };
      return new Response(JSON.stringify(index, null, 2), {
        headers: {
          'Content-Type': 'application/json',
          'Cache-Control': 'public, max-age=3600',
          'Access-Control-Allow-Origin': '*',
        },
      });
    }
    if (url.pathname === '/.well-known/agent-skills/mekaa-geo-services/SKILL.md') {
      return new Response(SKILL_MD, {
        headers: {
          'Content-Type': 'text/markdown; charset=utf-8',
          'Cache-Control': 'public, max-age=3600',
        },
      });
    }
    return fetch(request);
  },
};

The elegance of this approach is that the SHA-256 of the SKILL.md is computed automatically on every request. No need to maintain it manually, no risk of forgetting when you update the content.

The cloudflare routes trap

Here's a mistake I made that cost me 20 minutes: configuring the route with a subdomain wildcard. I had set *.mekaa.co/.well-known/agent-skills/* thinking it would match everything. It doesn't. The pattern *.mekaa.co only matches subdomains (blog.mekaa.co, app.mekaa.co), not the apex domain itself.

The correct setup is two separate routes:

  • mekaa.co/.well-known/agent-skills/*
  • www.mekaa.co/.well-known/agent-skills/*
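
If you deploy the Worker with Wrangler rather than through the dashboard, the same two routes can be declared in its config file. A minimal sketch (wrangler.jsonc; the worker name, entry point and compatibility date are placeholders):

{
  "name": "agent-skills",
  "main": "src/index.js",
  "compatibility_date": "2026-01-01",
  "routes": [
    { "pattern": "mekaa.co/.well-known/agent-skills/*", "zone_name": "mekaa.co" },
    { "pattern": "www.mekaa.co/.well-known/agent-skills/*", "zone_name": "mekaa.co" }
  ]
}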

Once the Worker is deployed and the routes are correctly configured, you can test:

curl https://www.mekaa.co/.well-known/agent-skills/index.json

The JSON should appear with a computed SHA-256 digest. If you see a Webflow HTML error page ("Invalid .well-known request"), the Worker isn't intercepting the request. Check your routes.
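
To go one step further and confirm that the published digest really matches the SKILL.md being served, here's a minimal verification sketch (Node 18+, built-in fetch plus Web Crypto; the domain is mine, swap in yours):

// verify-skill-digest.mjs: recompute the SHA-256 of SKILL.md and compare it to index.json
import { webcrypto } from 'node:crypto';

const base = 'https://www.mekaa.co';
const index = await (await fetch(base + '/.well-known/agent-skills/index.json')).json();
const skill = index.skills[0];

const md = await (await fetch(base + skill.url)).text();
const hash = await webcrypto.subtle.digest('SHA-256', new TextEncoder().encode(md));
const hex = Array.from(new Uint8Array(hash))
  .map((b) => b.toString(16).padStart(2, '0'))
  .join('');

console.log(skill.digest === 'sha256:' + hex ? 'digest OK' : 'digest mismatch');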

Key takeaway: an Agent Skills index is published at /.well-known/agent-skills/index.json with a precise JSON schema. On Webflow, the only clean way is a Cloudflare Worker that automatically computes the SHA-256 and serves both the JSON and the SKILL.md on the right routes.

Step 3: implement webmcp to expose executable tools

This is the most powerful and most exciting layer. WebMCP is a JavaScript API currently being standardized at the W3C that lets a website expose executable tools directly inside the user's browser. When an AI agent visits the page (via Claude in Chrome, Comet, or Chrome with experimental flags), it detects these tools and can run them in conversation with the user.

It's the exact same paradigm as the Model Context Protocol on the server side, but transposed into the browser. Hence the name: WebMCP.

A concrete example: triggering a geo audit from chatgpt

Here's one of the three tools I exposed on mekaa.co:

navigator.modelContext.registerTool({
  name: 'run_geo_audit',
  description:
    'Run an instant free GEO audit on any website. Returns visibility analysis across AI search engines like ChatGPT, Perplexity, Claude, and Gemini.',
  inputSchema: {
    type: 'object',
    properties: {
      website_url: {
        type: 'string',
        description: 'The full website URL to audit (must include https://)',
      },
    },
    required: ['website_url'],
  },
  execute: async function (input) {
    let url = input.website_url.trim();
    if (!url.startsWith('http://') && !url.startsWith('https://')) {
      url = 'https://' + url;
    }
    const auditUrl =
      '/services/audit/audit?url=' +
      encodeURIComponent(url) +
      '&utm_source=webmcp&utm_medium=agent';
    window.location.href = auditUrl;
    return { status: 'audit_started', message: 'GEO audit launched for ' + url };
  },
});

Concretely, here's the scenario this unlocks: a user opens Claude in Chrome, browses mekaa.co, and tells Claude "audit the GEO visibility of stripe.com". Claude detects the run_geo_audit tool, executes it with the provided URL, and the user lands on Mekaa's audit page with the results. Without clicking once on the site.

This is exactly what Cloudflare describes in its WebMCP blog post: turning a website into a client-side MCP server.

Installing it in webflow

It's almost trivial. In Project Settings then the Custom Code tab, in the Footer Code section, you paste your script between <script> tags, wrapped in an IIFE (a bare return at the top level of a script is a syntax error, so the check below has to live inside a function). The script must start with an availability check on the API:

if (!('modelContext' in navigator)) {
  return;
}

This line is critical. Right now, the API is only available in specific browsers (Chrome with flags, Comet, Claude in Chrome). On standard Chrome, navigator.modelContext doesn't exist. Without this check, your script throws an error on 99% of visitors.
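
For context, here's a minimal sketch of how the whole Footer Code block can be structured so that the early return is legal and the tools only register when the API exists (the tool definitions themselves are the ones shown above):

<script>
  (function () {
    // Bail out silently in browsers that don't expose WebMCP
    if (!('modelContext' in navigator)) {
      return;
    }

    // navigator.modelContext.registerTool({ ... }) calls go here,
    // starting with the run_geo_audit tool shown earlier
  })();
</script>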

Once the code is in place and published, verify it loaded properly:

curl -s https://www.mekaa.co/ | grep -o "registerTool" | wc -l

Should return the number of registered tools (3 in my case).

Key takeaway: WebMCP lets you expose JavaScript tools that AI agents can execute directly inside the browser. On Webflow, it installs in minutes via the Custom Code Footer, as long as you always check API availability before calling registerTool.

Step 4: configure content-signal strategically

Content-Signal is a Cloudflare proposal launched in September 2025. It lets you grant AI engines three granular permissions on your content:

  • search: allow or block traditional indexing (Google, Bing)
  • ai-input: allow or block real-time use in AI responses (ChatGPT Search, Perplexity, AI Overviews)
  • ai-train: allow or block use for training AI models

Many publishers panic and configure ai-train=no, ai-input=no to "protect their content". This is a massive strategic mistake when you sell GEO.

Why everything should be set to yes when doing geo

If you forbid ai-input, you voluntarily remove yourself from ChatGPT, Claude, and Perplexity responses. You're shooting yourself in the foot. For a GEO agency, it's the equivalent of an SEO consultant putting noindex on their homepage.

For ai-train, the debate is more nuanced. The argument for yes: your content enters the model's weights, and the brand becomes a knowledge baseline. When someone asks "which GEO agencies are good in France", your name can come up spontaneously without the model even running a web search. That's the holy grail of long-term GEO.

The argument for no: your content is used without compensation or attribution. But honestly, a no in Content-Signal is largely ignored by crawlers today, so it's more of a principled signal than real protection.

My recommendation for Mekaa and most B2B brands:

Content-Signal: ai-train=yes, search=yes, ai-input=yes
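
In practice, Content-Signal directives are published in robots.txt (that's where Cloudflare's proposal places them). A minimal sketch of the block, applied site-wide; on Webflow you can paste it into the robots.txt field in the site's SEO settings:

User-Agent: *
Content-Signal: ai-train=yes, search=yes, ai-input=yes
Allow: /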

Key takeaway: for a serious GEO strategy, setting everything to yes in Content-Signal is the only coherent choice. Blocking ai-input means voluntarily removing yourself from AI-generated responses, which is exactly the opposite of what you're trying to do.

The full stack in one hour: concrete recap

Here's what I implemented on mekaa.co, with the actual time spent on each step:

Step | Time spent
Link headers via Cloudflare Transform Rules | 10 min
Agent Skills Worker (index.json + SKILL.md) | 25 min
WebMCP tools in the Footer Custom Code | 15 min
Content-Signal | 5 min
Tests and verification | 5 min

One hour exactly. That's it. And yet mekaa.co is now technically more advanced than probably 99% of websites worldwide when it comes to agent-discovery compliance. If you find that hard to believe, run an isitagentready.com audit yourself on 10 digital agency websites and look at the scores.

What this implementation will actually change (and what it won't)

I'll be direct: in the short term, these implementations won't change a thing in your traffic or conversions. AI agents that actually exploit them still represent a negligible share of visitors in April 2026. If you're reading this hoping for an immediate lead boost, you'll be disappointed.

What changes is your positioning over the next 18 to 24 months. Here's why.

First, mainstream AI agents are coming fast. Claude in Chrome is already in beta. Comet (Perplexity) is in limited access. ChatGPT Agent exists. Google announced an agent integrated into Chrome for 2026. When these products hit GA and tens of millions of users start browsing through them, sites without WebMCP, Agent Skills, or Link headers will be invisible in this new channel.

Second, and more subtly, today's AI search engines (Perplexity, ChatGPT Search, Google AI Overviews) are already starting to read llms.txt and inspect agent-skills. It isn't publicly documented yet, but the early signals are emerging in the GEO community.

Finally, there's an argument few people see: these implementations are a technical quality signal for AI agents. A site that takes the time to publish a clean Agent Skills index signals it takes agents seriously. Models, over time, will favor these sites in their responses. It's the exact same logic as the original PageRank: Google favored sites with a clean link structure because it was a proxy for editorial quality.

Key takeaway: implementing the agent-ready stack today has almost no immediate business impact, but positions the site as a priority in future AI agent responses. It's an investment that will really pay off in 2027, not before. Those waiting for "ROI proof" will arrive two years late.

Mistakes I made (and you can avoid)

Three concrete pitfalls I ran into that nobody mentions in the official docs.

The Cloudflare route wildcard. As mentioned above, *.mekaa.co/... does not match mekaa.co. Always create two separate routes: one with www., one without.

The ai-train=no reflex. Many people (myself included at first) block training on principle. For a brand that sells GEO, that's suicidal. For a brand publishing unique proprietary data, it's debatable. For 95% of cases, yes is the right answer.

Forgetting the API check in WebMCP. If you don't check 'modelContext' in navigator before calling registerTool, your script throws an error on standard Chrome. Not catastrophic, but it pollutes the console and can break other scripts loaded after it.

Conclusion: the real cost of waiting

The classic trap with this kind of implementation is to tell yourself "I'll do it when there's real business impact". Except, by the time business impact becomes measurable, everyone will already have done it. The window of opportunity is precisely now, in April 2026, while almost nobody has done it seriously yet.

To borrow from the SEO playbook: those who implemented HTTPS in 2014 (before Google's announcement) gained free rankings. Those who did it in 2018 just avoided a penalty. The same exact cycle is playing out today with agent-discovery.

If you do GEO, SEO, or serious content, making your site agent ready isn't an option for 2027. It's a prerequisite. One hour today is better than an impossible catch-up in 18 months. You can follow the full process described in this article, or you can request a complete GEO audit on mekaa.co which includes an analysis of your agent-discovery compliance.

Frequently asked questions

What is an agent ready website?

An agent ready website is a website that implements five technical layers (Link headers RFC 8288, llms.txt, Agent Skills Discovery, WebMCP, Content-Signal) allowing AI agents to discover, read and execute actions on the site without human intervention. It's different from classic SEO, which targets humans.

How long does it take to make a Webflow site agent ready?

Around 60 minutes for a complete implementation on a Webflow site already connected to Cloudflare. The breakdown: 10 min for Link headers via Transform Rules, 25 min for the Agent Skills Worker, 15 min for the WebMCP code in the Footer Custom Code, 5 min for Content-Signal, 5 min for tests.

Does WebMCP work in all browsers?

No, WebMCP is still an experimental API. As of April 2026, it's available in Claude in Chrome (Anthropic extension), Comet (Perplexity browser), and Chrome Canary with experimental flags enabled. On standard Chrome, the API doesn't exist, which is why you must always check 'modelContext' in navigator before calling registerTool.

Should I block AI training via Content-Signal?

For most B2B brands, no. Blocking ai-train reduces the chance of your brand being cited spontaneously by models. For a serious GEO strategy, allowing everything (ai-train=yes, search=yes, ai-input=yes) is the only coherent setup. The only case where blocking makes sense is for unique proprietary content meant to be monetized.

How can I check if my site is agent ready?

Run an audit on isitagentready.com with your homepage URL. The tool analyzes the presence of Link headers, the Agent Skills index, WebMCP tools, llms.txt, and gives you a score out of 12. You can also verify manually with curl: curl -I https://yoursite.com for headers, and curl https://yoursite.com/.well-known/agent-skills/index.json for the index.
