How I Migrated Hundreds of Pages Without Losing My Mind
Using the Research-Plan-Implement pattern to move Kilo’s docs from Docusaurus to Markdoc
I spent three days writing redirect rules. 810 of them. It was the most boring part of migrating Kilo’s documentation from Docusaurus to Next.js with Markdoc, and I’m convinced it’s why the migration actually worked.
The whole thing took about two weeks. Hundreds of pages, a complete reorganization of our information architecture, and remarkably few 404s from external links given the scale of what we moved. I knew where I stood at every point in the process.
I credit the Research-Plan-Implement pattern we’ve been talking about at Kilo for AI-assisted work. It turns out the same framework that helps coding agents tackle complex tasks also works pretty well for humans doing infrastructure migrations.
Why We Moved
Docusaurus is fine. But we’d already built our marketing site and blog on Next.js, and maintaining two separate React frameworks for related content felt increasingly silly. Every design change required parallel work. Every new component got built twice.
Markdoc gave us what we needed — MDX-like authoring with less magic. You write markdown, you get pages. The templating happens through explicit tags instead of implicit React component resolution.
There was another motivation too: I wanted an LLM-friendly docs site. Markdown-only versions of pages, a “copy to markdown” button, structured endpoints that AI assistants could consume directly. Docusaurus didn’t have a natural path to any of that. Moving to our own Next.js stack meant we could build those features properly — more on that later.
The technical decision wasn’t complicated, but the execution plan needed to be airtight.
The Research Phase
Before touching any code, I mapped everything that existed. I cataloged every page, what it covered, and how it fit together.
This produced a document called mappingplan.md with a table showing:
Every current page URL
What content it contained
Where it should live in the new structure
What was missing or needed to be consolidated
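A row in that table looked roughly like this — the entries here are illustrative, not the real file:

```markdown
| Old URL                     | Content           | New location                 | Notes                                |
| --------------------------- | ----------------- | ---------------------------- | ------------------------------------ |
| /docs/features/custom-modes | Custom mode setup | /docs/customize/custom-modes | Merge with /docs/configuration/modes |
```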
I found problems immediately. We had /docs/features/custom-modes and /docs/configuration/modes covering overlapping content. Some “getting started” material lived under /docs/basics/ while related stuff was under /docs/getting-started/. The original structure had accumulated cruft from 18 months of different people adding pages wherever it seemed convenient at the time.
The mapping also revealed gaps. We had detailed API reference pages but nothing explaining the mental model behind our MCP server integration. Users could look up individual settings but had no guide for thinking about configuration holistically.
The Plan Phase
With the inventory complete, I designed the new structure:
Get Started — installation, first task, basic concepts
Code with AI — the actual workflows: chat, applying edits, context management
Collaborate — multi-agent setups, sharing configurations
Automate — MCP servers, custom commands, scripting
Deploy & Secure — enterprise stuff, security model
Contributing — for people working on Kilo itself
Each section got a nav file (like automate.ts, code-with-ai.ts) defining its structure. This let different team members review their areas without wading through one massive sidebar config.
The plan also listed specific pages to create, pages to consolidate, and pages to remove entirely. Before writing any new content or moving any files, I knew exactly what the end state should look like.
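The per-section nav files were plain TypeScript. A minimal sketch of what a file like automate.ts might contain — the schema and page slugs here are illustrative, not our actual config:

```typescript
// automate.ts — sidebar structure for the Automate section.
// NavEntry and the slugs below are illustrative stand-ins.
interface NavEntry {
  title: string;
  href?: string; // leaf page
  children?: NavEntry[]; // nested group
}

export const automateNav: NavEntry[] = [
  { title: "MCP Servers", href: "/docs/automate/mcp-servers" },
  { title: "Custom Commands", href: "/docs/automate/custom-commands" },
  { title: "Scripting", href: "/docs/automate/scripting" },
];
```

Small typed files like this are easy to review in isolation, which is exactly what made the per-section review possible.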
The Redirect Strategy
Every old URL needed to map to a new one. All 810+ of them.
The result was previous-docs-redirects.js — 810+ redirect rules:
```js
{
  source: "/docs/features/custom-modes",
  destination: "/docs/customize/custom-modes",
  permanent: true,
},
{
  source: "/docs/providers/:path*",
  destination: "/docs/ai-providers/:path*",
  permanent: true,
},
{
  source: "/docs/getting-started/your-first-task",
  destination: "/docs/getting-started/quickstart",
  permanent: true,
},
```

People have bookmarked our docs. Other sites link to them. Answers all over the internet reference specific pages. If /docs/features/custom-modes suddenly 404s, that’s a broken experience for everyone who relied on that URL.
Permanent redirects (301s) also tell search engines “this content moved here permanently” so you don’t lose page authority.
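Next.js applies rules like these through an async redirects() hook in next.config. A self-contained sketch with two rules inlined — in the real setup the full array lives in previous-docs-redirects.js and gets imported:

```typescript
// Sketch of next.config.ts. Two rules are inlined here so the sketch
// stands alone; the real file imports the full 810+ rule array.
const previousDocsRedirects = [
  {
    source: "/docs/features/custom-modes",
    destination: "/docs/customize/custom-modes",
    permanent: true, // issued as a 301: "this content moved permanently"
  },
  {
    source: "/docs/providers/:path*", // :path* forwards the whole subtree
    destination: "/docs/ai-providers/:path*",
    permanent: true,
  },
];

const nextConfig = {
  async redirects() {
    return previousDocsRedirects;
  },
};

export default nextConfig;
```

The `:path*` wildcard rules do a lot of the heavy lifting: one rule can cover an entire directory that moved wholesale.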
Building this list wasn’t glamorous work. I wrote a script to extract all old URLs, then went through them one by one mapping to new destinations. Some were obvious. Some required tracing through the new structure to figure out where that content ended up after consolidation.
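The extraction script itself isn’t shown here, but the useful check at the end of this kind of work is a coverage pass: does every old URL hit some redirect rule? A sketch, with hypothetical data standing in for the real lists:

```typescript
// Sketch: verify every old URL is covered by a redirect rule.
// `rules` and `oldUrls` are illustrative stand-ins for the real data.
const rules = [
  { source: "/docs/features/custom-modes", destination: "/docs/customize/custom-modes" },
  { source: "/docs/providers/:path*", destination: "/docs/ai-providers/:path*" },
];

const oldUrls = [
  "/docs/features/custom-modes",
  "/docs/providers/openai",
  "/docs/basics/intro",
];

// Simplification: treat a trailing `/:path*` as a wildcard prefix;
// everything else must match exactly. (Next.js matching is richer.)
function isCovered(url: string, source: string): boolean {
  if (source.endsWith("/:path*")) {
    return url.startsWith(source.slice(0, -"/:path*".length) + "/");
  }
  return url === source;
}

const unmapped = oldUrls.filter(
  (url) => !rules.some((rule) => isCovered(url, rule.source))
);
console.log(unmapped); // URLs still needing a redirect rule
```

Anything left in `unmapped` is a future 404 waiting to happen.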
Adding LLM-Friendly Features
While I had the docs infrastructure open, I added something I’d been wanting: proper LLM support.
Two things:
First, a /llms.txt endpoint that generates a structured index of all documentation pages. It lists every page title, path, and a link to fetch the raw markdown. This lets AI coding assistants understand what documentation exists and where to find specific topics.
Second, an /api/raw-markdown?path=... endpoint that serves clean markdown without any HTML chrome. When an LLM needs to read our docs on tool use, it can fetch the markdown directly instead of parsing rendered HTML or getting confused by navigation elements.
Together, the two endpoints make Kilo’s docs machine-readable end to end: an assistant discovers what exists via /llms.txt, then pulls each page as clean markdown.
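The raw-markdown handler can be sketched the same way — here the content lookup is a hardcoded map so the example is self-contained, where the real site reads the Markdoc source from disk:

```typescript
// Sketch of the /api/raw-markdown route handler. The map below is an
// illustrative stand-in for reading Markdoc source files from disk.
const markdownByPath = new Map<string, string>([
  ["/docs/getting-started/quickstart", "# Quickstart\n\nPlaceholder page body."],
]);

export function GET(request: Request): Response {
  const path = new URL(request.url).searchParams.get("path") ?? "";
  const markdown = markdownByPath.get(path);
  if (!markdown) {
    return new Response("Not found", { status: 404 });
  }
  // Plain markdown, no navigation or layout — easy for an LLM to consume.
  return new Response(markdown, {
    headers: { "Content-Type": "text/markdown; charset=utf-8" },
  });
}
```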
Validation
The final piece: lychee, a link checker that runs in CI.
Every PR that touches docs gets checked for broken links. Internal links, external links, everything. If I fat-fingered a redirect or forgot to update a cross-reference, the build fails.
This caught several mistakes before they shipped. Links to deprecated provider documentation. Internal references to pages that got consolidated under different names. A typo in one of the 810 redirect rules.
Having automated validation meant we didn’t have to guess whether we’d missed something.
What the Pattern Actually Did
The Research-Plan-Implement pattern prevented two failure modes I’ve seen kill migrations before:
Without the research phase, I would have started moving pages and discovered the architectural mess halfway through. Then I’d be simultaneously migrating infrastructure, redesigning information architecture, and writing new content. Each decision would cascade into revisiting previous decisions.
By doing research first, all the “oh no, this is messier than I thought” happened before I wrote any migration code. The plan accounted for the actual complexity, not the complexity I imagined from outside.
Migrations are also boring. After the third day of writing redirect rules, the temptation is to declare victory and ship. “We got the important pages, the rest will be fine.”
But I had the mapping document. It showed exactly what remained. No ambiguity about whether we were done, no rationalization about which pages were “important enough” to migrate properly. The checklist existed, the checklist got completed.
The Pattern for Your Work
If you’re facing a similar migration — docs framework, API versioning, database schema, whatever — the pattern is straightforward:
Research first. Before touching code, catalog what exists. Make a complete inventory. Find where the mess is hiding.
Plan before implementing. Design the end state, write it down, get it reviewed. You need to know what you’re building toward before you start.
Implement systematically. When you discover things the plan missed, update the plan first. Then continue.
Automate validation. Link checkers, schema validators, test suites — whatever proves the migration actually worked. “I think it’s done” doesn’t count.
It’s less exciting than diving in and improvising, but two weeks later we had a working docs site: no broken links, old URLs redirecting properly, and a structure that makes more sense than what we had before.
The Research-Plan-Implement pattern is documented in detail on path.kilo.ai if you want to apply it to your own projects.