Resurrecting my old blog: A Leaner Next.js Stack on Cloudflare
So, here's a fun scenario: you realize you haven't posted on your blog in a while (a few years). You decide to write something new, only to find out you've completely lost access to the old server it was hosted on. Sigh.
Instead of trying to dig up old credentials or revive a legacy stack, I took it as a sign. It was time for a rewrite. The goal was simple: I wanted a fast, modern stack, no servers to manage, and I wanted my content to live in plain Markdown files.
Here is how I rebuilt this blog using Next.js 16, Cloudflare Workers, D1 for dynamic comments, and a fresh new tool called Vinext.
Rescuing Content & Preserving URLs
The very first hurdle of this migration was getting the old content into a reusable format. Since I didn't have access to the old database to run an export, I had to scrape my own site while it was still limping along online.
I wrote a small Node.js crawler that used axios to fetch my old sitemap XML. For every URL found, the script used cheerio to grab the post title and the HTML content block. After that, I ran the HTML through turndown, a library that converts HTML into beautiful Markdown (or at least Markdown).
// Get the HTML content
const response = await axios.get(url);
const $ = cheerio.load(response.data);
const contentHtml = $('section.post-content').html();

// Find images, rewrite their paths to a local folder, and queue for download
const $content = cheerio.load(contentHtml);
$content('img').each((i, img) => {
  const src = $content(img).attr('src');
  if (src) {
    const absoluteSrc = src.startsWith('http') ? src : `http://fredrik.anderzon.se${src}`;
    const imgName = path.basename(new URL(absoluteSrc).pathname);

    // Queue image for download (absoluteSrc -> localImgPath)
    imagesToDownload.push({ absoluteSrc, localImgPath: path.join(IMAGES_DIR, imgName) });

    // Rewrite the HTML src attribute to point to the local folder.
    // Important: Use an absolute path (/images/) so Next.js serves it from the public
    // directory regardless of how deep the post's URL structure is.
    $content(img).attr('src', `/images/${imgName}`);
  }
});

// Convert the *updated* HTML to Markdown
const markdown = turndownService.turndown($content.html());
// Save with frontmatter
const fileContent = `---
title: "${postTitle}"
path: "${new URL(url).pathname}"
---
${markdown}`;
After the markdown file was saved, the script iterated through the imagesToDownload array and streamed the files to the local disk using axios.
First, I organized these newly rescued Markdown files by prefixing them with their original publication date (e.g., 2016-05-10-rust-for-node-developers.md). This keeps the repository clean and chronological.
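Since the date now lives in the filename, recovering it later is a one-liner. Here's a small helper for that (hypothetical, not part of the crawler; the filename shape is the one described above):

```typescript
// Hypothetical helper: split a date-prefixed filename like
// "2016-05-10-rust-for-node-developers.md" into its date and slug parts
export function parsePostFilename(filename: string): { date: string; slug: string } | null {
  const match = filename.match(/^(\d{4}-\d{2}-\d{2})-(.+)\.md$/);
  if (!match) return null;
  return { date: match[1], slug: match[2] };
}
```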
But how do you load and parse these files in a serverless Cloudflare environment, which lacks a traditional file system (fs)?
I leveraged Vite's import.meta.glob to bundle the raw Markdown strings into the build, and then used gray-matter and remark to parse them into usable HTML:
// lib/posts.ts
import matter from 'gray-matter';
import { remark } from 'remark';
import html from 'remark-html';

// Bundle all markdown files at build time
const posts = import.meta.glob('/content/posts/**/*.md', {
  query: '?raw',
  eager: true,
}) as Record<string, { default: string }>;

export async function getPostData(slugArray: string[]) {
  // Find the correct raw markdown string by comparing the requested URL
  // against the `path` the crawler saved in each file's frontmatter
  // (the old URLs ended with a trailing slash, so the request is normalized to match)
  const requestedPath = `/${slugArray.join('/')}/`;
  const targetFileKey = Object.keys(posts).find(
    (key) => matter(posts[key].default).data.path === requestedPath
  );
  if (!targetFileKey) return null;
  const fileContents = posts[targetFileKey].default;

  // Parse frontmatter
  const matterResult = matter(fileContents);

  // Convert markdown body to HTML
  const processedContent = await remark()
    .use(html)
    .process(matterResult.content);

  return {
    contentHtml: processedContent.toString(),
    ...matterResult.data,
  };
}
This gives me the raw HTML to render on the page. But how do you map these files to an old URL structure like /2016/05/10/rust-for-node-developers/ to avoid breaking old links? Breaking old URLs is a cardinal sin of web development.
Notice that the crawler script automatically saved the original URL path as a path property in the frontmatter. Next.js's dynamic catch-all route (app/[...slug]/page.tsx) grabs the incoming URL request and compares it against the path defined in the Markdown files. If it finds a match, it serves the parsed contentHtml. Old bookmarks and Google search results keep working flawlessly, entirely decoupled from the actual folder structure.
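That comparison has one sharp edge: trailing slashes. The frontmatter stores the old pathname verbatim, while Next.js hands the catch-all route a slug array with no slashes at all. A tiny helper (hypothetical; the names are mine) makes the match forgiving either way:

```typescript
// Hypothetical helper: compare a catch-all slug array against a frontmatter
// `path`, ignoring leading and trailing slashes on either side
export function matchesFrontmatterPath(slugSegments: string[], frontmatterPath: string): boolean {
  const normalize = (p: string) => p.replace(/^\/+|\/+$/g, '');
  return normalize(slugSegments.join('/')) === normalize(frontmatterPath);
}
```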
The Stack & the Beauty of Vinext
With the content sorted, I needed a framework. I went with the Next.js 16 App Router because React Server Components (RSC) are pretty great when you don't want to send a megabyte of JavaScript to the client just to render text.
If you've ever tried to deploy the Next.js App Router to Cloudflare Workers, you know it's traditionally been painful. Vercel optimizes for their own platform, and the build output relies heavily on Node.js APIs that don't exist in Cloudflare's workerd runtime.
This is where Vinext comes in. Vinext is an experimental framework that replaces Next.js's default bundler (Webpack/Turbopack) with Vite. By running the RSC environment inside Vite, Vinext strips out the heavy Node.js dependencies and polyfills that Vercel usually injects. It compiles the App Router specifically for Cloudflare's runtime.
Getting started was incredibly simple. Vinext automates the migration of a standard Next.js project with a single command:
npx vinext init
This command runs a compatibility check, installs Vite and the necessary App Router-only plugins, and renames your CommonJS config files to avoid ESM conflicts. It also generates a minimal vite.config.ts. The best part? It's completely non-destructive. Your existing Next.js setup continues to work alongside Vinext.
Infrastructure as Code
Getting Vinext to deploy the static files was a breeze. But getting it to talk to my Cloudflare D1 database for the comment section required a way to securely define database bindings.
Vinext makes this easy too. When you run your first deployment (vinext deploy), it auto-generates a baseline wrangler.jsonc with the correct main entry point and asset bindings. Once that file exists, you can append your custom domains and D1 database bindings directly into it:
// wrangler.jsonc
{
  "$schema": "node_modules/wrangler/config-schema.json",
  "name": "fredrik-anderzon-se",
  "main": "./worker/index.ts",
  "routes": [
    { "pattern": "fredrik.anderzon.se", "custom_domain": true }
  ],
  "d1_databases": [
    {
      "binding": "DB",
      "database_name": "fredrik-blog-db",
      "database_id": "xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
    }
  ]
}
Now, the entire infrastructure lives in version control, keeping the local development environment perfectly in sync with production.
To actually create the database and apply the schema, you use the Wrangler CLI. Because the database bindings are already defined in wrangler.jsonc, vinext dev will pick up the local SQLite file automatically once initialized:
# Create the D1 database
npx wrangler d1 create fredrik-blog-db
# Execute the schema locally for testing
npx wrangler d1 execute fredrik-blog-db --local --file=db/schema.sql
# Execute the schema on the live Cloudflare edge
npx wrangler d1 execute fredrik-blog-db --remote --file=db/schema.sql
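I haven't shown db/schema.sql itself; here's a minimal sketch of what it could contain. Only post_slug and content are confirmed by the Server Action that writes them; the id and created_at columns are assumptions:

```sql
-- db/schema.sql (sketch; id and created_at are assumed columns)
CREATE TABLE IF NOT EXISTS comments (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  post_slug TEXT NOT NULL,
  content TEXT NOT NULL,
  created_at TEXT NOT NULL DEFAULT (datetime('now'))
);

-- Comments are always fetched per post, so index the lookup column
CREATE INDEX IF NOT EXISTS idx_comments_post_slug ON comments (post_slug);
```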
Connecting the Server Actions
With the database properly bound in code, Next.js Server Actions handle the form submissions. But how do you access a Cloudflare binding inside a Next.js Server Action?
Forget process.env. In the modern @cloudflare/vite-plugin setup, you use the native cloudflare:workers module.
"use server";
import { revalidatePath } from "next/cache";
// Externalized virtual module in Vinext/Cloudflare
import { env } from "cloudflare:workers";
export async function addComment(formData: FormData) {
const postSlug = formData.get('postSlug') as string;
const content = formData.get('content') as string;
// @ts-ignore - TS might complain depending on your types setup
const db = env.DB;
await db
.prepare('INSERT INTO comments (post_slug, content) VALUES (?, ?)')
.bind(postSlug, content)
.run();
// Refresh the page data automatically!
revalidatePath(`/${postSlug}`);
return { success: true };
}
Because the whole site is built on RSC, adding a comment instantly updates the UI without writing a single line of client-side state management.
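Reading the comments back follows the same pattern in reverse. A hedged sketch of a helper the post page could call; the column names mirror the INSERT above, while created_at and the ordering are assumptions:

```typescript
// Minimal shape of the D1 methods this helper touches (the real binding is env.DB)
type D1Like = {
  prepare(sql: string): {
    bind(...values: unknown[]): { all(): Promise<{ results: Record<string, unknown>[] }> };
  };
};

// Hypothetical helper: fetch all comments for a post, newest first
export async function getComments(db: D1Like, postSlug: string) {
  const { results } = await db
    .prepare('SELECT id, content, created_at FROM comments WHERE post_slug = ? ORDER BY created_at DESC')
    .bind(postSlug)
    .all();
  return results;
}
```

A Server Component can await this directly and render the rows, so the comment list stays entirely server-side.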
Enabling the Old RSS Feed Again
A blog isn't much use if nobody can find it. To make sure search engines and RSS readers don't miss anything, I added dynamic route handlers for the sitemap and RSS feed. Because the Markdown files are bundled at build time using import.meta.glob, I can easily iterate over them and generate an rss.xml and sitemap.xml on the fly:
// app/rss.xml/route.ts
import { getAllPostData } from '@/lib/posts';

export async function GET() {
  const posts = await getAllPostData();
  const baseUrl = 'http://fredrik.anderzon.se';

  const rssItemsXml = posts
    .map(
      (post) => `
    <item>
      <title><![CDATA[${post.title}]]></title>
      <link>${baseUrl}${post.originalPath || `/${post.slug}`}</link>
      <guid>${baseUrl}${post.originalPath || `/${post.slug}`}</guid>
      <pubDate>${new Date(post.date).toUTCString()}</pubDate>
      ${post.excerpt ? `<description><![CDATA[${post.excerpt}]]></description>` : ''}
    </item>`
    )
    .join('');

  const rssFeed = `<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Fredrik Andersson</title>
    <link>${baseUrl}</link>
    <description>Musings • Development • Code</description>
    <atom:link href="${baseUrl}/rss.xml" rel="self" type="application/rss+xml" />
    <language>en</language>
    ${rssItemsXml}
  </channel>
</rss>`;

  return new Response(rssFeed, {
    headers: {
      'Content-Type': 'text/xml',
      'Cache-Control': 'public, max-age=3600, s-maxage=86400',
    },
  });
}
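The sitemap handler follows the same recipe, so the interesting part reduces to a pure XML-building function. A sketch, reusing the field names from the RSS handler above (originalPath, slug, date); a route at app/sitemap.xml/route.ts could return this string as its Response body:

```typescript
// Sketch of the sitemap counterpart to the RSS feed builder
type SitemapPost = { slug: string; date: string; originalPath?: string };

export function buildSitemapXml(posts: SitemapPost[], baseUrl: string): string {
  const urls = posts
    .map(
      (post) => `
  <url>
    <loc>${baseUrl}${post.originalPath || `/${post.slug}`}</loc>
    <lastmod>${new Date(post.date).toISOString()}</lastmod>
  </url>`
    )
    .join('');

  return `<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">${urls}
</urlset>`;
}
```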
This guarantees that whether it's Googlebot crawling the site or an old-school RSS reader fetching updates, they always get the latest, fully-rendered HTML without needing to execute a single line of client-side JavaScript.
Conclusion
Losing access to my old server was frustrating for about five minutes, but it forced me to modernize. The combination of Next.js 16's server features with Cloudflare's edge performance is incredibly fast. Vinext is still experimental, but it proves that you don't have to be locked into a single hosting provider just because you chose a specific framework. It feels like a great fit for me, and I look forward to seeing what the future holds for Vinext and Next.js on Cloudflare.
Now, I just need to remember to actually write posts more often.