
The Shift to Edge Computing in Modern Web Apps

Why edge computing is becoming the default for high-performance web applications, and how to use it effectively with Next.js, Cloudflare Workers, and Vercel Edge Functions.

Codolve Team · 8 min read

The geography of the internet is finally mattering the way it should. For decades, web servers ran in one or two data centres, and every user, regardless of where they were in the world, waited for their request to make a round trip to that data centre and back. Edge computing runs your code globally, at dozens of locations, eliminating most of that latency. Here's how it works and how to use it.

The Latency Problem Edge Solves

A user in Mumbai hitting a server in Virginia experiences, at minimum, 150–200ms of network latency before your code even starts executing. Add DNS resolution, TLS handshake, and server processing time, and a "fast" page might take 500ms to deliver the first byte.

Edge computing puts your code in 100+ geographic locations. The same Mumbai user hits a Mumbai edge node. The raw network round trip drops to 10–20ms. That's not an incremental improvement; it's an order-of-magnitude change in the user's experience.

Time to First Byte (TTFB) is a Core Web Vitals contributor and a direct ranking signal. The difference between 400ms and 40ms TTFB is visible to users and measurable in SEO performance.
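
To make the arithmetic concrete, here is a back-of-envelope TTFB model. The phase costs and RTT figures are illustrative assumptions, not measurements:

```typescript
// Rough TTFB model: an uncached HTTPS request pays roughly one network
// round trip each for DNS, the TCP handshake, the TLS 1.3 handshake,
// and the request itself. All numbers here are illustrative assumptions.
function estimateTtfbMs(rttMs: number, serverMs: number): number {
  const dns = rttMs; // DNS lookup (uncached resolver)
  const tcp = rttMs; // TCP handshake
  const tls = rttMs; // TLS 1.3 handshake (1-RTT)
  const req = rttMs; // request/response round trip
  return dns + tcp + tls + req + serverMs;
}

const fromVirginia = estimateTtfbMs(180, 50); // Mumbai user, Virginia origin
const fromEdge = estimateTtfbMs(15, 50);      // Mumbai user, Mumbai edge node
```

With identical server processing time, the origin request is dominated by the network while the edge request is dominated by actual work, which is where the order-of-magnitude TTFB difference comes from.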

How Edge Runtimes Differ from Traditional Servers

Edge runtimes are not full Node.js. They're stripped-down JavaScript runtimes optimised for speed and global distribution:

| Feature | Node.js (Lambda/Server) | Edge Runtime |
| --- | --- | --- |
| Cold start | 100–500ms | < 1ms |
| Memory | Up to 10GB | 128MB typical |
| Execution time | Up to 15 min | 50ms–30s (varies) |
| Node.js built-ins | Full access | None (Web APIs only) |
| File system | Available | Not available |
| Binary modules | Available | Not available |
| Geographic distribution | 1–3 regions | 100+ locations |

The constraints are real: no fs, no native modules, limited CPU time. But for the right workloads, the performance trade-off is overwhelmingly positive.
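
In practice, "Web APIs only" means a handler works entirely with the WHATWG fetch primitives: Request, Response, Headers, URL. A minimal sketch (the route and query parameter are invented for illustration):

```typescript
// An edge-style handler: nothing here touches fs, Buffer, or node:crypto,
// only Web-standard objects, which is why it can run in any edge runtime.
async function handler(request: Request): Promise<Response> {
  const url = new URL(request.url);
  const name = url.searchParams.get("name") ?? "world";
  return Response.json(
    { greeting: `hello, ${name}` },
    { headers: { "cache-control": "public, s-maxage=60" } }
  );
}
```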

Using Edge Functions in Next.js

Route-Level Edge Runtime

Opt individual API routes into the edge runtime:

// app/api/recommendations/route.ts
export const runtime = "edge";

export async function GET(request: Request) {
  const { searchParams } = new URL(request.url);
  const userId = searchParams.get("userId");

  // Use edge-compatible database clients (Neon serverless, Upstash, etc.)
  const recommendations = await getRecommendations(userId);

  return Response.json(recommendations, {
    headers: {
      "Cache-Control": "public, s-maxage=60, stale-while-revalidate=120",
    },
  });
}

Middleware: The Most Powerful Edge Use Case

middleware.ts runs before any request reaches your Next.js server, and on platforms like Vercel it executes at the edge. This is where edge computing delivers the most value:

// middleware.ts
import { NextResponse } from "next/server";
import type { NextRequest } from "next/server";

export async function middleware(request: NextRequest) {
  const { pathname, origin } = request.nextUrl;
  const country = request.geo?.country ?? "US";
  const city = request.geo?.city;

  // 1. Geo-based routing
  if (pathname === "/" && country === "GB") {
    return NextResponse.redirect(new URL("/uk", origin));
  }

  // 2. A/B testing: assign variant at the edge, no origin required
  const variant = request.cookies.get("ab-variant")?.value;
  if (!variant && pathname === "/pricing") {
    const assignedVariant = Math.random() > 0.5 ? "a" : "b";
    const response = NextResponse.next();
    response.cookies.set("ab-variant", assignedVariant, {
      maxAge: 60 * 60 * 24 * 30,
      httpOnly: true,
    });
    return response;
  }

  // 3. Rate limiting at the edge (block bots before they hit your server).
  // checkRateLimit is your own helper backed by an edge-reachable store
  // (e.g. Upstash Redis); it is not a Next.js built-in.
  const ip = request.headers.get("x-forwarded-for")?.split(",")[0].trim() ?? "unknown";
  const rateLimitResult = await checkRateLimit(ip);
  if (!rateLimitResult.allowed) {
    return new Response("Too many requests", { status: 429 });
  }

  return NextResponse.next();
}

export const config = {
  matcher: ["/((?!_next/static|_next/image|favicon.ico).*)"],
};

The routing and cookie logic here executes at the edge in about a millisecond, before any origin request; only the rate-limit check adds a round trip to its backing store.
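
One refinement to the A/B block above: Math.random works, but hashing a stable visitor ID makes assignment deterministic, so a lost cookie never flips a user's variant. A sketch using FNV-1a (the hash choice is an assumption; any stable hash works):

```typescript
// Deterministic A/B bucketing via the FNV-1a hash: the same visitor ID
// always lands in the same bucket, with no stored state required.
function bucket(visitorId: string, buckets: number = 2): number {
  let h = 0x811c9dc5; // FNV offset basis
  for (let i = 0; i < visitorId.length; i++) {
    h ^= visitorId.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0; // FNV prime, kept in uint32 range
  }
  return h % buckets;
}
```

In the middleware, `bucket(visitorId) === 0 ? "a" : "b"` would replace the Math.random line, with the cookie kept purely as a cache of the assignment.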

Real-World Edge Use Cases

Authentication at the Edge

Validating JWTs at the edge blocks unauthorised requests before they consume any server resources:

// middleware.ts
import { NextResponse } from "next/server";
import type { NextRequest } from "next/server";
import { jwtVerify } from "jose";

const JWT_SECRET = new TextEncoder().encode(process.env.JWT_SECRET!);

export async function middleware(request: NextRequest) {
  const protectedPaths = ["/dashboard", "/api/user", "/api/orders"];
  const isProtected = protectedPaths.some((path) =>
    request.nextUrl.pathname.startsWith(path)
  );

  if (!isProtected) return NextResponse.next();

  const token = request.cookies.get("session")?.value;

  if (!token) {
    return NextResponse.redirect(new URL("/login", request.url));
  }

  try {
    const { payload } = await jwtVerify(token, JWT_SECRET);
    // Attach user info to headers for the origin server
    const response = NextResponse.next();
    response.headers.set("x-user-id", payload.sub as string);
    response.headers.set("x-user-role", payload.role as string);
    return response;
  } catch {
    return NextResponse.redirect(new URL("/login", request.url));
  }
}

The jose library is edge-compatible because it uses the Web Crypto API. The jsonwebtoken library is not: it depends on Node.js crypto. This distinction matters when choosing libraries for edge code.

Dynamic OG Images at the Edge

Generating personalised Open Graph images at the edge is a popular pattern. Vercel's @vercel/og library handles this efficiently:

// app/api/og/route.tsx
import { ImageResponse } from "@vercel/og";

export const runtime = "edge";

export async function GET(request: Request) {
  const { searchParams } = new URL(request.url);
  const title = searchParams.get("title") ?? "Default Title";
  const author = searchParams.get("author") ?? "Codolve";

  return new ImageResponse(
    (
      <div
        style={{
          display: "flex",
          flexDirection: "column",
          background: "#0f172a",
          width: "100%",
          height: "100%",
          padding: "60px",
        }}
      >
        <h1 style={{ color: "white", fontSize: "60px", fontWeight: 800 }}>
          {title}
        </h1>
        <p style={{ color: "#94a3b8", fontSize: "28px" }}>By {author}</p>
      </div>
    ),
    { width: 1200, height: 630 }
  );
}

Each blog post gets a unique, dynamically generated OG image, rendered at the edge in under 50ms.

Personalisation Without Cache Fragmentation

One of the hardest problems in web performance: you want to cache aggressively, but personalised content can't be cached. Edge solves this by separating personalisation logic from content:

// middleware.ts: inject personalisation context without bypassing the cache
export function middleware(request: NextRequest) {
  const userId = request.cookies.get("userId")?.value;
  const tier = request.cookies.get("tier")?.value ?? "free";

  // Don't rewrite the URL, let the cached response serve
  // Attach context that server components can read
  const response = NextResponse.next({
    request: {
      headers: new Headers({
        ...Object.fromEntries(request.headers),
        "x-user-id": userId ?? "",
        "x-user-tier": tier,
      }),
    },
  });

  return response;
}

The page is served from cache (fast). The server component reads the headers to inject user-specific data into the response (personalised). No cache fragmentation.
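
On the server side, the injected headers can be turned back into a typed context. A hedged sketch: the header names match the middleware above, but the UserContext shape is invented for illustration:

```typescript
// Rebuild a typed personalisation context from middleware-injected headers.
// Works against the standard Headers object, so it is edge- and server-safe.
interface UserContext {
  userId: string | null;
  tier: "free" | "pro";
}

function contextFromHeaders(h: Headers): UserContext {
  const userId = h.get("x-user-id") || null; // middleware sends "" when absent
  const tier = h.get("x-user-tier") === "pro" ? "pro" : "free";
  return { userId, tier };
}
```

A server component would call this with the incoming request headers and render user-specific fragments, while the surrounding page still serves from cache.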

Edge-Compatible Libraries

Many popular libraries use Node.js APIs not available in edge runtimes. Alternatives:

| Instead of | Use for Edge |
| --- | --- |
| jsonwebtoken | jose |
| bcrypt | bcryptjs or Web Crypto subtle.digest |
| pg (node-postgres) | @neondatabase/serverless |
| ioredis | @upstash/redis |
| nodemailer | resend or the SendGrid HTTP API |
| axios | fetch (built-in) |

If a library uses process.env for configuration only, it usually works fine in edge environments.
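
The checkRateLimit helper referenced in the middleware earlier could be sketched as a fixed-window counter. This version keeps its state in isolate memory purely for illustration; real edge rate limiting needs a shared store (such as @upstash/redis), because every edge location runs its own isolates, and the store-backed version would be async.

```typescript
// Fixed-window rate limiter. In-memory state for illustration only:
// at the edge, back this with a shared store so all locations agree.
const windows = new Map<string, { count: number; resetAt: number }>();

function checkRateLimit(
  ip: string,
  limit: number = 10,
  windowMs: number = 60_000,
  now: number = Date.now() // injectable clock for testing
): { allowed: boolean; remaining: number } {
  const entry = windows.get(ip);
  if (!entry || now >= entry.resetAt) {
    // New window: first request always passes.
    windows.set(ip, { count: 1, resetAt: now + windowMs });
    return { allowed: true, remaining: limit - 1 };
  }
  entry.count++;
  return { allowed: entry.count <= limit, remaining: Math.max(0, limit - entry.count) };
}
```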

When NOT to Use Edge

Edge runtimes are the wrong choice for:

  • Long-running processes: background jobs, video processing, data migrations
  • Heavy computation: ML inference, image transformation at scale
  • Node.js-dependent code: file system operations, native modules, complex ORMs
  • Database connections that need connection pooling: most traditional database drivers don't work at edge; use HTTP-based database APIs instead

For these workloads, traditional serverless (Lambda) or containerised environments are still the right choice. Edge is a complement, not a replacement.

The Migration Path

  1. Start with middleware: A/B testing, redirects, authentication headers. Zero application changes required.
  2. Move latency-sensitive API routes to the edge runtime, replacing incompatible libraries.
  3. Add geo-targeting and personalisation once you're comfortable with edge middleware.
  4. Evaluate database access: if your database adds 50ms of latency but the edge saves 100ms of network round trips, you still come out ahead.
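
Step 4's trade-off is simple arithmetic. With illustrative numbers (150ms user-to-origin RTT, 15ms user-to-edge, a database 5ms from the origin but 50ms from the edge):

```typescript
// Rough totals for one request that makes a single database round trip.
// All latencies are illustrative assumptions, not measurements.
const originPath = 150 /* user→origin RTT */ + 5;  /* origin→DB */
const edgePath = 15 /* user→edge RTT */ + 50;      /* edge→DB */

const edgeWins = edgePath < originPath;
```

The comparison flips once a function makes many sequential database round trips, since each one pays the full edge-to-database distance; that is exactly the connection-pooling caveat in the list above.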

If you're building a performance-critical web application and want to incorporate edge computing, Codolve can architect the right solution for your traffic patterns.

Frequently Asked Questions

Does edge computing work with any database?

Edge runtimes can connect to databases, but they require HTTP-based or edge-compatible database clients. Neon (PostgreSQL), PlanetScale (MySQL), Turso (SQLite via libSQL), and Upstash (Redis and QStash) all offer edge-compatible clients. Traditional database drivers that use TCP connections and connection pooling don't work in edge environments.

Is edge computing more expensive than traditional servers?

It depends on usage patterns. Edge is typically priced per request and execution time, not per hour of server uptime. For bursty, globally distributed traffic, edge is often cheaper. For sustained high-CPU workloads, traditional servers are more cost-effective.

Can I use environment variables at the edge?

Yes. Environment variables set in your hosting platform (Vercel, Cloudflare) are available in edge functions via process.env. They're injected at deploy time, not at runtime.

What's the difference between Vercel Edge Functions and Cloudflare Workers?

Both are edge runtimes with similar capabilities. Vercel Edge Functions integrate natively with Next.js middleware and API routes. Cloudflare Workers have a larger global network (300+ locations), more generous CPU limits, and additional primitives like KV storage and Durable Objects. For Next.js projects, Vercel's integration is simpler. For maximum global coverage or non-Next.js projects, Cloudflare is compelling.

Do edge functions affect SEO?

Positively. Faster TTFB from edge delivery directly improves Core Web Vitals (especially LCP) and is a ranking signal. Geo-based redirects at the edge also improve localised content delivery without SEO penalties when canonical tags are properly configured.

Tags

#edge-computing #cloudflare #vercel #performance #latency #nextjs #middleware
