
Edge Computing for Web Developers: Cloudflare Workers, Deno Deploy, and Vercel Edge

February 19, 2026

Traditional web apps run in one region. A user in Tokyo hits a server in Virginia, and every request travels 12,000 km round trip. Edge computing fixes this by running your code in data centers around the world — milliseconds from your users.

What Is Edge Computing?

Traditional (single region):
  User (Tokyo) → 12,000 km → Server (Virginia) → 12,000 km → User
  Latency: ~200ms round trip

Edge (distributed):
  User (Tokyo) → 50 km → Edge (Tokyo) → 50 km → User
  Latency: ~10ms round trip

Edge functions run in 200+ locations worldwide. When a user makes a request, it's handled by the nearest edge location — no cross-ocean round trips.

Edge vs Serverless vs Traditional

              Traditional              Serverless               Edge
Runs in       1-3 regions              1-3 regions              200+ locations
Cold start    None (always on)         100ms-5s                 < 10ms
Runtime       Any                      Node.js, Python, etc.    V8 isolates (JS/TS/Wasm)
Latency       High for distant users   High for distant users   Low everywhere
Cost          Pay for idle             Pay per invocation       Pay per invocation
Limits        None                     15 min timeout           10-50ms CPU time

Cloudflare Workers

Cloudflare Workers run on Cloudflare's global network across 300+ cities. They use V8 isolates (the same engine as Chrome) instead of containers, giving sub-millisecond cold starts.

Getting Started

# Install Wrangler CLI
npm install -g wrangler

# Login
wrangler login

# Create a project
wrangler init my-worker
cd my-worker

Basic Worker

// src/index.ts
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url)

    if (url.pathname === '/api/hello') {
      return Response.json({
        message: 'Hello from the edge!',
        location: request.cf?.city || 'unknown',
        country: request.cf?.country || 'unknown',
      })
    }

    return new Response('Not found', { status: 404 })
  },
}
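Because Workers use the standard Request and Response classes, a handler can be unit-tested by calling its fetch method directly — no Wrangler, no network. A minimal sketch (a trimmed-down handler in the same shape as above; Request and Response are global in modern Node, Deno, and Bun):

```typescript
// A handler with the same shape as the Worker above (env omitted for brevity)
const worker = {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url)
    if (url.pathname === '/api/hello') {
      return Response.json({ message: 'Hello from the edge!' })
    }
    return new Response('Not found', { status: 404 })
  },
}

// Invoke it exactly as the runtime would
const res = await worker.fetch(new Request('https://example.com/api/hello'))
const body = await res.json()
```

The same trick works in CI: construct a Request, await the Response, and assert on status and body.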

Edge API with Routing

// src/index.ts
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url)
    const path = url.pathname

    // CORS headers
    const corsHeaders = {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Methods': 'GET, POST, OPTIONS',
      'Access-Control-Allow-Headers': 'Content-Type',
    }

    if (request.method === 'OPTIONS') {
      return new Response(null, { headers: corsHeaders })
    }

    try {
      if (path === '/api/products' && request.method === 'GET') {
        return await getProducts(env, corsHeaders)
      }

      if (path === '/api/products' && request.method === 'POST') {
        return await createProduct(request, env, corsHeaders)
      }

      return Response.json({ error: 'Not found' }, { status: 404, headers: corsHeaders })
    } catch (err) {
      return Response.json({ error: 'Internal error' }, { status: 500, headers: corsHeaders })
    }
  },
}

async function getProducts(env: Env, headers: Record<string, string>) {
  const products = await env.KV.get('products', 'json') || []
  return Response.json(products, { headers })
}

async function createProduct(request: Request, env: Env, headers: Record<string, string>) {
  const body = await request.json()
  const products = await env.KV.get('products', 'json') || []
  products.push({ id: crypto.randomUUID(), ...body })
  await env.KV.put('products', JSON.stringify(products))
  return Response.json(products, { status: 201, headers })
}

Cloudflare KV (Key-Value Storage)

Globally distributed key-value storage with eventual consistency:

# wrangler.toml
[[kv_namespaces]]
binding = "KV"
id = "abc123"

// Read
const value = await env.KV.get('key')
const data = await env.KV.get('key', 'json')

// Write
await env.KV.put('key', 'value')
await env.KV.put('key', JSON.stringify(data), {
  expirationTtl: 3600, // 1 hour
})

// Delete
await env.KV.delete('key')

// List
const list = await env.KV.list({ prefix: 'user:' })
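A common KV pattern is cache-aside: return the cached value, or recompute on a miss and store it with a TTL. A minimal sketch with hypothetical helpers (getOrCompute, memoryKV), using a Map-backed stand-in for the env.KV binding so it runs anywhere — the real binding has the same get/put shape:

```typescript
// Minimal subset of the KV interface used here
interface KVLike {
  get(key: string): Promise<string | null>
  put(key: string, value: string, opts?: { expirationTtl?: number }): Promise<void>
}

// Map-backed stand-in for local runs (TTL ignored for brevity)
function memoryKV(): KVLike {
  const store = new Map<string, string>()
  return {
    async get(key) { return store.get(key) ?? null },
    async put(key, value) { store.set(key, value) },
  }
}

// Cache-aside: return cached JSON, or compute, store with a TTL, and return it
async function getOrCompute<T>(
  kv: KVLike,
  key: string,
  compute: () => Promise<T>,
  ttl = 3600,
): Promise<T> {
  const cached = await kv.get(key)
  if (cached !== null) return JSON.parse(cached)
  const fresh = await compute()
  await kv.put(key, JSON.stringify(fresh), { expirationTtl: ttl })
  return fresh
}
```

In a Worker you would pass env.KV instead of memoryKV(); the second call for the same key is served from the cache without invoking compute again.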

D1 (Edge SQL Database)

Full SQLite database at the edge:

# wrangler.toml
[[d1_databases]]
binding = "DB"
database_name = "my-app"
database_id = "abc123"

// Query
const { results } = await env.DB.prepare(
  'SELECT * FROM users WHERE email = ?'
).bind(email).all()

// Insert
await env.DB.prepare(
  'INSERT INTO users (name, email) VALUES (?, ?)'
).bind(name, email).run()

// Batch operations
await env.DB.batch([
  env.DB.prepare('INSERT INTO logs (event) VALUES (?)').bind('signup'),
  env.DB.prepare('UPDATE stats SET count = count + 1 WHERE key = ?').bind('signups'),
])

Deploy

# Dev server
wrangler dev

# Deploy to production
wrangler deploy

Deno Deploy

Deno Deploy runs Deno on a global edge network. It's the simplest edge platform — zero config, instant deploys.

Getting Started

# Install Deno
curl -fsSL https://deno.land/install.sh | sh

# Create a project
mkdir my-edge-api && cd my-edge-api

Basic Edge Server

// main.ts
Deno.serve((request: Request) => {
  const url = new URL(request.url)

  if (url.pathname === '/') {
    return new Response('Hello from Deno Deploy!', {
      headers: { 'Content-Type': 'text/plain' },
    })
  }

  if (url.pathname === '/api/time') {
    return Response.json({
      time: new Date().toISOString(),
      region: Deno.env.get('DENO_REGION') || 'local',
    })
  }

  return new Response('Not found', { status: 404 })
})
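As routes accumulate, the if-chain can be factored into a small method-plus-path table. A hand-rolled sketch (createRouter is a hypothetical helper, not part of Deno) — since the router is a plain function, it can be exercised without starting a server:

```typescript
type Handler = (request: Request) => Response | Promise<Response>

// Route table keyed by "METHOD /path"; falls through to 404
function createRouter(routes: Record<string, Handler>): Handler {
  return (request) => {
    const { pathname } = new URL(request.url)
    const handler = routes[`${request.method} ${pathname}`]
    return handler ? handler(request) : new Response('Not found', { status: 404 })
  }
}

const route = createRouter({
  'GET /': () => new Response('Hello from Deno Deploy!'),
  'GET /api/time': () => Response.json({ time: new Date().toISOString() }),
})

// In main.ts you would pass it straight to the server: Deno.serve(route)
```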

With Deno KV (Built-in Database)

Deno KV is a globally replicated database built into Deno:

// main.ts
const kv = await Deno.openKv()

Deno.serve(async (request: Request) => {
  const url = new URL(request.url)

  if (url.pathname === '/api/visits') {
    // Atomic increment: kv.atomic().sum() avoids read-modify-write races
    const key = ['visits', 'total']
    await kv.atomic().sum(key, 1n).commit()
    const current = await kv.get<Deno.KvU64>(key)
    const count = Number(current.value?.value ?? 0n)

    return Response.json({ visits: count })
  }

  if (url.pathname === '/api/users' && request.method === 'POST') {
    const { name, email } = await request.json()
    const id = crypto.randomUUID()

    // Store with automatic expiry
    await kv.set(['users', id], { id, name, email }, {
      expireIn: 86400000, // 24 hours
    })

    return Response.json({ id, name, email }, { status: 201 })
  }

  if (url.pathname.startsWith('/api/users/') && request.method === 'GET') {
    const id = url.pathname.split('/').pop() ?? ''
    const user = await kv.get(['users', id])

    if (!user.value) {
      return Response.json({ error: 'Not found' }, { status: 404 })
    }

    return Response.json(user.value)
  }

  return new Response('Not found', { status: 404 })
})

Deploy

# Link to Deno Deploy
deployctl deploy --project=my-api main.ts

# Or connect your GitHub repo for auto-deploys

Vercel Edge Runtime

Vercel's Edge Runtime runs inside Next.js and other frameworks. It's the easiest way to add edge functions to an existing app.

Edge API Routes in Next.js

// app/api/geo/route.ts
export const runtime = 'edge'

export async function GET(request: Request) {
  const { geo } = request as any

  return Response.json({
    country: geo?.country || 'unknown',
    city: geo?.city || 'unknown',
    region: geo?.region || 'unknown',
  })
}

That's it. Add export const runtime = 'edge' and your API route runs at the edge. Note that request.geo was removed in Next.js 15; on Vercel, the geolocation() helper from @vercel/functions provides the same data.

Edge Middleware

Run code before every request — at the edge:

// middleware.ts (in project root)
import { NextResponse } from 'next/server'
import type { NextRequest } from 'next/server'

export function middleware(request: NextRequest) {
  const country = request.geo?.country || 'US'
  const url = request.nextUrl

  // Redirect based on geography
  if (country === 'DE' && !url.pathname.startsWith('/de')) {
    return NextResponse.redirect(new URL(`/de${url.pathname}`, url))
  }

  // A/B testing: keep the user's existing bucket if one is already set
  const existing = request.cookies.get('ab-bucket')?.value
  const bucket = existing ?? (Math.random() < 0.5 ? 'control' : 'variant')
  const response = NextResponse.next()
  response.cookies.set('ab-bucket', bucket, { maxAge: 86400 })

  // Add security headers
  response.headers.set('X-Frame-Options', 'DENY')
  response.headers.set('X-Content-Type-Options', 'nosniff')

  return response
}

export const config = {
  matcher: ['/((?!_next/static|_next/image|favicon.ico).*)'],
}

Edge-Rendered Pages

// app/products/[id]/page.tsx
export const runtime = 'edge'

export default async function ProductPage({ params }: { params: Promise<{ id: string }> }) {
  const { id } = await params
  const product = await fetch(`https://api.example.com/products/${id}`, {
    next: { revalidate: 60 }, // Cache for 60 seconds at the edge
  }).then(r => r.json())

  return (
    <div>
      <h1>{product.name}</h1>
      <p>{product.description}</p>
      <p>${product.price}</p>
    </div>
  )
}

Common Edge Use Cases

1. Geo-Based Content

// Show different pricing by country
export const runtime = 'edge'

const pricing = {
  US: { currency: 'USD', price: 29 },
  EU: { currency: 'EUR', price: 27 },
  GB: { currency: 'GBP', price: 23 },
  IN: { currency: 'INR', price: 999 },
}

export async function GET(request: Request) {
  const country = (request as any).geo?.country || 'US'
  const plan = pricing[country] || pricing.US

  return Response.json(plan)
}
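To render a plan client-side, Intl.NumberFormat (available at the edge and in browsers) picks the right symbol and decimal rules for each currency. A small sketch with a hypothetical formatPrice helper, using the plan shape from above:

```typescript
type Plan = { currency: string; price: number }

// Format a plan's price for a locale; Intl handles symbol and decimals
function formatPrice(plan: Plan, locale = 'en-US'): string {
  return new Intl.NumberFormat(locale, {
    style: 'currency',
    currency: plan.currency,
  }).format(plan.price)
}

formatPrice({ currency: 'USD', price: 29 }) // "$29.00"
```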

2. Authentication at the Edge

Validate JWTs without hitting your origin server:

// middleware.ts
import { NextResponse } from 'next/server'
import type { NextRequest } from 'next/server'
import { jwtVerify } from 'jose'

export async function middleware(request: NextRequest) {
  const token = request.cookies.get('session')?.value

  if (!token) {
    return NextResponse.redirect(new URL('/login', request.url))
  }

  try {
    const secret = new TextEncoder().encode(process.env.JWT_SECRET)
    await jwtVerify(token, secret)
    return NextResponse.next()
  } catch {
    return NextResponse.redirect(new URL('/login', request.url))
  }
}
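Signature verification requires jwtVerify, but a JWT's payload is just base64url-encoded JSON — decoding it (without trusting it) is useful for logging or for rejecting an obviously expired exp claim before doing any crypto work. A sketch with a hypothetical decodeJwtPayload helper; never use this in place of verification:

```typescript
// Decode a JWT's claims WITHOUT verifying the signature
function decodeJwtPayload(token: string): Record<string, unknown> {
  const segment = token.split('.')[1]
  if (!segment) throw new Error('Malformed token')
  // base64url -> base64, then decode (atob is a Web API, available at the edge)
  const json = atob(segment.replace(/-/g, '+').replace(/_/g, '/'))
  return JSON.parse(json)
}
```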

3. Rate Limiting

KV counters are eventually consistent, so limits enforced this way are approximate: good for coarse abuse protection, not exact quotas (use Durable Objects when you need exact counts).

// Cloudflare Worker with approximate rate limiting via KV
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const ip = request.headers.get('CF-Connecting-IP') || 'unknown'
    const key = `rate:${ip}`

    const current = parseInt(await env.KV.get(key) || '0')

    if (current >= 100) {
      return Response.json(
        { error: 'Rate limit exceeded' },
        { status: 429, headers: { 'Retry-After': '60' } }
      )
    }

    await env.KV.put(key, String(current + 1), { expirationTtl: 60 })

    // Forward to origin
    return fetch(request)
  },
}
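The KV counter above is a fixed-window check. Another option is an in-memory token bucket per client, which smooths bursts — keeping in mind that isolate memory is per-location and resets on eviction, so this is a local throttle, not a global limiter. A sketch with a hypothetical createBucket helper (the clock is injectable for testing):

```typescript
// Token bucket: `capacity` requests, refilled at `refillPerSec` tokens/second
function createBucket(capacity: number, refillPerSec: number, now = () => Date.now()) {
  let tokens = capacity
  let last = now()
  return {
    take(): boolean {
      const t = now()
      // Refill based on elapsed time, capped at capacity
      tokens = Math.min(capacity, tokens + ((t - last) / 1000) * refillPerSec)
      last = t
      if (tokens < 1) return false
      tokens -= 1
      return true
    },
  }
}
```

In a Worker you would keep one bucket per IP in a module-level Map and return a 429 when take() is false.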

4. Image Optimization

// Resize and convert images at the edge
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url)
    const width = parseInt(url.searchParams.get('w') || '800')
    const format = url.searchParams.get('f') || 'webp'
    const imageUrl = url.searchParams.get('url')

    if (!imageUrl) {
      return new Response('Missing url parameter', { status: 400 })
    }

    // Cloudflare Image Resizing
    return fetch(imageUrl, {
      cf: {
        image: {
          width,
          format,
          quality: 80,
          fit: 'cover',
        },
      },
    })
  },
}

5. Feature Flags

// Edge-based feature flags — no latency overhead
export const runtime = 'edge'

const flags = {
  newCheckout: { enabled: true, percentage: 25 },
  darkMode: { enabled: true, percentage: 100 },
  betaFeature: { enabled: false, percentage: 0 },
}

export async function GET(request: Request) {
  const userId = request.headers.get('x-user-id') || 'anonymous'

  const userFlags = Object.fromEntries(
    Object.entries(flags).map(([key, flag]) => {
      if (!flag.enabled) return [key, false]
      // Deterministic bucketing based on user ID
      const hash = hashCode(`${userId}:${key}`)
      return [key, (hash % 100) < flag.percentage]
    })
  )

  return Response.json(userFlags)
}

function hashCode(str: string): number {
  let hash = 0
  for (let i = 0; i < str.length; i++) {
    hash = ((hash << 5) - hash) + str.charCodeAt(i)
    hash |= 0
  }
  return Math.abs(hash)
}

Edge Limitations

What You Can't Do at the Edge

Limitation                      Why                               Workaround
No Node.js APIs                 Edge uses V8, not Node            Use Web APIs (fetch, crypto, URL)
Limited CPU time                10-50ms per request               Offload heavy work to serverless
No filesystem                   Stateless by design               Use KV, D1, or external databases
No native modules               V8 isolates only                  Use Wasm for native-speed code
Limited libraries               Many npm packages use Node APIs   Check compatibility first
No WebSockets (some platforms)  Connection limits                 Use Durable Objects (Cloudflare)

When NOT to Use Edge

  • CPU-heavy tasks — Image processing, ML inference, video encoding
  • Long-running operations — Anything over 30 seconds
  • Complex database queries — Edge databases are limited
  • Node.js-specific code — fs, child_process, native addons

Platform Comparison

Feature       Cloudflare Workers   Deno Deploy     Vercel Edge
Locations     300+                 35+             18+ (via Cloudflare)
Runtime       V8 isolates          Deno (V8)       V8 isolates
Language      JS/TS/Wasm           JS/TS           JS/TS
Cold start    < 1ms                < 10ms          < 10ms
CPU limit     10-50ms              50ms            25ms
Database      KV, D1, R2           Deno KV         Vercel KV, Postgres
Free tier     100K req/day         1M req/month    1M req/month
Best for      APIs, middleware     Full apps       Next.js apps

Quick Reference

// Cloudflare Worker
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    return new Response('Hello from Cloudflare!')
  },
}

// Deno Deploy
Deno.serve((request: Request) => {
  return new Response('Hello from Deno!')
})

// Vercel Edge (Next.js)
export const runtime = 'edge'
export async function GET() {
  return Response.json({ message: 'Hello from Vercel Edge!' })
}

Summary

Edge computing is no longer experimental — it's a practical tool for reducing latency:

  1. Auth and middleware — Validate tokens at the edge, never hit origin
  2. Geo-routing — Serve different content based on location
  3. API responses — Cache and serve data from the nearest edge
  4. Feature flags — Zero-latency flag evaluation
  5. Rate limiting — Block abuse before it reaches your servers

Start with your highest-traffic, lowest-complexity endpoints. Move auth checks and API caching to the edge first, then expand as you learn the platform. The latency reduction is immediate and measurable.
