Traditional web apps run in one region. A user in Tokyo hits a server in Virginia, and every request crosses 12,000 km each way. Edge computing fixes this by running your code in data centers around the world — milliseconds from your users.
What Is Edge Computing?
Traditional (single region):
User (Tokyo) → 12,000km → Server (Virginia) → 12,000km → User
Latency: ~200ms round trip
Edge (distributed):
User (Tokyo) → 50km → Edge (Tokyo) → 50km → User
Latency: ~10ms round trip
Edge functions run in 200+ locations worldwide. When a user makes a request, it's handled by the nearest edge location — no cross-ocean round trips.
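Those round-trip numbers are mostly physics. A back-of-the-envelope sketch (function and constant names are illustrative; light in fiber travels at roughly 200,000 km/s, about two-thirds of its vacuum speed — routing and server time add the rest of the observed ~200ms):

```typescript
// Rough propagation delay over fiber, ignoring routing and processing overhead.
const FIBER_SPEED_KM_PER_S = 200_000 // ~2/3 the speed of light in a vacuum

function propagationDelayMs(roundTripKm: number): number {
  return (roundTripKm / FIBER_SPEED_KM_PER_S) * 1000
}

// Tokyo -> Virginia -> Tokyo: at least 24,000 km of fiber
console.log(propagationDelayMs(24_000)) // 120 (ms, before any routing or server time)

// Tokyo -> nearby edge -> Tokyo: ~100 km
console.log(propagationDelayMs(100)) // 0.5 (ms)
```

No amount of server optimization removes that 120ms floor; moving the compute closer is the only fix.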
Edge vs Serverless vs Traditional
| | Traditional | Serverless | Edge |
|---|---|---|---|
| Runs in | 1-3 regions | 1-3 regions | 200+ locations |
| Cold start | None (always on) | 100ms-5s | < 10ms |
| Runtime | Any | Node.js, Python, etc. | V8 isolates (JS/TS/Wasm) |
| Latency | High for distant users | High for distant users | Low everywhere |
| Cost | Pay for idle | Pay per invocation | Pay per invocation |
| Limits | None | 15 min timeout | 10-50ms CPU time |
Cloudflare Workers
Cloudflare Workers run on Cloudflare's global network across 300+ cities. They use V8 isolates (the same engine as Chrome) instead of containers, giving sub-millisecond cold starts.
Getting Started
# Install Wrangler CLI
npm install -g wrangler
# Login
wrangler login
# Create a project
wrangler init my-worker
cd my-worker
Basic Worker
// src/index.ts
export default {
async fetch(request: Request, env: Env): Promise<Response> {
const url = new URL(request.url)
if (url.pathname === '/api/hello') {
return Response.json({
message: 'Hello from the edge!',
location: request.cf?.city || 'unknown',
country: request.cf?.country || 'unknown',
})
}
return new Response('Not found', { status: 404 })
},
}
Edge API with Routing
// src/index.ts
export default {
async fetch(request: Request, env: Env): Promise<Response> {
const url = new URL(request.url)
const path = url.pathname
// CORS headers
const corsHeaders = {
'Access-Control-Allow-Origin': '*',
'Access-Control-Allow-Methods': 'GET, POST, OPTIONS',
'Access-Control-Allow-Headers': 'Content-Type',
}
if (request.method === 'OPTIONS') {
return new Response(null, { headers: corsHeaders })
}
try {
if (path === '/api/products' && request.method === 'GET') {
return await getProducts(env, corsHeaders)
}
if (path === '/api/products' && request.method === 'POST') {
return await createProduct(request, env, corsHeaders)
}
return Response.json({ error: 'Not found' }, { status: 404, headers: corsHeaders })
} catch (err) {
return Response.json({ error: 'Internal error' }, { status: 500, headers: corsHeaders })
}
},
}
async function getProducts(env: Env, headers: Record<string, string>) {
const products = await env.KV.get('products', 'json') || []
return Response.json(products, { headers })
}
async function createProduct(request: Request, env: Env, headers: Record<string, string>) {
const body = await request.json()
// Read-modify-write on eventually consistent KV can drop concurrent writes; use D1 for transactional data
const products = await env.KV.get('products', 'json') || []
products.push({ id: crypto.randomUUID(), ...body })
await env.KV.put('products', JSON.stringify(products))
return Response.json(products, { status: 201, headers })
}
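createProduct stores whatever request.json() returns. A minimal validation step (the NewProduct shape and parseProduct helper are illustrative, not part of the Workers API) keeps malformed bodies out of KV:

```typescript
// Narrow an unknown JSON body to the fields we actually store.
interface NewProduct {
  name: string
  price: number
}

function parseProduct(body: unknown): NewProduct | null {
  if (typeof body !== 'object' || body === null) return null
  const { name, price } = body as Record<string, unknown>
  if (typeof name !== 'string' || name.length === 0) return null
  if (typeof price !== 'number' || price < 0) return null
  return { name, price }
}

console.log(parseProduct({ name: 'Widget', price: 9.99 })) // { name: 'Widget', price: 9.99 }
console.log(parseProduct({ name: '' })) // null (empty name rejected)
```

In the worker you would call this on the parsed body and return a 400 when it yields null.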
Cloudflare KV (Key-Value Storage)
Globally distributed key-value storage with eventual consistency:
# wrangler.toml
[[kv_namespaces]]
binding = "KV"
id = "abc123"
// Read
const value = await env.KV.get('key')
const data = await env.KV.get('key', 'json')
// Write
await env.KV.put('key', 'value')
await env.KV.put('key', JSON.stringify(data), {
expirationTtl: 3600, // 1 hour
})
// Delete
await env.KV.delete('key')
// List
const list = await env.KV.list({ prefix: 'user:' })
D1 (Edge SQL Database)
Full SQLite database at the edge:
# wrangler.toml
[[d1_databases]]
binding = "DB"
database_name = "my-app"
database_id = "abc123"
// Query
const { results } = await env.DB.prepare(
'SELECT * FROM users WHERE email = ?'
).bind(email).all()
// Insert
await env.DB.prepare(
'INSERT INTO users (name, email) VALUES (?, ?)'
).bind(name, email).run()
// Batch operations
await env.DB.batch([
env.DB.prepare('INSERT INTO logs (event) VALUES (?)').bind('signup'),
env.DB.prepare('UPDATE stats SET count = count + 1 WHERE key = ?').bind('signups'),
])
Deploy
# Dev server
wrangler dev
# Deploy to production
wrangler deploy
Deno Deploy
Deno Deploy runs Deno on a global edge network. It's the simplest edge platform — zero config, instant deploys.
Getting Started
# Install Deno
curl -fsSL https://deno.land/install.sh | sh
# Create a project
mkdir my-edge-api && cd my-edge-api
Basic Edge Server
// main.ts
Deno.serve((request: Request) => {
const url = new URL(request.url)
if (url.pathname === '/') {
return new Response('Hello from Deno Deploy!', {
headers: { 'Content-Type': 'text/plain' },
})
}
if (url.pathname === '/api/time') {
return Response.json({
time: new Date().toISOString(),
region: Deno.env.get('DENO_REGION') || 'local',
})
}
return new Response('Not found', { status: 404 })
})
With Deno KV (Built-in Database)
Deno KV is a globally replicated database built into Deno:
// main.ts
const kv = await Deno.openKv()
Deno.serve(async (request: Request) => {
const url = new URL(request.url)
if (url.pathname === '/api/visits') {
// Read-modify-write increment (not atomic: concurrent requests can lose updates; kv.atomic().sum() gives a true atomic counter)
const key = ['visits', 'total']
const current = await kv.get<number>(key)
const count = (current.value || 0) + 1
await kv.set(key, count)
return Response.json({ visits: count })
}
if (url.pathname === '/api/users' && request.method === 'POST') {
const { name, email } = await request.json()
const id = crypto.randomUUID()
// Store with automatic expiry
await kv.set(['users', id], { id, name, email }, {
expireIn: 86400000, // 24 hours
})
return Response.json({ id, name, email }, { status: 201 })
}
if (url.pathname.startsWith('/api/users/') && request.method === 'GET') {
const id = url.pathname.split('/').pop() ?? ''
const user = await kv.get(['users', id])
if (!user.value) {
return Response.json({ error: 'Not found' }, { status: 404 })
}
return Response.json(user.value)
}
return new Response('Not found', { status: 404 })
})
Deploy
# Deploy with deployctl
deployctl deploy --project=my-api main.ts
# Or connect your GitHub repo for auto-deploys
Vercel Edge Runtime
Vercel's Edge Runtime runs inside Next.js and other frameworks. It's the easiest way to add edge functions to an existing app.
Edge API Routes in Next.js
// app/api/geo/route.ts
export const runtime = 'edge'
export async function GET(request: Request) {
const { geo } = request as any
return Response.json({
country: geo?.country || 'unknown',
city: geo?.city || 'unknown',
region: geo?.region || 'unknown',
})
}
That's it. Add export const runtime = 'edge' and your API route runs at the edge.
Edge Middleware
Run code before every request — at the edge:
// middleware.ts (in project root)
import { NextResponse } from 'next/server'
import type { NextRequest } from 'next/server'
export function middleware(request: NextRequest) {
const country = request.geo?.country || 'US'
const url = request.nextUrl
// Redirect based on geography
if (country === 'DE' && !url.pathname.startsWith('/de')) {
return NextResponse.redirect(new URL(`/de${url.pathname}`, url))
}
// A/B testing: reuse an existing bucket so users stay in one variant across requests
const bucket = request.cookies.get('ab-bucket')?.value ?? (Math.random() < 0.5 ? 'control' : 'variant')
const response = NextResponse.next()
response.cookies.set('ab-bucket', bucket, { maxAge: 86400 })
// Add security headers
response.headers.set('X-Frame-Options', 'DENY')
response.headers.set('X-Content-Type-Options', 'nosniff')
return response
}
export const config = {
matcher: ['/((?!_next/static|_next/image|favicon.ico).*)'],
}
Edge-Rendered Pages
// app/products/[id]/page.tsx
export const runtime = 'edge'
export default async function ProductPage({ params }: { params: Promise<{ id: string }> }) {
const { id } = await params
const product = await fetch(`https://api.example.com/products/${id}`, {
next: { revalidate: 60 }, // Cache for 60 seconds at the edge
}).then(r => r.json())
return (
<div>
<h1>{product.name}</h1>
<p>{product.description}</p>
<p>${product.price}</p>
</div>
)
}
Common Edge Use Cases
1. Geo-Based Content
// Show different pricing by country
export const runtime = 'edge'
const pricing = {
US: { currency: 'USD', price: 29 },
EU: { currency: 'EUR', price: 27 },
GB: { currency: 'GBP', price: 23 },
IN: { currency: 'INR', price: 999 },
}
export async function GET(request: Request) {
const country = (request as any).geo?.country || 'US'
const plan = pricing[country] || pricing.US
return Response.json(plan)
}
2. Authentication at the Edge
Validate JWTs without hitting your origin server:
// middleware.ts
import { jwtVerify } from 'jose'
import { NextResponse } from 'next/server'
import type { NextRequest } from 'next/server'
export async function middleware(request: NextRequest) {
const token = request.cookies.get('session')?.value
if (!token) {
return NextResponse.redirect(new URL('/login', request.url))
}
try {
const secret = new TextEncoder().encode(process.env.JWT_SECRET)
await jwtVerify(token, secret)
return NextResponse.next()
} catch {
return NextResponse.redirect(new URL('/login', request.url))
}
}
3. Rate Limiting
// Cloudflare Worker with rate limiting
export default {
async fetch(request: Request, env: Env): Promise<Response> {
const ip = request.headers.get('CF-Connecting-IP') || 'unknown'
const key = `rate:${ip}`
const current = parseInt(await env.KV.get(key) || '0')
if (current >= 100) {
return Response.json(
{ error: 'Rate limit exceeded' },
{ status: 429, headers: { 'Retry-After': '60' } }
)
}
// Note: each put resets the 60s TTL, and concurrent reads can undercount; treat this as approximate limiting
await env.KV.put(key, String(current + 1), { expirationTtl: 60 })
// Forward to origin
return fetch(request)
},
}
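The KV-backed counter couples the logic to storage. The window logic on its own, as a runtime-agnostic sketch you can unit-test (the class name and in-memory Map are illustrative; in a Worker you would back this with KV or a Durable Object):

```typescript
// Fixed-window rate limiter, storage-agnostic.
class FixedWindowLimiter {
  private counts = new Map<string, { count: number; windowStart: number }>()

  constructor(private limit: number, private windowMs: number) {}

  // Returns true if the request is allowed, false if rate-limited.
  allow(key: string, now: number = Date.now()): boolean {
    const entry = this.counts.get(key)
    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.counts.set(key, { count: 1, windowStart: now })
      return true
    }
    if (entry.count >= this.limit) return false
    entry.count++
    return true
  }
}

const limiter = new FixedWindowLimiter(3, 60_000)
console.log(limiter.allow('1.2.3.4', 0)) // true
console.log(limiter.allow('1.2.3.4', 1)) // true
console.log(limiter.allow('1.2.3.4', 2)) // true
console.log(limiter.allow('1.2.3.4', 3)) // false (limit hit)
console.log(limiter.allow('1.2.3.4', 60_000)) // true (new window)
```

Fixed windows allow up to 2x the limit across a window boundary; a sliding-window or token-bucket variant smooths that out at the cost of more state.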
4. Image Optimization
// Resize and convert images at the edge
export default {
async fetch(request: Request, env: Env): Promise<Response> {
const url = new URL(request.url)
const width = parseInt(url.searchParams.get('w') || '800')
const format = url.searchParams.get('f') || 'webp'
const imageUrl = url.searchParams.get('url')
if (!imageUrl) {
return new Response('Missing url parameter', { status: 400 })
}
// Cloudflare Image Resizing
return fetch(imageUrl, {
cf: {
image: {
width,
format,
quality: 80,
fit: 'cover',
},
},
})
},
}
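The worker above passes query parameters straight through. A small validation helper worth putting in front of any resize endpoint (a sketch; the clamp bounds and format allowlist are illustrative choices, not platform requirements):

```typescript
// Clamp width and allowlist formats so callers can't request absurd
// sizes or unsupported encodings.
const ALLOWED_FORMATS = new Set(['webp', 'avif', 'jpeg', 'png'])

function parseResizeParams(params: URLSearchParams): { width: number; format: string } {
  const rawWidth = parseInt(params.get('w') || '800', 10)
  const width = Math.min(Math.max(Number.isNaN(rawWidth) ? 800 : rawWidth, 16), 2048)
  const rawFormat = params.get('f') || 'webp'
  const format = ALLOWED_FORMATS.has(rawFormat) ? rawFormat : 'webp'
  return { width, format }
}

console.log(parseResizeParams(new URLSearchParams('w=99999&f=bmp')))
// { width: 2048, format: 'webp' }
```

Unvalidated resize parameters are also a cost problem: each unique width/format pair is a separate cache entry and a separate transform.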
5. Feature Flags
// Edge-based feature flags — no latency overhead
export const runtime = 'edge'
const flags = {
newCheckout: { enabled: true, percentage: 25 },
darkMode: { enabled: true, percentage: 100 },
betaFeature: { enabled: false, percentage: 0 },
}
export async function GET(request: Request) {
const userId = request.headers.get('x-user-id') || 'anonymous'
const userFlags = Object.fromEntries(
Object.entries(flags).map(([key, flag]) => {
if (!flag.enabled) return [key, false]
// Deterministic bucketing based on user ID
const hash = hashCode(`${userId}:${key}`)
return [key, (hash % 100) < flag.percentage]
})
)
return Response.json(userFlags)
}
function hashCode(str: string): number {
let hash = 0
for (let i = 0; i < str.length; i++) {
hash = ((hash << 5) - hash) + str.charCodeAt(i)
hash |= 0
}
return Math.abs(hash)
}
Edge Limitations
What You Can't Do at the Edge
| Limitation | Why | Workaround |
|---|---|---|
| No Node.js APIs | Edge uses V8, not Node | Use Web APIs (fetch, crypto, URL) |
| Limited CPU time | 10-50ms per request | Offload heavy work to serverless |
| No filesystem | Stateless by design | Use KV, D1, or external databases |
| No native modules | V8 isolates only | Use Wasm for native-speed code |
| Limited libraries | Many npm packages use Node APIs | Check compatibility first |
| No WebSockets (some platforms) | Connection limits | Use Durable Objects (Cloudflare) |
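The "use Web APIs" workaround in practice: everything below runs identically in Workers, Deno, and modern Node, with no Node-specific imports.

```typescript
// URL and query parsing with the Web URL API (no Node `url`/`querystring`)
const url = new URL('https://example.com/search?q=edge&page=2')
console.log(url.pathname) // "/search"
console.log(url.searchParams.get('q')) // "edge"
console.log([...url.searchParams.keys()]) // ["q", "page"]

// Text encoding with TextEncoder (no Node `Buffer`)
const bytes = new TextEncoder().encode('edge')
console.log(bytes.length) // 4
```

A quick compatibility test: if a dependency imports fs, path, or Buffer anywhere in its tree, assume it won't run at the edge until proven otherwise.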
When NOT to Use Edge
- CPU-heavy tasks — Image processing, ML inference, video encoding
- Long-running operations — Anything over 30 seconds
- Complex database queries — Edge databases are limited
- Node.js-specific code — fs, child_process, native addons
Platform Comparison
| Feature | Cloudflare Workers | Deno Deploy | Vercel Edge |
|---|---|---|---|
| Locations | 300+ | 35+ | 18+ (via Cloudflare) |
| Runtime | V8 isolates | Deno (V8) | V8 isolates |
| Language | JS/TS/Wasm | JS/TS | JS/TS |
| Cold start | < 1ms | < 10ms | < 10ms |
| CPU limit | 10-50ms | 50ms | 25ms |
| Database | KV, D1, R2 | Deno KV | Vercel KV, Postgres |
| Free tier | 100K req/day | 1M req/month | 1M req/month |
| Best for | APIs, middleware | Full apps | Next.js apps |
Quick Reference
// Cloudflare Worker
export default {
async fetch(request: Request, env: Env): Promise<Response> {
return new Response('Hello from Cloudflare!')
},
}
// Deno Deploy
Deno.serve((request: Request) => {
return new Response('Hello from Deno!')
})
// Vercel Edge (Next.js)
export const runtime = 'edge'
export async function GET() {
return Response.json({ message: 'Hello from Vercel Edge!' })
}
Summary
Edge computing is no longer experimental — it's a practical tool for reducing latency:
- Auth and middleware — Validate tokens at the edge, never hit origin
- Geo-routing — Serve different content based on location
- API responses — Cache and serve data from the nearest edge
- Feature flags — Zero-latency flag evaluation
- Rate limiting — Block abuse before it reaches your servers
Start with your highest-traffic, lowest-complexity endpoints. Move auth checks and API caching to the edge first, then expand as you learn the platform. The latency reduction is immediate and measurable.