How I Got 100 on Google PageSpeed Insights (and How I Keep It There)

April 30, 2026

Performance scores are not a destination — they are a current reading. You can hit 100 today and be at 85 in two months because you added a new font, a heavier library, or a lazy-loaded component that isn't actually lazy anymore. The score matters, but the habit matters more.

This post covers the specific fixes that got my Next.js site to 100 on desktop and pushed mobile from 90 toward 95+, and the monthly routine I use to stop it from quietly degrading.

What the Audit Actually Found

I ran a PageSpeed Insights audit on both mobile and desktop and exported the full reports. Here is what was flagged:

Mobile (score: 90)

Metric                      Value    Status
Largest Contentful Paint    2.7s     Needs improvement (threshold: 2.5s)
Total Blocking Time         270ms    Needs improvement (threshold: 200ms)
Cumulative Layout Shift     0        Perfect
First Contentful Paint      0.9s     Good
Speed Index                 3.6s     Needs improvement

Desktop (score: 100) — all metrics already green, but with flagged opportunities:

  • Render-blocking requests: 160ms savings possible
  • Unused JavaScript: 86 KiB
  • Legacy JavaScript: 14 KiB
  • 3 long main-thread tasks

Desktop was perfect on score but still had headroom. Mobile had two metrics in the orange zone.

The Fixes, One by One

1. Add priority to the Above-Fold Image

The LCP element on my homepage is the profile picture in the <h1>. It was an <Image> from next/image without the priority prop.

Without priority, Next.js treats an image as non-critical. The browser discovers it during HTML parsing, waits for the layout to be calculated, then requests it. With priority, Next.js adds a <link rel="preload" as="image" fetchpriority="high"> tag to <head> — the browser fetches it immediately alongside the HTML, before the layout engine even runs.

// Before
<Image src="/profile.png" alt="Sabaoon" width={52} height={52} />

// After
<Image src="/profile.png" alt="Sabaoon" width={52} height={52} priority />

One attribute. The most impactful change in the entire audit. If you have an image above the fold and your LCP is slow, this is the first thing to check.

Rule: any image that is visible without scrolling on first load should have priority.

2. Reduce Font Weights

I was loading Ubuntu in four weights: 300, 400, 500, and 700. Each weight is a separate network request for a separate font file. I was using all four, but the 500 weight (font-medium) was the least critical — dropping it saves one font file request on every page load, with a negligible visual difference since font-weight: 500 simply falls back to the nearest available weight and renders with the 400 file.

// Before: 4 font files loaded
const ubuntu = Ubuntu({
  weight: ['300', '400', '500', '700'],
  display: 'swap',
})

// After: 3 font files
const ubuntu = Ubuntu({
  weight: ['300', '400', '700'],
  display: 'swap',
  preload: true,
})

I also added preload: true explicitly to make it clear this is the primary font and should be prioritised. next/font defaults to preloading when preload is not specified, but being explicit documents the intent.

Rule: only load font weights you verifiably use. Check each weight class against your Tailwind config or CSS. Anything unused is a free network request to eliminate.
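That audit can be sketched as a shell one-liner, assuming Tailwind's default weight utilities (font-light=300, font-normal=400, font-medium=500, font-bold=700) — run it from the project root:

```shell
# Count how often each Tailwind font-weight utility appears in the source tree;
# any weight with zero hits is a font file you can stop shipping.
grep -rEho --exclude-dir=node_modules 'font-(light|normal|medium|bold)' . \
  | sort | uniq -c | sort -rn
```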

3. Lazy-Load Heavy Client Components

The Total Blocking Time (270ms) was driven by JavaScript being parsed and executed on the main thread during page load. I found two large client components in the initial bundle that didn't need to be there:

  • AuthButton — ~350 lines handling Google, Apple, and email authentication
  • MobileProfileSheet — another ~350 lines with the same auth logic for mobile

Both components check localStorage before touching Firebase, so most visitors (who aren't signed in) never trigger the auth code anyway. But the 700 lines of JavaScript were still being parsed on every page load, contributing to long tasks.

The fix: move them into deferred chunks using next/dynamic.

AuthButton lives in a server component (nav.tsx), so ssr: false isn't allowed directly there. The solution is a thin client wrapper:

// auth-button-lazy.tsx
'use client'
import dynamic from 'next/dynamic'

const AuthButton = dynamic(
  () => import('./auth-button').then(m => ({ default: m.AuthButton })),
  { ssr: false, loading: () => null }
)

export function AuthButtonLazy() {
  return <AuthButton />
}

// nav.tsx (server component) — use the wrapper instead
import { AuthButtonLazy } from './auth-button-lazy'

For MobileProfileSheet in the already-client mobile-nav.tsx:

const MobileProfileSheet = dynamic(
  () => import('./mobile-profile-sheet').then(m => ({ default: m.MobileProfileSheet })),
  { ssr: false, loading: () => null }
)

These components now load in a separate chunk, after the main content is visible and interactive. The main-thread parse work is deferred to idle time.

Rule: any client component that is not immediately visible or interactive on page load is a candidate for next/dynamic. Auth modals, sidebar panels, heavy form components — none of these need to block first paint.
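The same defer-until-first-use idea applies beyond components. A generic sketch — loadOnce is a hypothetical helper, and a Node built-in stands in for the heavy module:

```typescript
// Hypothetical helper: wrap a dynamic import so the module is fetched lazily
// and at most once, no matter how many call sites await it.
function loadOnce<T>(loader: () => Promise<T>): () => Promise<T> {
  let cached: Promise<T> | null = null
  return () => (cached ??= loader())
}

// Demonstrated with a Node built-in; in an app this would be the heavy module.
const loadPath = loadOnce(() => import('node:path'))

async function main() {
  const path = await loadPath() // first call triggers the actual load
  console.log(path.posix.join('a', 'b'))
}
main()
```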

4. What I Tried That Made Things Worse

I added experimental.optimizeCss: true to next.config.js. This feature uses critters to inline critical CSS and defer the rest. In theory it should improve render-blocking scores.

In practice, my desktop score dropped from 100 to 95 immediately. The experimental CSS inlining introduced a layout shift that Lighthouse penalised. I reverted it the same day.

// next.config.js — reverted: caused desktop score regression from 100 to 95
module.exports = {
  experimental: {
    optimizeCss: true,
  },
}

The lesson: experimental Next.js features are worth testing, but always measure before and after, and have a one-commit revert ready.

The Monthly Routine

A score is only meaningful at the moment it is measured. New dependencies, new components, and new content all shift performance. I run this checklist once a month — it takes about 20 minutes.

Week 1 of Each Month: Run the Audit

  1. Open PageSpeed Insights
  2. Test both mobile and desktop
  3. Export the HTML report (File → Save Page As) or screenshot it
  4. Compare against last month's numbers

I keep a simple log:

2026-03-01  Mobile: 90  Desktop: 100
2026-04-01  Mobile: 90  Desktop: 100
2026-04-30  Mobile: 92  Desktop: 100  (after priority + lazy-load fixes)
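The log entry can also be automated: the same scores are available from the public PageSpeed Insights API. A sketch assuming curl and jq are installed — log_pagespeed and pagespeed-log.txt are hypothetical names:

```shell
# Hypothetical log helper: fetch the Lighthouse performance score from the
# public PageSpeed Insights API and append a dated line to pagespeed-log.txt.
log_pagespeed() {
  local url="$1" strategy="$2"
  local score
  score=$(curl -s "https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url=${url}&strategy=${strategy}" \
    | jq '.lighthouseResult.categories.performance.score * 100 | round')
  echo "$(date +%F)  ${strategy}: ${score}" >> pagespeed-log.txt
}

# Usage — each call takes ~30s because PSI runs a live Lighthouse audit:
# log_pagespeed "https://www.sabaoon.dev" mobile
# log_pagespeed "https://www.sabaoon.dev" desktop
```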

If any metric regresses by more than 5 points, I investigate before merging anything new.

What to Look For Each Month

LCP creeping up?

  • Did you add a new above-fold image without priority?
  • Did a new font weight or family get added?
  • Is the LCP element changing? (Use the LCP breakdown in PageSpeed)

TBT increasing?

  • Did a new dependency get added to the client bundle?
  • Run ANALYZE=true pnpm build and check the bundle analyser for unexpected large chunks
  • Look for imports of heavy libraries (date-fns, lodash, chart.js) in client components that could be dynamically imported or replaced with lighter alternatives

CLS appearing?

  • Are images missing width and height? (Causes layout shifts as they load)
  • Are fonts loading without display: swap? (Causes text to disappear then reappear)
  • Did any new component inject DOM that shifts existing content?

Unused JS growing?

  • Check for libraries imported in server components that are accidentally pulled into the client bundle
  • Look for 'use client' components that import heavy utilities which are only needed on the server

The Bundle Analyser

The most useful tool for diagnosing JS bloat:

ANALYZE=true pnpm build

This opens a visual treemap of your JavaScript bundles. You can see exactly which package is taking how many kilobytes. Common culprits I find in Next.js projects:

  • firebase — even with lazy imports, some of the app initialization code can leak into the main bundle
  • jsPDF, canvas — should only be loaded on certificate/download pages, never in layout
  • next-mdx-remote — should only be in server components, never client
  • Large icon libraries — if you import lucide-react or react-icons, you want tree-shaking working correctly
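For the ANALYZE=true flag to have any effect, the analyser has to be wired into the Next.js config — a minimal sketch, assuming @next/bundle-analyzer is installed as a dev dependency:

```js
// next.config.js — enable the treemap build only when ANALYZE=true is set
const withBundleAnalyzer = require('@next/bundle-analyzer')({
  enabled: process.env.ANALYZE === 'true',
})

module.exports = withBundleAnalyzer({
  // ...your existing Next.js config
})
```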

Checking for Regressions in CI

For a more automated approach, add a Lighthouse CI check to your pipeline. A basic GitHub Actions config:

- name: Run Lighthouse CI
  uses: treosh/lighthouse-ci-action@v11
  with:
    urls: |
      https://www.sabaoon.dev
    budgetPath: ./lighthouse-budget.json
    uploadArtifacts: true

With a budget file that fails the build if timings exceed their thresholds. (Note that Lighthouse's budget.json format supports timings, resourceSizes, and resourceCounts; a minimum category score is enforced separately through an LHCI assertions config, not through the budget file.)

[
  {
    "path": "/*",
    "timings": [
      { "metric": "largest-contentful-paint", "budget": 2500 },
      { "metric": "total-blocking-time", "budget": 200 }
    ]
  }
]
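A minimum performance score can also be enforced as an LHCI assertion rather than a budget entry. A minimal sketch of a lighthouserc.json, passed to the action via its configPath input (the 0.9 threshold corresponds to a score of 90):

```json
{
  "ci": {
    "assert": {
      "assertions": {
        "categories:performance": ["error", { "minScore": 0.9 }]
      }
    }
  }
}
```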

This turns performance into a gate rather than a periodic check. A PR that would have dropped the score to 85 fails in CI before it reaches production.

The Things That Don't Move the Needle

PageSpeed will flag things that sound alarming but have no practical impact on your score:

  • "Serve static assets with an efficient cache policy" — if you're on Vercel or Netlify, this is already handled
  • "5 user timing marks" — informational, not scored
  • "Legacy JavaScript" for 14 KiB — this is a real issue but ranks low compared to LCP and TBT
  • "Avoid enormous network payloads" at 516 KiB — for a blog/portfolio site, this is fine

Focus on LCP, TBT, and CLS first. They are the three metrics with the most direct scoring impact. Everything else is marginal.

Summary

The fixes that moved the score:

  1. priority on the LCP image — single biggest impact, one word of code
  2. Reduce font weights to what you actually use
  3. next/dynamic for heavy client components not needed on first paint
  4. Measure before touching experimental features

The habit that keeps it there:

  • Monthly PageSpeed run, results logged
  • Bundle analyser before merging large new dependencies
  • Lighthouse CI budget in the pipeline to catch regressions automatically

Performance is not a problem you solve once. It's a property you maintain.
