
How to Build the Best Website with AI — A Complete Guide

March 12, 2026

AI coding assistants have fundamentally changed how we build websites. But there's a massive gap between developers who use AI to generate spaghetti code and those who use it to build production-quality applications. The difference isn't the AI — it's how you use it.

I've spent the last year building production websites almost entirely with AI assistants. This guide distills everything I've learned about getting consistently excellent output from tools like Claude Code, Cursor, and GitHub Copilot. Every technique here is something I use daily.

Let's get into it.

1. The Right Prompting Strategy

Most developers treat AI like a search engine: they type a vague question and hope for the best. That's the wrong mental model. Think of AI as a junior developer who is incredibly fast and knows every API ever written, but has no context about your project unless you provide it.

Be Specific and Contextual

Bad prompt:

Make a navbar

Good prompt:

Create a responsive navigation component in Next.js App Router using TypeScript.
It should have:
- A logo on the left linking to /
- Links to /blog, /projects, /about on the right
- A mobile hamburger menu that slides in from the right
- Use Tailwind CSS for styling
- Follow the existing component patterns in app/components/

The second prompt gives the AI constraints, technology context, and expectations. The output will be dramatically better.

The "Smelly Code" Prompt Technique

This is one of my favorite techniques. After generating any significant piece of code, immediately follow up with a code smell review:

Review the code you just generated for:
- Code smells and anti-patterns
- Unused variables or imports
- Missing error handling
- Accessibility issues
- Performance concerns
- Any hardcoded values that should be constants or env vars
- Violations of DRY principle

Be brutally honest. Pretend you're a senior engineer who's annoyed
about code quality.

This catches an enormous number of issues. AI assistants are actually very good at reviewing code — sometimes better than they are at generating it — because review is a more constrained task.

Chain Your Prompts

Don't try to build everything in one massive prompt. Break it into steps:

  1. Architecture first: "Design the component structure for a dashboard with these features..."
  2. Implementation: "Now implement the UserStats component from that plan..."
  3. Review: "Review this implementation for code smells and edge cases..."
  4. Testing: "Write unit tests for this component covering happy path and error states..."

Each step gives the AI a focused task with clear context from the previous step.

2. WCAG & Accessibility

This is where most AI-generated code falls short. AI assistants will happily produce beautiful-looking markup that is completely inaccessible to screen reader users, keyboard-only users, and other assistive-technology users. You have to explicitly ask for accessibility.

Prompt for Accessibility from the Start

Build an image gallery component that is fully WCAG 2.1 AA compliant.
Requirements:
- All images must have descriptive alt text
- Gallery must be keyboard navigable (arrow keys)
- Focus indicators must be visible
- Use semantic HTML (not divs for everything)
- Include appropriate ARIA labels
- Ensure 4.5:1 color contrast ratio for all text
- Support reduced-motion preferences
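
The reduced-motion requirement is the one AI assistants skip most often unless asked. Honoring it is a small amount of CSS; a minimal sketch (the class name is illustrative):

```css
/* Disable gallery transitions and animations for users who
   have requested reduced motion at the OS level. */
@media (prefers-reduced-motion: reduce) {
  .gallery-slide {
    transition: none;
    animation: none;
  }
}
```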

Before and After: Semantic HTML

AI-generated without accessibility prompt:

<div class="nav">
  <div class="nav-item" onclick="navigate('/home')">Home</div>
  <div class="nav-item" onclick="navigate('/about')">About</div>
  <div class="nav-item" onclick="navigate('/contact')">Contact</div>
</div>

AI-generated with accessibility prompt:

<nav aria-label="Main navigation">
  <ul role="list">
    <li>
      <a href="/home" aria-current="page">Home</a>
    </li>
    <li>
      <a href="/about">About</a>
    </li>
    <li>
      <a href="/contact">Contact</a>
    </li>
  </ul>
</nav>

The second version uses semantic <nav>, proper <a> tags instead of clickable divs, aria-label for the navigation region, and aria-current for the active page. Huge difference for screen reader users, and it's all because of how you prompt.

The Accessibility Audit Prompt

After building any user-facing component, run this:

Audit this component for WCAG 2.1 AA compliance:
- Are all interactive elements keyboard accessible?
- Do all images have meaningful alt text?
- Is the heading hierarchy logical (h1 > h2 > h3)?
- Are form inputs properly labeled?
- Do color combinations meet 4.5:1 contrast ratio?
- Are focus states visible and clear?
- Does it work with prefers-reduced-motion?
- Are ARIA roles and properties used correctly?
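
The keyboard item on that checklist usually reduces to a small piece of focus logic. A framework-free sketch of arrow-key handling for something like the gallery above (the function name and wrapping behavior are illustrative choices, not a standard):

```typescript
// Compute the next focused index for arrow-key navigation.
// Wraps at both ends so keyboard users never hit a dead end.
function nextGalleryIndex(current: number, key: string, count: number): number {
  if (count === 0) return -1;
  switch (key) {
    case 'ArrowRight':
      return (current + 1) % count;
    case 'ArrowLeft':
      return (current - 1 + count) % count;
    case 'Home':
      return 0;
    case 'End':
      return count - 1;
    default:
      return current; // Ignore unrelated keys
  }
}
```

In a component, you would call this from a keydown handler and move focus to the item at the returned index (the roving-tabindex pattern).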

3. SEO Best Practices

AI can generate excellent SEO markup, but only if you know what to ask for. Most developers forget half the SEO essentials.

The SEO-Complete Page Prompt

Generate a Next.js page with complete SEO setup including:
- Dynamic meta title and description
- Open Graph tags (og:title, og:description, og:image, og:type)
- Twitter card meta tags
- JSON-LD structured data for Article schema
- Canonical URL
- Proper heading hierarchy (single h1)
- Semantic HTML structure
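
In the App Router, most of that list lands in a generateMetadata export. Here's a sketch of the object shape you should expect back; the Post fields and URLs are hypothetical, and in a real project this would be an `export async function generateMetadata()` returning Next.js's `Metadata` type:

```typescript
// Hypothetical post shape — adjust to match your content layer.
interface Post {
  title: string;
  summary: string;
  slug: string;
  ogImage: string;
}

// Plain function shown for clarity; in Next.js this body would live
// inside generateMetadata() in the page file.
function buildMetadata(post: Post) {
  const url = `https://yoursite.com/blog/${post.slug}`;
  return {
    title: post.title,
    description: post.summary,
    alternates: { canonical: url },
    openGraph: {
      title: post.title,
      description: post.summary,
      type: 'article',
      url,
      images: [post.ogImage],
    },
    twitter: {
      card: 'summary_large_image',
      title: post.title,
      description: post.summary,
      images: [post.ogImage],
    },
  };
}
```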

JSON-LD Structured Data

This is something most developers skip entirely, but it's incredibly valuable for search visibility. Here's what you should ask AI to generate:

// Ask AI: "Generate JSON-LD structured data for a blog post page"
export default function BlogPost({ post }) {
  const jsonLd = {
    '@context': 'https://schema.org',
    '@type': 'Article',
    headline: post.title,
    description: post.summary,
    author: {
      '@type': 'Person',
      name: 'Your Name',
      url: 'https://yoursite.com',
    },
    datePublished: post.publishedAt,
    dateModified: post.updatedAt || post.publishedAt,
    image: post.ogImage,
    publisher: {
      '@type': 'Organization',
      name: 'Your Site',
      logo: {
        '@type': 'ImageObject',
        url: 'https://yoursite.com/logo.png',
      },
    },
  };

  return (
    <>
      {/* Escape "<" so post content can't break out of the script tag */}
      <script
        type="application/ld+json"
        dangerouslySetInnerHTML={{
          __html: JSON.stringify(jsonLd).replace(/</g, '\\u003c'),
        }}
      />
      <article>{/* content */}</article>
    </>
  );
}

Sitemap and Robots

Always prompt AI for dynamic sitemap generation. In Next.js:

// app/sitemap.ts
export default async function sitemap() {
  const posts = getBlogPosts();

  const blogEntries = posts.map((post) => ({
    url: `https://yoursite.com/blog/${post.slug}`,
    lastModified: new Date(post.publishedAt),
    changeFrequency: 'monthly' as const,
    priority: 0.8,
  }));

  return [
    { url: 'https://yoursite.com', priority: 1.0 },
    { url: 'https://yoursite.com/blog', priority: 0.9 },
    ...blogEntries,
  ];
}
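
The robots half follows the same file convention. A sketch of app/robots.ts — the type is declared locally here so the example is self-contained, but in a real project you would import `MetadataRoute.Robots` from 'next', and the disallowed paths are just placeholders:

```typescript
// Local stand-in for Next.js's MetadataRoute.Robots type.
type Robots = {
  rules: Array<{ userAgent: string; allow?: string; disallow?: string[] }>;
  sitemap: string;
};

// app/robots.ts — Next.js serves this as /robots.txt
export default function robots(): Robots {
  return {
    rules: [
      {
        userAgent: '*',
        allow: '/',
        disallow: ['/api/', '/admin/'], // Hypothetical private paths
      },
    ],
    sitemap: 'https://yoursite.com/sitemap.xml',
  };
}
```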

4. Clean Code & Best Practices

AI loves to generate code that works but violates every principle of clean software engineering. You need to set guardrails.

The Clean Code Prompt

Refactor this code following these principles:
- DRY: Extract duplicated logic into shared functions
- Single Responsibility: Each function does one thing
- Descriptive naming: Variables and functions describe their purpose
- No magic numbers: Extract constants
- TypeScript strict mode: No 'any' types, proper interfaces
- Small functions: Nothing over 30 lines
- Early returns: Avoid deep nesting

Before and After: Component Structure

Over-engineered AI output (common problem):

// AI sometimes generates unnecessary abstractions
const ButtonFactory = createFactory({
  variants: {
    primary: createVariant({ base: 'bg-blue-500', hover: 'bg-blue-600' }),
    secondary: createVariant({ base: 'bg-gray-500', hover: 'bg-gray-600' }),
  },
  sizes: createSizeMap({ sm: 'px-2 py-1', md: 'px-4 py-2', lg: 'px-6 py-3' }),
});

What you actually needed:

interface ButtonProps {
  variant?: 'primary' | 'secondary';
  size?: 'sm' | 'md' | 'lg';
  children: React.ReactNode;
  onClick?: () => void;
}

const styles = {
  variant: {
    primary: 'bg-blue-500 hover:bg-blue-600 text-white',
    secondary: 'bg-gray-500 hover:bg-gray-600 text-white',
  },
  size: {
    sm: 'px-2 py-1 text-sm',
    md: 'px-4 py-2 text-base',
    lg: 'px-6 py-3 text-lg',
  },
} as const;

export function Button({
  variant = 'primary',
  size = 'md',
  children,
  onClick,
}: ButtonProps) {
  return (
    <button
      className={`rounded font-medium ${styles.variant[variant]} ${styles.size[size]}`}
      onClick={onClick}
    >
      {children}
    </button>
  );
}

Tell the AI: "Keep it simple. No factory patterns, no class hierarchies. Just a clean functional component with TypeScript interfaces."

5. Security Concerns

This is critical. AI assistants can introduce security vulnerabilities without realizing it, especially around user input handling and authentication.

The Security Audit Prompt

Review this code for security vulnerabilities:
- XSS: Is user input properly sanitized before rendering?
- CSRF: Are state-changing requests protected?
- SQL Injection: Are queries parameterized?
- Input validation: Are all inputs validated and typed?
- Secrets: Are any API keys or credentials hardcoded?
- Dependencies: Are we using any packages with known vulnerabilities?
- Headers: Are security headers (CSP, HSTS, X-Frame-Options) configured?
- Auth: Are tokens stored securely? Are routes properly protected?

Common AI Security Mistakes

AI will often generate code like this:

// DANGEROUS: AI-generated code that uses dangerouslySetInnerHTML
<div dangerouslySetInnerHTML={{ __html: userComment }} />

Always follow up with: "Is there any user input being rendered without sanitization?" The fix:

import DOMPurify from 'dompurify';

// Sanitized version (for server-rendered code, the isomorphic-dompurify
// package exposes the same API)
<div dangerouslySetInnerHTML={{ __html: DOMPurify.sanitize(userComment) }} />

Environment Variables

Never let AI hardcode secrets. If you see this in generated code, flag it immediately:

// WRONG: AI sometimes does this in examples
const apiKey = 'sk-1234567890abcdef';

// CORRECT: Always use environment variables
const apiKey = process.env.API_SECRET_KEY;
if (!apiKey) throw new Error('API_SECRET_KEY is not configured');

Prompt AI with: "Never hardcode any API keys, secrets, or credentials. Always use environment variables and include validation that they exist."

Content Security Policy

Ask AI to generate proper CSP headers for your framework:

// next.config.js
const securityHeaders = [
  {
    key: 'Content-Security-Policy',
    value: [
      "default-src 'self'",
      // 'unsafe-inline' weakens CSP — prefer nonces or hashes in production
      "script-src 'self' 'unsafe-inline'",
      "style-src 'self' 'unsafe-inline'",
      "img-src 'self' data: https:",
      "font-src 'self'",
      "connect-src 'self' https://api.example.com",
    ].join('; '),
  },
  { key: 'X-Frame-Options', value: 'DENY' },
  { key: 'X-Content-Type-Options', value: 'nosniff' },
  { key: 'Referrer-Policy', value: 'strict-origin-when-cross-origin' },
];

module.exports = {
  async headers() {
    return [{ source: '/(.*)', headers: securityHeaders }];
  },
};

6. Performance Optimization

AI-generated code often works correctly but is not optimized. You need to explicitly ask for performance considerations.

The Performance Review Prompt

Review this code for performance issues:
- Are there unnecessary re-renders in React components?
- Should any components use React.memo or useMemo?
- Are images optimized (next/image, WebP, lazy loading)?
- Is there code that could be split or lazy loaded?
- Are there N+1 query problems in data fetching?
- Are expensive computations cached?
- Would any API calls benefit from stale-while-revalidate?
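
The N+1 item deserves an example, because it is the performance bug AI introduces most often. A framework-free sketch with a hypothetical fetcher standing in for a database or API call; the fix is to deduplicate IDs and issue the requests concurrently (or, better still, batch them into a single query):

```typescript
interface Post { id: string; authorId: string }
interface Author { id: string; name: string }

// Hypothetical single-item fetcher — imagine a network round trip here.
async function fetchAuthor(id: string): Promise<Author> {
  return { id, name: `Author ${id}` };
}

// N+1: one awaited request per post, so latencies add up serially.
async function authorsSerial(posts: Post[]): Promise<Author[]> {
  const authors: Author[] = [];
  for (const post of posts) {
    authors.push(await fetchAuthor(post.authorId)); // One round trip each
  }
  return authors;
}

// Better: deduplicate the IDs and fetch them concurrently.
async function authorsBatched(posts: Post[]): Promise<Author[]> {
  const ids = [...new Set(posts.map((p) => p.authorId))];
  return Promise.all(ids.map(fetchAuthor));
}
```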

Image Optimization

AI often generates basic <img> tags. Always prompt for optimized images:

// AI default output
<img src="/hero.png" alt="Hero image" />

// What you should ask for
import Image from 'next/image';

<Image
  src="/hero.png"
  alt="Descriptive alt text for the hero section"
  width={1200}
  height={630}
  priority // Above the fold
  placeholder="blur"
  blurDataURL={shimmer(1200, 630)} // shimmer() is a helper you define that returns a tiny placeholder
/>

Code Splitting

// Ask AI: "Lazy load this heavy component"
import dynamic from 'next/dynamic';

const HeavyChart = dynamic(() => import('@/components/chart'), {
  loading: () => <ChartSkeleton />,
  ssr: false, // If it uses browser-only APIs
});

Core Web Vitals Prompt

Analyze this page for Core Web Vitals impact:
- LCP: What's the largest contentful paint element? Is it optimized?
- INP (which replaced FID): Are there long tasks blocking interactivity?
- CLS: Are there any layout shifts from dynamic content?
Suggest specific improvements for each metric.

7. Testing Strategy

AI is surprisingly good at writing tests — if you prompt it correctly. The key is being specific about what scenarios to cover.

The Testing Prompt

Write tests for this component covering:
- Happy path: renders correctly with valid props
- Edge cases: empty data, null values, very long strings
- Error states: API failure, network timeout, invalid input
- Accessibility: proper ARIA attributes, keyboard interaction
- User interactions: clicks, form submissions, navigation

Use Vitest and React Testing Library. Follow the Arrange-Act-Assert pattern.
No snapshot tests. Test behavior, not implementation details.

Example Test Output

import { render, screen, fireEvent } from '@testing-library/react';
import { describe, it, expect, vi } from 'vitest';
import { SearchBar } from './search-bar';

describe('SearchBar', () => {
  it('calls onSearch with the input value on submit', () => {
    const onSearch = vi.fn();
    render(<SearchBar onSearch={onSearch} />);

    const input = screen.getByRole('searchbox', { name: /search/i });
    fireEvent.change(input, { target: { value: 'next.js' } });
    fireEvent.submit(input.closest('form')!);

    expect(onSearch).toHaveBeenCalledWith('next.js');
  });

  it('does not submit with empty input', () => {
    const onSearch = vi.fn();
    render(<SearchBar onSearch={onSearch} />);

    const form = screen.getByRole('search');
    fireEvent.submit(form);

    expect(onSearch).not.toHaveBeenCalled();
  });

  it('is accessible via keyboard', () => {
    render(<SearchBar onSearch={vi.fn()} />);

    const input = screen.getByRole('searchbox');
    input.focus();
    expect(document.activeElement).toBe(input);
  });
});

The key insight: tell AI "no snapshot tests" and "test behavior, not implementation." Otherwise it'll generate brittle tests that break every time you change a CSS class.

8. Responsive Design

AI assistants tend to generate either desktop-only or overly complicated responsive code. The fix is being explicit about your responsive strategy.

Mobile-First Prompt

Build this component using a mobile-first approach:
- Start with the mobile layout as the default
- Use min-width breakpoints to enhance for larger screens
- Ensure touch targets are at least 44x44px
- Use fluid typography with clamp()
- Test that nothing overflows on 320px viewport width
- Use Tailwind's responsive prefixes (sm:, md:, lg:)

Fluid Typography

/* Ask AI to generate fluid type scales */
h1 {
  font-size: clamp(1.75rem, 4vw + 0.5rem, 3rem);
  line-height: 1.2;
}

p {
  font-size: clamp(1rem, 1vw + 0.75rem, 1.25rem);
  line-height: 1.6;
}

Container Queries

For truly component-based responsive design, prompt AI for container queries:

.card-container {
  container-type: inline-size;
}

@container (min-width: 400px) {
  .card {
    display: grid;
    grid-template-columns: 200px 1fr;
    gap: 1rem;
  }
}

@container (min-width: 600px) {
  .card {
    grid-template-columns: 300px 1fr;
  }
}

9. Error Handling & Edge Cases

This is where AI-generated code most often falls apart in production. AI tends to only handle the happy path unless you explicitly ask otherwise.

The Robustness Prompt

Add comprehensive error handling to this code:
- What happens if the API returns an error?
- What happens if the data is empty or null?
- What happens if the network is offline?
- What happens if the user double-clicks the submit button?
- What should the loading state look like?
- What should the empty state look like?
- Add an error boundary for unexpected crashes

Show me the loading, empty, error, and success states.
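
The double-click question from that list has a simple, framework-free answer: ignore new submissions while one is already in flight. A sketch (the function name and shape are illustrative):

```typescript
// Wrap an async action so that re-invocations while it is still
// running return the in-flight promise instead of firing again.
function singleFlight<T>(action: () => Promise<T>): () => Promise<T> {
  let inFlight: Promise<T> | null = null;
  return () => {
    if (!inFlight) {
      inFlight = action().finally(() => {
        inFlight = null; // Allow the next submission once this one settles
      });
    }
    return inFlight;
  };
}
```

Usage looks like `const submit = singleFlight(() => postForm(data));` — a double-click now results in a single network request.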

Before and After: Data Fetching

AI default (happy path only):

export default async function UserProfile({ userId }: { userId: string }) {
  const user = await fetchUser(userId);

  return (
    <div>
      <h1>{user.name}</h1>
      <p>{user.bio}</p>
    </div>
  );
}

After robustness prompt:

import { notFound } from 'next/navigation';
import { ErrorFallback } from '@/components/error-fallback';

export default async function UserProfile({ userId }: { userId: string }) {
  if (!userId || typeof userId !== 'string') {
    notFound();
  }

  let user;
  try {
    user = await fetchUser(userId);
  } catch (error) {
    console.error(`Failed to fetch user ${userId}:`, error);
    return <ErrorFallback message="Unable to load profile. Please try again." />;
  }

  if (!user) {
    notFound();
  }

  return (
    <div>
      <h1>{user.name || 'Anonymous User'}</h1>
      <p>{user.bio || 'No bio provided.'}</p>
    </div>
  );
}

The second version handles missing IDs, fetch failures, null responses, and missing fields. That's the difference between a demo and production code.

10. Code Review with AI

One of the most underused capabilities of AI assistants is code review. You can get a thorough review of your entire codebase by prompting correctly.

The Senior Engineer Prompt

Pretend you're a principal engineer conducting a thorough code review
of this pull request. Be critical and specific. For each issue:

1. Explain what's wrong
2. Explain why it matters (security? performance? maintainability?)
3. Show the fix

Check for:
- Logic errors and off-by-one bugs
- Race conditions in async code
- Memory leaks (event listeners, subscriptions)
- Missing cleanup in useEffect
- Inconsistent error handling
- Type safety gaps (any, as assertions)
- Dead code and unused imports

The Architecture Review

Review the architecture of this project:
- Are components properly separated by concern?
- Is state management appropriate for the complexity?
- Are there circular dependencies?
- Is the folder structure scalable?
- Are there any coupling issues between modules?
- Would you recommend any structural changes before this grows?

This kind of high-level review is incredibly valuable, and AI does it surprisingly well because it can take in large swaths of a codebase at once, far more than a human reviewer holds in working memory.

11. Common Mistakes AI Makes

Knowing what AI gets wrong is just as important as knowing what it gets right. Here are the most common issues I've encountered:

Hallucinated Packages and APIs

AI will sometimes suggest packages that don't exist or use API signatures that were never real. Always verify:

// AI might generate:
import { useOptimistic } from 'react'; // This exists in React 19
import { useFormState } from 'react-dom'; // Renamed to useActionState in React 19

// Always check: "Is this API real? What version introduced it?"

Outdated Patterns

AI training data has a cutoff. It might suggest:

  • getServerSideProps instead of Next.js App Router patterns
  • Class components instead of hooks
  • moment.js instead of date-fns or Intl.DateTimeFormat
  • Old Tailwind CSS v3 syntax when you're on v4

Fix: Always specify your exact versions in prompts: "I'm using Next.js 16, React 19, Tailwind CSS 4.2, and TypeScript 5.7."

Ignoring Existing Patterns

This is the biggest one. AI will generate code in its own style, ignoring the patterns already established in your codebase. If your project uses a specific component pattern, naming convention, or folder structure, AI won't know unless you tell it.

Fix: Reference existing code: "Follow the same pattern used in app/components/post-card.tsx for component structure and styling."

Over-Engineering

AI loves to generate abstractions. It'll create a factory pattern when you need a simple function, or build an entire state management system when useState would suffice.

Fix: "Keep this as simple as possible. No unnecessary abstractions, patterns, or indirection. A junior developer should be able to understand this code immediately."

12. The CLAUDE.md / Rules File Approach

This is the single most impactful technique for improving AI output quality. Project instruction files give AI persistent context about your codebase, conventions, and preferences.

What Are Project Instruction Files?

Different tools use different files:

  • Claude Code: CLAUDE.md
  • Cursor: .cursorrules
  • GitHub Copilot: .github/copilot-instructions.md
  • Windsurf: .windsurfrules

These files sit in your project root and are automatically read by the AI assistant on every interaction.

What to Include

A good project instruction file should cover:

# Project: My App

## Tech Stack
- Next.js 16 (App Router)
- React 19
- TypeScript (strict mode)
- Tailwind CSS 4.2
- pnpm for package management

## Commands
- `pnpm dev` - Start dev server
- `pnpm build` - Production build
- `pnpm test` - Run tests

## Conventions
- Components go in app/components/
- Use functional components with TypeScript interfaces
- Shared icons go in icons.tsx with a `size` prop
- Use the existing PostCard component; don't duplicate markup
- Mobile nav and desktop nav are separate components
- Dark mode uses the .dark class on <html>

## Don'ts
- Never use `any` type
- Never hardcode API keys or secrets
- Never use default exports for components (except pages)
- Never install new dependencies without asking
- Don't use deprecated APIs (getServerSideProps, etc.)

Why This Works

Without a rules file, every prompt starts from zero. The AI has no idea about your project structure, conventions, or preferences. You end up repeating the same corrections over and over.

With a rules file, the AI starts every interaction with full context. It knows your tech stack, your patterns, your preferences, and your constraints. The improvement in output quality is immediate and dramatic.

I've seen the rules file approach cut the number of correction rounds by more than half. It's a small upfront investment that pays for itself on the first day.

Keep It Updated

Treat your rules file like documentation — update it as your project evolves. When you find yourself correcting the AI about the same thing twice, add it to the rules file. Over time, it becomes a living guide that makes every AI interaction faster and more accurate.

Conclusion

Building great websites with AI isn't about writing the perfect prompt. It's about building a system: project instruction files for context, structured prompts for each concern (accessibility, security, performance), and review prompts to catch issues before they hit production.

Here's the workflow I recommend:

  1. Set up your rules file before writing any code with AI
  2. Prompt in stages: architecture, implementation, review, testing
  3. Always audit for accessibility, security, and performance — AI won't do this automatically
  4. Verify everything: check that packages exist, APIs are current, and patterns match your codebase
  5. Use AI as a reviewer, not just a generator — the "smelly code" and "senior engineer" prompts are invaluable

The developers who build the best websites with AI aren't the ones who type the fanciest prompts. They're the ones who understand what AI is good at (speed, breadth of knowledge, consistency) and what it's bad at (context, judgment, staying current) — and who build workflows that amplify the strengths while guarding against the weaknesses.

AI is the most powerful development tool we've ever had. Use it deliberately.
