AI for Developers·Lesson 2 of 5

Prompting for Code

Good prompts are not longer — they are more constrained. This lesson focuses on what works in real repositories, not toy one-liners.

Give the model a job title

Start with role and objective in one or two lines:

You are a senior TypeScript engineer. Refactor the following function to use early returns,
preserve behavior, and add a one-line comment only where non-obvious.

That single framing reduces rambling and wrong-language answers.
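As a concrete illustration of what that prompt should produce, here is a hypothetical `getDisplayName` helper before and after an early-return refactor (the function and its fallback values are invented for this example):

```typescript
interface User {
  name?: string;
  email: string;
}

// Before: nested conditionals and a mutable result variable.
function getDisplayNameNested(user: User | null): string {
  let result = "guest";
  if (user !== null) {
    if (user.name !== undefined && user.name.length > 0) {
      result = user.name;
    } else {
      result = user.email;
    }
  }
  return result;
}

// After: early returns, behavior preserved.
function getDisplayName(user: User | null): string {
  if (user === null) return "guest";
  if (user.name) return user.name; // empty string falls through to email, matching the original
  return user.email;
}
```

Note the single comment sits only where the truthiness check is non-obvious, exactly as the prompt requested.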

Constraints beat adjectives

Replace "make it better" with checklists the model can verify:

  • Language and runtime (Node 20, browser, React Server Components, etc.)
  • Style (ESLint config name, or "match surrounding files")
  • What not to change (public API, file names, test names)
  • Output format: only the changed file in a fenced block, or a unified diff
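If you build prompts in code, the checklist above maps naturally onto a small spec object. This is a minimal sketch, not a real API; the interface and field names are invented for illustration:

```typescript
// Hypothetical spec mirroring the checklist: runtime, style, protected surface, output format.
interface PromptSpec {
  role: string;
  task: string;
  runtime: string;
  style: string;
  doNotChange: string[];
  outputFormat: string;
}

// Assemble a constrained prompt, one verifiable constraint per line.
function buildPrompt(spec: PromptSpec): string {
  return [
    `You are ${spec.role}.`,
    spec.task,
    `Runtime: ${spec.runtime}`,
    `Style: ${spec.style}`,
    `Do not change: ${spec.doNotChange.join(", ")}`,
    `Output format: ${spec.outputFormat}`,
  ].join("\n");
}

const prompt = buildPrompt({
  role: "a senior TypeScript engineer",
  task: "Refactor the function below to use early returns.",
  runtime: "Node 20",
  style: "match surrounding files",
  doNotChange: ["public API", "file names", "test names"],
  outputFormat: "only the changed file in a fenced block",
});
```

Keeping each constraint on its own line makes it easy for you, and the model, to verify that every item was satisfied.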

Few-shot by pointing, not pasting

When behavior is subtle, show one minimal example of input → output in the prompt instead of describing it in prose. For refactor tasks, a tiny before/after pair beats a paragraph.
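For example, a one-shot pair for a snake_case-to-camelCase rename task might be embedded directly in the prompt string like this (the task and identifiers are illustrative):

```typescript
// One minimal before/after pair shows the transformation; no prose description needed.
const fewShotPrompt = `
Rename snake_case identifiers to camelCase. Example:

Before:
const user_name = fetch_user().display_name;

After:
const userName = fetchUser().displayName;

Now apply the same transformation to:
const order_total = compute_total(cart_items);
`.trim();
```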

The review pass

Treat the first output as a draft. A second pass with a fixed review template catches most issues:

Review the code you wrote for:
- Correct types and null handling
- Edge cases for empty arrays/strings
- Unnecessary dependencies in hooks (if React)
- Performance hotspots in hot paths
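The "edge cases for empty arrays" item in that template catches real bugs. A hypothetical `average` helper shows the pattern: the draft divides by zero length and returns NaN, and the review pass makes the empty case explicit (the zero fallback is an assumption for this example):

```typescript
// Draft output: returns NaN for an empty array (0 / 0).
function averageDraft(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

// After the review pass: the empty case is handled explicitly.
function average(xs: number[]): number {
  if (xs.length === 0) return 0; // assumed fallback; pick what your callers expect
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}
```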

Run linters and tests locally — models cannot feel production traffic.

Context windows for code

When the codebase is large:

  • Paste only the relevant file(s) or symbols.
  • If your tool supports @ references or project index, use them instead of dumping the repo.
  • Summarize architecture in 5–10 lines before pasting files so the model knows where boundaries are.

Key takeaways

  • Role + constraints + output format = fewer retries.
  • Examples beat lengthy descriptions for tricky behavior.
  • Always run automated checks; the model assumes a green path unless you describe failures.