Types of Testing

Different situations call for different testing approaches. Knowing when to use each type is what separates a good tester from a great one.

Functional Testing Types

Smoke Testing

A quick, shallow test to check if the most critical features work. Run it after a new build to decide if it's stable enough for deeper testing.

Think of it like: Turning on a new appliance to see if it powers up before reading the manual.

What to check:

  • Can users log in?
  • Does the main page load?
  • Can users complete the primary workflow?

If smoke tests fail, reject the build immediately. Don't waste time on detailed testing.
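The checklist above can be sketched as a fail-fast script. This is a minimal illustration, not a real framework: `fake_app` is a hypothetical stand-in for a deployed build, and the check names are invented for the example.

```python
def fake_app():
    """Hypothetical stand-in for the build under test."""
    return {
        "login": lambda user, pw: user == "alice" and pw == "secret",
        "main_page": lambda: "<html>Welcome</html>",
    }

def run_smoke_tests(app):
    """Run shallow critical-path checks; stop at the first failure."""
    checks = [
        ("users can log in", lambda: app["login"]("alice", "secret")),
        ("main page loads", lambda: "Welcome" in app["main_page"]()),
    ]
    for name, check in checks:
        if not check():
            print(f"SMOKE FAIL: {name} -- reject the build")
            return False
    return True
```

The point is the shape, not the checks themselves: every check is cheap, and one failure ends the run so no time is spent testing a broken build.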

Sanity Testing

A narrow, focused test on a specific area after a bug fix or minor change. Unlike smoke testing, sanity testing goes deeper into one area.

Think of it like: After a plumber fixes a pipe, you check that specific faucet works — not every faucet in the house.

Regression Testing

Re-running existing tests after code changes to make sure nothing that worked before is now broken.

When to run it:

  • After bug fixes
  • After new features are added
  • After refactoring
  • Before every release

Regression testing is the #1 candidate for automation because it's repetitive and must be thorough.
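A regression suite is often just a table of known-good inputs and outputs that gets re-run after every change. A minimal sketch, assuming a hypothetical `slugify` function as the code under test:

```python
def slugify(title):
    """Function under test (hypothetical): lowercase, whitespace -> hyphens."""
    return "-".join(title.lower().split())

# Each pair pins behavior that worked before; a mismatch means a regression.
REGRESSION_CASES = [
    ("Hello World", "hello-world"),
    ("  spaces   everywhere ", "spaces-everywhere"),
    ("Already-Slugged", "already-slugged"),
]

def run_regression(fn, cases):
    """Return the list of (input, actual, expected) failures; empty == safe."""
    return [(inp, fn(inp), exp) for inp, exp in cases if fn(inp) != exp]
```

Because the cases are data, extending the suite after each bug fix is a one-line change, which is exactly what makes regression testing so automatable.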

Exploratory Testing

Unscripted testing where the tester simultaneously learns the system, designs tests, and executes them. No predefined test cases — you follow your instincts and curiosity.

Techniques:

  • Tours: Navigate the app like a tourist — visit every page, click every button
  • Scenario-based: "What would a confused user do?"
  • Risk-based: Focus on areas most likely to break

When it shines:

  • New features with unclear requirements
  • When you've run all scripted tests and want to go deeper
  • Time-constrained testing sessions

User Acceptance Testing (UAT)

Testing done by actual users or stakeholders to verify the software meets their business needs. This is the final gate before release.

Key characteristics:

  • Testers are business users, not QA engineers
  • Tests are based on real-world scenarios
  • Pass/fail is based on business criteria, not technical specs

Non-Functional Testing Types

Performance Testing

Does the system handle the expected load?

  Type                  Question It Answers
  ─────────────────────────────────────────
  Load testing          Can it handle expected traffic?
  Stress testing        What happens when traffic exceeds capacity?
  Spike testing         Can it handle sudden traffic bursts?
  Endurance testing     Does it remain stable over extended periods?
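The core of a load test is firing many concurrent requests and measuring how the system holds up. A toy sketch using only the standard library, with a sleeping `handler` standing in for a real endpoint (real tools like JMeter, k6, or Locust handle ramp-up, reporting, and distribution):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handler(request_id):
    """Hypothetical endpoint; the sleep simulates server work."""
    time.sleep(0.01)
    return 200

def load_test(n_requests=50, concurrency=10):
    """Send n_requests with the given concurrency and summarize the run."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        statuses = list(pool.map(handler, range(n_requests)))
    elapsed = time.perf_counter() - start
    return {"requests": n_requests,
            "ok": statuses.count(200),
            "seconds": round(elapsed, 2)}
```

Raising `concurrency` past what the system can serve turns the same harness into a stress test; holding it steady for hours turns it into an endurance test.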

Usability Testing

Is the software easy to use? Watch real users attempt tasks and note where they struggle, get confused, or give up.

Security Testing

Can the system be exploited? Check for:

  • SQL injection
  • Cross-site scripting (XSS)
  • Authentication bypass
  • Data exposure
  • Broken access control
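The first item on that list, SQL injection, can be demonstrated in a few lines with `sqlite3`. The unsafe query builds SQL by string formatting; the safe one uses parameter binding, which is exactly what a security test should verify:

```python
import sqlite3

def find_user_unsafe(conn, name):
    # Vulnerable: the input is pasted straight into the SQL text.
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn, name):
    # Safe: the driver binds the value; it can never change the query shape.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

payload = "x' OR '1'='1"                    # classic injection payload
leaked = find_user_unsafe(conn, payload)    # matches every row
blocked = find_user_safe(conn, payload)     # matches nothing
```

A security test feeds payloads like this into every input and asserts the response behaves like `find_user_safe`, not `find_user_unsafe`.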

Compatibility Testing

Does the software work across different:

  • Browsers (Chrome, Firefox, Safari, Edge)
  • Operating systems (Windows, macOS, Linux, iOS, Android)
  • Screen sizes and resolutions
  • Network conditions (fast, slow, offline)

Testing Levels

Testing happens at multiple levels of the application:

    ┌─────────────────────────┐
    │   Acceptance Testing    │  Does it meet user needs?
    ├─────────────────────────┤
    │      System Testing     │  Does the whole system work?
    ├─────────────────────────┤
    │   Integration Testing   │  Do modules work together?
    ├─────────────────────────┤
    │       Unit Testing      │  Does each piece work alone?
    └─────────────────────────┘

  • Unit tests are fast and cheap — catch bugs early
  • Acceptance tests are slow and expensive — catch requirement mismatches

A healthy test strategy uses all four levels, with more unit tests than acceptance tests (the "test pyramid").
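At the base of the pyramid, a unit test exercises one piece of code in isolation. A minimal pytest-style sketch, with `apply_discount` as a hypothetical function under test:

```python
def apply_discount(price, percent):
    """Function under test (hypothetical): apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_normal_discount():
    assert apply_discount(100.0, 20) == 80.0

def test_zero_discount_is_identity():
    assert apply_discount(49.99, 0) == 49.99

def test_invalid_percent_rejected():
    try:
        apply_discount(100.0, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

Tests like these run in milliseconds with no database or browser, which is why the pyramid calls for many of them relative to the slower levels above.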