Whether you’re a manual tester making a move or an automation engineer looking for your next role, walking into a QA interview unprepared is a fast way to get passed over. The good news: most QA interviews draw from the same pool of core topics. Know those topics cold, and you’ll handle whatever an interviewer throws at you.
This guide covers the questions you’re most likely to face, organized by experience level, with honest advice on what interviewers are actually looking for behind each one.
Foundational questions come up in nearly every QA interview, regardless of seniority. Even experienced engineers get asked them — interviewers use them as a quick baseline check before moving into deeper territory.
What is the difference between verification and validation?
Verification checks whether you built the product right (reviews, inspections, walkthroughs). Validation checks whether you built the right product (actual testing against user needs). The classic shorthand: verification is “did we build it correctly?” and validation is “did we build the correct thing?”
Interviewers want you to be crisp on this. Fumbling it suggests you haven’t thought carefully about where testing fits in the development lifecycle.
What’s the difference between severity and priority?
Severity measures the impact of a defect on system functionality. Priority measures how urgently it needs to be fixed. A typo on a landing page might be low severity but high priority if the CEO is presenting it tomorrow. A crash in an obscure admin feature might be high severity but low priority if three people use it per month.
This question tests whether you understand that bug triage involves judgment, not just technical assessment.
Name the different levels of software testing.
Unit testing, integration testing, system testing, and acceptance testing. Some interviewers expect you to mention regression testing separately; others consider it a practice rather than a level. Either framing is fine as long as you explain when each level applies and what it catches that the others miss.
What is a test plan, and what does it include?
A test plan defines the scope, approach, resources, and schedule for testing activities. It typically covers: objectives, test items, features to test, features not to test, approach, pass/fail criteria, suspension and resumption criteria, deliverables, environmental needs, responsibilities, and schedule.
Don’t just recite the list. Interviewers care more about whether you’ve actually written one and can explain the tradeoffs you made — like why you excluded certain features from scope or how you determined pass/fail criteria for a specific release.
The next set of questions probes how you think about structuring your testing work.
When would you choose manual testing over automation?
Exploratory testing, usability evaluation, and one-off verification of complex visual layouts are all poor candidates for automation. Automation shines on regression suites, data-driven tests, and anything you’ll run hundreds of times. The honest answer also includes cost: if a feature changes every sprint, maintaining an automated test can cost more than running it manually.
Strong candidates acknowledge this tradeoff directly instead of defaulting to “automate everything.”
Explain the difference between smoke testing and sanity testing.
Smoke testing is a broad, shallow check that major features work after a new build. Sanity testing is a narrow, focused check on a specific area after a minor change. Smoke tests decide whether the build is stable enough to test further. Sanity tests decide whether a specific fix actually worked.
How do you write test cases when there’s no documentation?
This happens more often than most job descriptions admit. You talk to developers. You read the code if you can. You use the application and take notes. You ask product managers what the intended behavior is, and when they aren’t sure, you document the assumptions you’re making. Then you write test cases based on those assumptions and flag them for review.
Interviewers asking this want to see that you can function in ambiguous environments, which is most environments.
What is boundary value analysis, and when do you use it?
Boundary value analysis tests at the edges of input ranges — the minimum, maximum, and values just inside and outside those boundaries. If a field accepts 1–100, you test 0, 1, 2, 99, 100, and 101. Bugs cluster at boundaries far more often than in the middle of a valid range, so this technique gives you high defect-detection for low effort. Pair it with equivalence partitioning (testing one representative value from each valid and invalid partition) for solid functional coverage without writing hundreds of redundant cases.
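The two techniques together can be sketched in a few lines of Python. The validator below is hypothetical — a stand-in for whatever field logic you'd actually be testing — but the case selection mirrors the 1–100 example above:

```python
# Hypothetical validator for a field that accepts integers 1-100.
def is_valid_quantity(value: int) -> bool:
    return 1 <= value <= 100

# Boundary value analysis: values just outside, on, and just inside each edge.
boundary_cases = {0: False, 1: True, 2: True, 99: True, 100: True, 101: False}
for value, expected in boundary_cases.items():
    assert is_valid_quantity(value) == expected, f"unexpected result at {value}"

# Equivalence partitioning: one representative value per partition.
assert is_valid_quantity(50)       # valid partition
assert not is_valid_quantity(-5)   # invalid partition: below range
assert not is_valid_quantity(500)  # invalid partition: above range
```

Six boundary cases plus three partition representatives cover the input space far better than dozens of arbitrary mid-range values would.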
If the role involves test automation, expect questions that go deeper than “which tools have you used.”
What’s your approach to building a test automation framework?
Talk about your actual experience rather than an idealized architecture. A solid answer covers: how you structured page objects or screen models, how you handled test data, how you managed environment configuration, how tests get triggered (CI pipeline, scheduled runs), and how you report results. Mention a specific framework you’ve built or contributed to — Selenium, Playwright, Cypress, whatever matches your background.
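The page-object idea is worth being able to sketch on a whiteboard. The example below uses a fake driver so it stands alone — in practice the driver would be Selenium or Playwright, and all names here are illustrative, not from any real library:

```python
class FakeDriver:
    """Stand-in for a real browser driver (Selenium, Playwright, etc.)."""
    def __init__(self):
        self.filled = {}
        self.clicked = []

    def fill(self, selector: str, text: str) -> None:
        self.filled[selector] = text

    def click(self, selector: str) -> None:
        self.clicked.append(selector)


class LoginPage:
    """Page object: selectors and interaction logic live here, not in tests."""
    USERNAME = "#username"
    PASSWORD = "#password"
    SUBMIT = "button[type=submit]"

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user: str, password: str) -> None:
        self.driver.fill(self.USERNAME, user)
        self.driver.fill(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)


# Test code stays readable and survives selector changes in one place.
driver = FakeDriver()
LoginPage(driver).log_in("qa_user", "secret")
assert driver.clicked == ["button[type=submit]"]
```

The design point interviewers listen for: when a selector changes, you update one class, not fifty tests.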
How do you decide what to automate?
High-frequency regression paths, stable features that rarely change, and tests with many data permutations are the usual starting points. Avoid claiming you’d automate everything. Interviewers who’ve actually managed automation suites know that over-automating creates a maintenance burden that can slow a team down. The best answer includes something you deliberately chose not to automate and why.
What’s the difference between Selenium and Playwright?
Selenium has been around since 2004 and supports the most languages and browsers. Playwright is newer, faster for modern web apps, and has built-in auto-waiting and better handling of iframes and shadow DOM. Neither is categorically better; the right choice depends on your team’s language preferences, browser support needs, and existing infrastructure.
If you’ve used both, say so and explain your preference with specifics.
Behavioral questions are where many candidates stumble, because there’s no textbook answer to memorize. They require you to draw from your own experience, and interviewers can tell when you’re fabricating.
Describe a time you found a critical bug close to release. What did you do?
Walk through a real situation. Interviewers are evaluating your communication skills as much as your technical judgment. Did you escalate clearly? Did you quantify the risk? Did you propose alternatives — like a targeted hotfix versus delaying the release? The specifics matter more than the outcome.
How do you handle disagreements with developers about whether something is a bug?
The experienced answer: you provide evidence. A clear reproduction, a reference to the spec or user story, and sometimes a recording. If the spec is ambiguous, you escalate to the product owner instead of arguing. QA engineers who can navigate this without creating friction are worth their weight in gold to hiring managers.
A feature works according to the spec, but you think the spec is wrong. What do you do?
You raise it. Write up why you think the spec doesn’t serve the user, provide examples, and bring it to the product owner or business analyst. You don’t silently pass the test because it technically meets acceptance criteria. This question reveals whether you see QA as checkbox compliance or genuine quality advocacy.
Questions about your background, certifications, and habits feel softer, but they carry weight. Interviewers are mapping your experience to their team’s needs.
What QA certifications do you hold, and do they matter?
ISTQB is the most recognized. Whether it matters depends on the company — some job postings require it, others ignore it entirely. The honest take: certification proves you studied a body of knowledge, but it doesn’t prove you can test software well. If you have it, mention it. If you don’t, explain what you’ve learned through hands-on work instead.
How do you stay current with testing tools and practices?
Name specific sources: a subreddit you follow, a conference you attended, a tool you evaluated recently. Vague answers like “I read blogs and attend webinars” don’t stand out. Mention something concrete — maybe you evaluated Playwright last quarter, or you contributed to an open-source testing library, or you ran a lunch-and-learn for your team on contract testing.
Describe your experience with CI/CD pipelines.
Even if you’re not the one configuring Jenkins or GitHub Actions, you should understand how your tests integrate into the pipeline. Talk about where tests run (on merge, nightly, pre-deploy), how failures are handled (blocking versus non-blocking), and what you do when flaky tests erode team confidence in the suite.
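As a concrete frame for that conversation, here is a minimal CI sketch in GitHub Actions syntax. Every name, path, and command is hypothetical — the point is the blocking/non-blocking split, not the specifics:

```yaml
# Hypothetical workflow: a fast smoke suite gates merges,
# a slower regression suite runs nightly and is triaged, not blocking.
name: tests
on:
  pull_request:            # blocking: runs on every PR
  schedule:
    - cron: "0 2 * * *"    # non-blocking: nightly full regression

jobs:
  smoke:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pytest tests/smoke --maxfail=1    # fast, broad pass

  regression:
    if: github.event_name == 'schedule'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pytest tests/regression           # slow suite; failures get triaged
```

Being able to explain why the smoke job blocks merges while the regression job doesn’t demonstrates exactly the pipeline awareness this question probes.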
Knowing common QA interview questions gets you partway there. The rest comes down to preparation habits:
Practice explaining your testing process out loud. Many candidates know the concepts but freeze when asked to articulate them in conversation. Run through your answers once or twice with a friend or a rubber duck.
Prepare two or three stories from your actual work. Behavioral questions dominate modern interviews, and “I would do X” is always weaker than “I did X, and here’s what happened.” Pick stories that show you communicating across teams, catching something others missed, or making a tough tradeoff.
Research the company’s product before the interview. Spend 20 minutes using it. Note one thing you’d test first and one potential risk area. Interviewers remember candidates who show genuine curiosity about their product.
Review the job posting line by line. If it mentions performance testing and you have experience with JMeter or k6, prepare to talk about it. If it mentions accessibility testing and you’ve never done it, be honest about that and talk about how you’d ramp up.
Finally, don’t underestimate the value of asking good questions back. “How does your team decide what to automate?” or “What does your release process look like?” shows that you think about QA as a system, not just a task list. Interviewers who spend their days screening candidates remember the people who asked sharp questions almost as much as the people who gave sharp answers.