Can usability testing be automated?

07 Jul 2025 🔖 web development

Is there a place for browser automation (e.g. Playwright, Selenium, Puppeteer, Cypress) in the art of “usability testing” (à la Don’t Make Me Think)? Or should 100% of usability testing always execute on the platform known as “humans”? 😉

My prompt

I asked an LLM, since I have absolutely no idea.

I believe that “usability testing” of a web site or web design system tends to focus pretty strongly on actually watching humans try to use a web product interactively, and taking notes.

I’m pretty sure that automated testing doesn’t really feature prominently in “usability testing.”

Nevertheless, part of me is convinced that surely, automation with browser-driving test automation frameworks like Playwright, Cypress, Selenium, Puppeteer, etc. must be able to HELP in some way.

After all, once a real human surfaces a problem in the product’s usability, you’d probably want to automatically regression-test that you fixed it and to detect if it ever crops up again.

  1. Can you give me some thoughts about what percent of the time of a staff member who’s leading usability testing should be dedicated to traditional human processes, and what percent of their attention should go toward collaborating with traditional QAs and developers who can help rally machines to amplify success?
  2. Can you give me some compare-and-contrast of the kinds of things that traditional human usability testing would hope to surface, vs. the kinds of things that should still be part of the attention span of a staff member leading “usability testing” (as opposed to, say, generic E2E regression testing, which hopefully is already happening anyway as part of product development)?
  3. Can you further compare-and-contrast the kinds of automated-test-based things that are specifically the concern of a staff member who’s focused on running “usability tests” for a product, as opposed to “the rest of” the automated UX tests that the product team needs to be concerned about?

Response summary

Watching humans use a website only occasionally surfaces usability problems that machines can understand, like the following (see the sketch after this list):

  • “the button isn’t clickable at all”
  • “the button isn’t large enough, in pixels, to comfortably tap on a small device if your hands are shaky”
  • “the button doesn’t contrast enough from the background it’s on”
  • “the search box returns 0 results when the user misspells their search word”
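A couple of those mechanical checks are sketched below in Playwright Test (TypeScript), purely as an illustration. The URL, the “Add to cart” button, the search box, and the 44×44 px tap-target threshold are all assumptions I made up for the sketch, not anything the LLM prescribed:

```typescript
import { test, expect } from '@playwright/test';

// Hypothetical page and control names, just to make the sketch concrete.
const PAGE_URL = 'https://example.com/products/widget';

test('the "Add to cart" button is clickable and big enough to tap', async ({ page }) => {
  await page.goto(PAGE_URL);

  const button = page.getByRole('button', { name: 'Add to cart' });

  // "the button isn't clickable at all"
  await expect(button).toBeVisible();
  await expect(button).toBeEnabled();

  // "the button isn't large enough to tap": 44x44 px is a common
  // touch-target guideline, used here as an assumed threshold.
  const box = await button.boundingBox();
  expect(box).not.toBeNull();
  expect(box!.width).toBeGreaterThanOrEqual(44);
  expect(box!.height).toBeGreaterThanOrEqual(44);
});

test('a misspelled search still returns results', async ({ page }) => {
  await page.goto(PAGE_URL);

  // "the search box returns 0 results when the user misspells their search word"
  await page.getByRole('searchbox').fill('widgit'); // deliberate typo
  await page.keyboard.press('Enter');
  await expect(page.getByText('0 results')).toHaveCount(0);
});
```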

By contrast, you’ll never be able to teach a machine to dummy-check the many cognitive usability problems in a user interface that only watching real humans is likely to surface, like:

  • “the human never even noticed that there was a button on the page at all, because a strikingly beautiful image on the page distracted them, and they forgot what they were up to”

Here’s what the LLM had to say in full:

Usability testing is almost entirely human, not automation

The core of usability testing is direct observation of real users as they interact with the product, surfacing pain points, confusion, and unanticipated behaviors that automated tools simply cannot detect [2] [4] [8].

As such, the majority of a usability testing lead’s time—often 70–90%—should be dedicated to traditional, human-centered processes: planning studies, recruiting participants, moderating sessions, analyzing qualitative feedback, and synthesizing findings for the team [2] [6] [8].

However, collaboration with QA and developers—to ensure that usability issues, once discovered, are not only fixed but also prevented from regressing—remains important. This collaboration might take 10–30% of a usability lead’s attention, primarily for:

  • Communicating findings to QAs/devs
  • Helping translate usability issues into actionable, automatable checks (where feasible)
  • Participating in cross-functional review and prioritization sessions [1] [3] [7]

The exact split will depend on team maturity, product complexity, and how well-established automated regression testing already is.

Usability testing: when to human, when to machine

| Human Usability Testing (Core Focus) | Automated/Regression Testing (QA/Dev Focus) |
| --- | --- |
| Observes real users performing tasks | Simulates user actions via scripts |
| Surfaces confusion, hesitation, and unexpected behaviors | Detects functional breakages (e.g., button not working) |
| Identifies pain points in navigation, language, layout, and workflow | Verifies expected flows work as coded |
| Captures qualitative feedback and emotional responses | Captures pass/fail, performance, or accessibility metrics |
| Reveals issues with intuitiveness, information architecture, and user expectations | Ensures code changes don’t break existing features |
| Involves follow-up questions and probing | Lacks context for why a failure occurs |

Usability testing leads should focus on surfacing and diagnosing human-centric issues—those that automation cannot predict or explain.

In contrast, automated E2E regression tests are best at catching technical regressions and verifying that previously fixed bugs do not reappear [6] [8].

Machine testing: when it’s usability, when it’s not

Automation can support usability leads in a few specific ways:

  • Regression-checking previously discovered usability bugs: Once a usability issue is fixed (e.g., a button made more visible), an automated test can ensure the fix persists, but only for the mechanical aspect (e.g., button is present and clickable), not the human aspect (e.g., button is actually noticed and understood by users).
  • Automated collection of usage analytics: Scripts can gather data on click paths, drop-off points, or time-on-task, which can inform where to focus human usability testing.
  • Automated accessibility testing: While not a substitute for human testing (especially for users with disabilities), tools can flag obvious accessibility violations (sketched just below).
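For that last bullet, an axe scan can ride along in the same browser-automation suite the QA team already runs. Here’s a minimal sketch, assuming the @axe-core/playwright package and a made-up checkout URL; it only catches the violations axe’s rules know about, which is precisely the “obvious violations, not a substitute for humans” caveat above:

```typescript
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

// Hypothetical page; point it at whatever screen the usability sessions covered.
const PAGE_URL = 'https://example.com/checkout';

test('checkout page has no axe-detectable accessibility violations', async ({ page }) => {
  await page.goto(PAGE_URL);

  // Run the axe-core rules against the current state of the page.
  const results = await new AxeBuilder({ page }).analyze();

  // Fail if axe reports anything (e.g., missing form labels, low contrast).
  // This flags only the mechanically detectable subset of accessibility issues.
  expect(results.violations).toEqual([]);
});
```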

However, automation cannot:

  • Detect if users misunderstand a label, feel frustrated, or fail to notice a feature due to poor visual hierarchy.
  • Replace the need for qualitative insight into why users struggle [2] [4] [8].

Usability leads should be concerned with automation only to the extent that it helps prevent regression of previously identified, fixable usability bugs or augments their understanding of user flows.
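To make the regression-of-fixed-usability-bugs idea concrete: imagine a round of human sessions showed that participants never found the “Save” button because it rendered below the fold, and the team fixed the layout. A small pinned-down check like the sketch below could keep the mechanical part of that fix from quietly regressing. The page, button name, and viewport size are hypothetical, and toBeInViewport needs a reasonably recent Playwright release:

```typescript
import { test, expect } from '@playwright/test';

// Hypothetical regression test for a usability bug that humans found:
// "participants never saw the Save button because it sat below the fold."
test('Save button is in the initial viewport on a small laptop screen', async ({ page }) => {
  // Pin the viewport the original finding was observed on (an assumption).
  await page.setViewportSize({ width: 1280, height: 720 });
  await page.goto('https://example.com/settings');

  const saveButton = page.getByRole('button', { name: 'Save' });

  // The mechanical half of the fix: visible without scrolling, and enabled.
  // Whether users actually *notice* it still requires watching humans.
  await expect(saveButton).toBeInViewport();
  await expect(saveButton).toBeEnabled();
});
```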

The broader suite of automated UX tests (performance, accessibility, E2E flows) remains the responsibility of QA and development, with usability leads providing input only where it directly intersects with observed user pain points [3] [7].

| Concern of Usability Lead | Concern of QA/Dev Team |
| --- | --- |
| Ensuring previously fixed usability bugs do not regress (via automation, where possible) | Ensuring all features work as intended after code changes |
| Using analytics to inform human testing focus | Maintaining comprehensive automated regression and E2E test suites |
| Collaborating on test cases for user-reported issues | Building and maintaining all automation infrastructure |

In essence: Automation is a valuable support for usability testing, but cannot replace the core human processes that uncover and explain true usability problems.

The usability lead’s primary responsibility remains with the human side, but collaboration with QA and devs ensures that once a usability issue is discovered, it stays fixed [1] [3] [7] [8].


Note: A few additional links the LLM cited but didn’t inline:

[5] [9] [10] [11]
