Cypress Visual Testing: The Complete Guide to Adding Visual Testing
Visual testing is an automated verification method that compares screenshots of a web interface against reference images to detect any visual regression — a misaligned button, a changed font, an element overlapping another.
Cypress is a fantastic tool. We love it for its speed, its intuitive API, its massive community. But let's be upfront: Cypress does not do visual testing natively. Unlike Playwright which includes toHaveScreenshot() out of the box, Cypress forces you to tinker with third-party plugins to get any kind of screenshot comparison.
And that's a more serious problem than it appears.
This guide reviews the existing approaches to adding visual testing to Cypress, their real limitations, and why a radically different approach deserves your attention.
Why Cypress Has No Built-in Visual Testing
This is the question nobody asks loudly enough. Playwright did it. Why not Cypress?
The official answer is diplomatic: Cypress focuses on functional testing. The honest answer is more nuanced. Visual testing is a complex problem — baseline management, tolerance for rendering differences, anti-aliasing, animations — and Cypress chose to let its plugin ecosystem handle it.
The result? Fragmentation. Each plugin does things its own way, with its own conventions, its own bugs, and its own risk of abandonment. You're not choosing one officially supported solution. You're betting on an open source project maintained by a handful of contributors — or on a paid cloud service.
For teams that have built their entire test suite on Cypress, it's a real dilemma. Migrating to Playwright just for visual testing isn't realistic. Adding a fragile plugin isn't either. Let's see what's available.
Approach 1: cypress-image-snapshot
The most popular plugin in the ecosystem. It relies on jest-image-snapshot (itself based on pixelmatch) to compare screenshots pixel by pixel.
How It Works
Installation requires a few tweaks to Cypress configuration — nothing insurmountable, but enough files to touch that mistakes can easily slip in. If you want the exact commands, ask your favorite AI assistant to generate the config — it loves that kind of thing, and it'll keep it busy between haikus about machine learning.
Once in place, the concept is simple: you add a custom command to your tests that captures the screen and compares it to a reference image stored in your repo. The first run creates the baseline. Subsequent runs detect differences.
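To give a feel for it, here is a sketch of the typical wiring, assuming the plugin's documented API (the registration hooks moved around between the original plugin, its forks, and Cypress 10+, so treat this as illustrative rather than copy-paste ready):

```javascript
// cypress.config.js: register the comparison logic on the Node side (sketch)
const { defineConfig } = require('cypress');
const { addMatchImageSnapshotPlugin } = require('cypress-image-snapshot/plugin');

module.exports = defineConfig({
  e2e: {
    setupNodeEvents(on, config) {
      addMatchImageSnapshotPlugin(on, config);
    },
  },
});
```

```javascript
// cypress/support/e2e.js: expose the custom command to your specs
import { addMatchImageSnapshotCommand } from 'cypress-image-snapshot/command';

addMatchImageSnapshotCommand({
  failureThreshold: 0.03,          // tolerate up to 3% of pixels differing...
  failureThresholdType: 'percent', // ...measured as a percentage, not a raw count
});

// In a spec: the first run writes the baseline PNG, later runs diff against it
it('renders the homepage', () => {
  cy.visit('/');
  cy.matchImageSnapshot('homepage');
});
```

That `failureThreshold` pair is where you will spend your calibration time, as the next section explains.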
The Limitations Nobody Tells You About
The maintenance problem. This plugin has gone through extended periods of inactivity. Whenever you're reading this, check the date of the last commit on GitHub. If it's more than six months old, ask yourself some hard questions.
False positives. Pixel-by-pixel comparison is brutal. A slightly different font rendering between your machine and CI? False positive. Anti-aliasing that varies by GPU? False positive. You spend more time calibrating tolerance thresholds than writing tests.
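Pixel-by-pixel comparison really is as blunt as it sounds. Here is a minimal, self-contained sketch in plain Node of the idea behind pixelmatch-style diffing (the real library is smarter about anti-aliasing and perceptual color distance; all names here are illustrative):

```javascript
// Minimal pixel-by-pixel diff in the spirit of pixelmatch (sketch, not the real algorithm).
// Images are flat RGBA buffers: 4 bytes per pixel.

// Per-channel distance between two pixels, as a 0..1 fraction of the maximum.
function pixelDelta(a, b, offset) {
  let delta = 0;
  for (let c = 0; c < 4; c++) {
    delta = Math.max(delta, Math.abs(a[offset + c] - b[offset + c]) / 255);
  }
  return delta;
}

// Count pixels whose delta exceeds `threshold`; return the mismatch ratio.
function diffRatio(imgA, imgB, threshold = 0.1) {
  if (imgA.length !== imgB.length) throw new Error('size mismatch');
  let mismatched = 0;
  const pixels = imgA.length / 4;
  for (let i = 0; i < pixels; i++) {
    if (pixelDelta(imgA, imgB, i * 4) > threshold) mismatched++;
  }
  return mismatched / pixels;
}

// Two 2x1 "images": identical except the second pixel's red channel shifts by 30,
// roughly what a different GPU or font renderer can produce.
const base = Uint8Array.from([255, 0, 0, 255,  0, 255, 0, 255]);
const next = Uint8Array.from([255, 0, 0, 255, 30, 255, 0, 255]);

console.log(diffRatio(base, next, 0.1)); // 0.5: half the pixels "differ"
console.log(diffRatio(base, next, 0.2)); // 0: a looser threshold hides it
```

Tighten the threshold and innocent rendering noise fails the build; loosen it and real regressions slip through. That is the calibration treadmill in miniature.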
No review interface. When a test fails, you get a diff image in a folder. No dashboard, no approval workflow. You open the image in your file explorer and squint to find the difference. It's artisanal at best.
Baseline management in Git. Hundreds of PNG images in your repo. Merge conflicts on binary files are a nightmare. Git history bloats. Some teams end up using Git LFS, adding yet another layer of complexity.
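For teams that do adopt LFS, the usual move is a `.gitattributes` rule routing baselines through it (a sketch; the actual path depends on where your plugin stores snapshots):

```
# .gitattributes: store baseline PNGs as LFS pointers instead of regular Git blobs
cypress/snapshots/**/*.png filter=lfs diff=lfs merge=lfs -text
```

The repo stays slim, but you now depend on an LFS server too, which is exactly the extra layer of complexity in question.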
Approach 2: Percy (BrowserStack)
Percy is a cloud visual testing service that integrates with Cypress via an SDK. The approach is fundamentally different: instead of comparing locally, Percy sends the DOM and assets to its servers, renders the page in real browsers, and compares the results in a web dashboard.
How It Works
You install the Percy SDK for Cypress, add a call in your tests to capture a snapshot, and Percy handles the rest in the cloud. The review workflow happens in Percy's web interface: you see the differences, you approve or reject.
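In code, the surface area is tiny (a sketch assuming the `@percy/cypress` SDK; check Percy's docs for the current setup):

```javascript
// cypress/support/e2e.js: registers the cy.percySnapshot() command
import '@percy/cypress';

// In a spec: name the snapshot; Percy renders and diffs it server-side
it('checkout looks right', () => {
  cy.visit('/checkout');
  cy.percySnapshot('Checkout');
});
```

The suite then runs wrapped in Percy's CLI, typically `npx percy exec -- cypress run`, with a `PERCY_TOKEN` environment variable identifying your project.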
For the exact configuration, your desktop AI can spit out the snippet in three seconds — it's its moment to shine, so let it have it rather than copy-pasting from docs that'll be outdated in six months.
The Limitations
Cost. Percy is a paid service. A free plan exists but it's limited in monthly snapshots. For a team that tests seriously, costs add up fast. We won't detail pricing here — it changes regularly — but expect a significant budget line item.
Cloud dependency. Your snapshots are rendered on Percy's servers. If the service goes down, your tests fail. If BrowserStack (which acquired Percy) decides to change pricing or terms, you have no leverage.
CI latency. Sending the DOM to an external service, waiting for rendering, retrieving the result — it adds time to your pipeline. Not dramatic for ten tests, but for five hundred, you'll feel it.
Vendor lock-in. Once all your baselines are in Percy, migrating elsewhere means recreating everything from scratch. It's the classic proprietary cloud service trap.
Approach 3: Happo
Happo is a lesser-known alternative to Percy, with a similar positioning: a cloud service that captures and compares screenshots of your components.
The Cypress integration exists, but it's less mature than Percy's. The product is solid, the team is serious, but the user base is smaller. Less community documentation, fewer Stack Overflow answers, fewer experience reports.
The same cost and cloud dependency issues apply.
Approach 4: Applitools Eyes
Applitools takes the concept further with its AI-based comparison technology (their "Visual AI"). Instead of pixel-by-pixel comparison, the algorithm tries to detect "visually significant" differences while ignoring minor rendering variations.
It's appealing on paper. In practice, the product is powerful but complex. Configuration isn't trivial, pricing is opaque, and dependence on a proprietary service is total. For a detailed analysis, check our Applitools review.
The Root Problem: Visual Testing as an Add-on
All these approaches share a structural flaw: they treat visual testing as an add-on to functional testing.
You have your Cypress suite. You graft on a plugin or SDK. You add calls to your existing tests. Visual testing becomes a parasite on functional testing — dependent on the same infrastructure, the same selectors, the same code.
When Cypress ships a major version update, your visual testing plugin breaks. When your functional test changes its path, your visual baseline becomes obsolete. When a developer modifies a selector, both the functional test AND the visual test fail.
It's a fragile model by design.
Visual testing deserves to be a first-class citizen, not a stowaway in your functional suite. It answers a different question ("does it look right?" vs "does it work?") and should have its own tools, its own workflows, its own baselines.
What Your Cypress Tests Don't See
A Cypress test verifies that the button exists and triggers the right action. It doesn't verify that the button is visible, properly aligned, the right color, with the right padding, across all breakpoints.
Visual bugs are sneaky because they pass all functional tests. The form works perfectly — but the label overlaps the input on mobile. The dropdown menu opens correctly — but it appears behind another element. The displayed price is correct — but the currency shows white on a white background.
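To make that concrete, here is a hypothetical spec in which every assertion passes while the layout is broken on mobile (the selectors and copy are invented for illustration):

```javascript
// Hypothetical spec: everything asserted here can pass while the page looks wrong.
it('submits the signup form', () => {
  cy.viewport('iphone-6');
  cy.visit('/signup');
  cy.get('[data-cy=email]').type('user@example.com'); // works even if a label overlaps the input
  cy.get('[data-cy=submit]').click();                 // works even if the button is white on white
  // "visible" means present in the layout, not legible or well-placed
  cy.contains('Thanks for signing up').should('be.visible');
});
```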
These bugs reach production because nobody looks for them systematically. And they're expensive: in credibility, in conversions, in support tickets. To understand what visual testing actually detects, concrete examples often speak louder than theory.
The Alternative: Separating Visual Testing from Code
What if visual testing didn't need Cypress at all?
That's the position we defend at Delta-QA, and we fully own it. Visual testing doesn't need code. It doesn't need plugins. It doesn't need CSS selectors or webpack configuration.
Delta-QA works differently. You browse your site, record a journey with point-and-click, and the tool captures reference screenshots. On each subsequent run, it compares and shows you the differences in a dedicated interface. No code. No plugin. No dependency on Cypress, Playwright, or anything else.
This isn't a compromise. It's a different philosophy. Functional testing and visual testing are two distinct disciplines that each deserve their own tools. Your Cypress suite continues verifying that everything works. Delta-QA verifies that everything looks right. They complement each other without stepping on each other's toes.
For QA teams that don't code, it's liberating. For developers, it's time saved. For everyone, it's fewer false positives and a review workflow that makes sense. Discover why no-code visual testing is changing the game.
FAQ
Can Cypress do visual testing without a plugin?
No. Cypress can take screenshots with cy.screenshot(), but it offers no native comparison mechanism. You get images, but comparing them against baselines, managing tolerance thresholds, and the approval workflow must be added via a plugin or external service.
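For reference, native capture really is just this, with no comparison step anywhere:

```javascript
// Writes a PNG under cypress/screenshots/; nothing ever diffs it against a baseline
cy.screenshot('homepage');
```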
What's the best Cypress plugin for visual testing?
There's no universal answer. cypress-image-snapshot is the most popular open source option but suffers from maintenance issues and false positives. Percy offers the best user experience but it's a paid service. The "best" depends on your budget, your tolerance for false positives, and your willingness to maintain extra code.
Is Percy free with Cypress?
Percy offers a free plan with a limited number of monthly snapshots. For serious professional use, you'll need a paid plan. Pricing changes regularly — check their site for current terms.
Can you do Cypress visual testing in CI/CD?
Yes, all described approaches work in CI/CD. But that's where problems multiply: rendering differences between environments, baseline management, execution time. CI amplifies every fragility in your visual testing setup.
Why not just use Playwright for visual testing?
If you're starting a new project, Playwright with its native toHaveScreenshot() is indeed a better choice for code-based visual testing. But if you already have a substantial Cypress suite, migrating isn't realistic. And even with Playwright, you remain in the code-based visual testing paradigm, with its maintenance and accessibility limitations.
Can Delta-QA replace Cypress tests?
No, and that's not the goal. Cypress excels at functional testing: verifying that interactions work, that APIs respond correctly, that business logic is respected. Delta-QA focuses on visual testing: verifying that the interface looks right. The two tools are complementary, not competitors.
How long does it take to set up visual testing in Cypress?
With cypress-image-snapshot, expect one to two hours for basic installation and configuration, then several days to calibrate tolerance thresholds and stabilize tests against false positives. With Percy, technical setup is faster but organizational setup (snapshot management, review workflow, CI integration) takes time. With Delta-QA, the first visual test is up and running in minutes.
Conclusion
Cypress is an excellent functional testing tool. We use it, we recommend it for what it does well. But pretending it handles visual testing satisfactorily is self-deception.
Plugins exist. They work, more or less. But they're fragile, poorly maintained for some, expensive for others, and they all add complexity where none is needed.
Visual testing deserves better than a plugin. It deserves a dedicated tool, built for this specific problem, accessible to the entire team — developers and non-technical QA alike.