👋 Hey, I'm Shehu AbdulGaniy. Welcome to SaaS SEO Insights, where every week I dive deep into SEO and AEO strategies that B2B SaaS startups are using to drive signups from organic and AI search, so you can steal what works and skip what doesn't.
474 of the keywords driving traffic to Zapier's listicles trigger an AI Overview. Those keywords account for 82% of Zapier's listicle traffic, over 880,000 monthly visits. And their click-through rate on AI Overview SERPs (8.2%) is higher than on SERPs without one (6.5%).
Listicles are supposed to be dying. Zapier's are compounding.
AI Overviews are eating informational SERPs. Google's Helpful Content updates have wiped affiliate roundups off the first page. Even legacy publishers have lost half their traffic.
But Zapier keeps climbing.
So I pulled the data. I exported every "best of" listicle Zapier has indexed (URLs with "best-" in the slug). The result: 328 pages pulling a combined 1.08M monthly organic visits in the US alone. And that's just listicles with "best" in the URL; there are plenty more without it.

One page, targeting the keyword “best wireframe tools”, brings in 245,357 visits per month, up 109,581 from the prior period.
Then I pulled the keyword-level data and went deep on the top 25 pages section by section. A clear pattern emerged. Here are the five things Zapier does that almost nobody else does, plus data showing why each one matters in the AI Overview era.
The pattern hiding in the data
78% of the keywords driving traffic to Zapier's listicles trigger an AI Overview.
Those AI Overview keywords account for 82% of all the traffic Zapier's listicles receive. Their median click-through rate on AI Overview SERPs (8.2%) beats their CTR on non-AI Overview SERPs (6.5%).
And of those 474 AI Overview keywords, 455 held their position versus the prior period. Only 9 lost rank.
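If you want to run this same analysis on your own keyword export, here's a minimal sketch. The column layout (keyword, monthly traffic, whether the SERP triggers an AI Overview, CTR) is hypothetical; your SEO tool's export will name these fields differently, and the sample rows below are illustrative, not Zapier's actual data.

```python
import statistics

# Hypothetical keyword-level rows: (keyword, monthly_traffic,
# triggers_ai_overview, ctr). Illustrative values only.
rows = [
    ("best wireframe tools", 245_357, True, 0.085),
    ("best screen recorder", 61_000, True, 0.079),
    ("best note taking app", 48_000, False, 0.066),
    ("best keyword research tool", 22_000, False, 0.061),
]

def aio_traffic_share(rows):
    """Share of total traffic from keywords that trigger an AI Overview."""
    total = sum(traffic for _, traffic, _, _ in rows)
    aio = sum(traffic for _, traffic, has_aio, _ in rows if has_aio)
    return aio / total

def median_ctr_split(rows):
    """Median CTR on AI Overview SERPs vs. SERPs without one."""
    aio = statistics.median(ctr for *_, has_aio, ctr in rows if has_aio)
    non = statistics.median(ctr for *_, has_aio, ctr in rows if not has_aio)
    return aio, non

print(f"AI Overview traffic share: {aio_traffic_share(rows):.0%}")
aio_ctr, non_ctr = median_ctr_split(rows)
print(f"Median CTR with AIO: {aio_ctr:.1%}, without: {non_ctr:.1%}")
```

The two numbers this produces (traffic share from AI Overview keywords, and the CTR split) are exactly the headline stats in this teardown, so it's a quick sanity check to run on your own site.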

The keywords most marketers are panicking about (the ones with AI Overviews) are the ones doing the heavy lifting for Zapier. Not just surviving, but compounding.
Zapier's listicles are designed to coexist with AI Overviews instead of being replaced by them. Here's what they're doing differently.
1. They tell you exactly how they tested, in painful detail.
Most listicles open with something like "we researched the best tools, so you don't have to." It's vague, and increasingly, it's not enough. Google's quality raters and AI Overview models both reward specificity that generic openers can't provide.
Zapier opens differently. Every listicle has a section that reads like a methodology disclosure on a research paper.
From their wireframe tools listicle:
"I researched over 40 of the top wireframe apps and tested the best ones: I went through the entire process of signing up for an account, designing a basic mobile app wireframe (login page and dashboard), testing the collaboration and export options."
From their screen recording listicle:
"This year, I evaluated and tested nearly 50 screen recording apps."
Here's why this matters in the AI Overview era specifically. AI Overviews compress generic information into the Overview itself. Anything Google's AI can paraphrase, it will, and the source loses the click. Vague methodology gets absorbed. But specific, hands-on methodology ("I tested 40 apps by signing up and building a login page wireframe") can't be summarized away. It's too granular, too first-person, too tied to lived experience. The reader has to click through to get it.
The point: vague methodology gets eaten by the Overview. The specific methodology survives and earns the click beneath it.
2. Their lists are embarrassingly short.
Check the listicles from most SaaS companies and you'll find 11, 15, 22, or even 25 tools.
Zapier's wireframe listicle has 6 tools.
Their screen recording listicle? 7 tools.
Their note-taking listicle? 7 tools.
Their keyword research listicle? 4 tools.
Across the listicles I analyzed, the typical list runs 6 to 8 picks. Comprehensive isn't winning anymore.
When you list 25 tools, each one gets two thin paragraphs and a feature dump. When you list 6 tools, each one gets 400 to 600 words of substantive review. The user gets actual help instead of a directory.
This matters for AI Overviews because of how Google's AI sources information. A 25-tool listicle reads to the AI as a directory: shallow per entry, easy to summarize into a bulleted "here are some options" Overview. A 6-tool listicle reads as an in-depth analysis: substantive per entry, harder to compress without losing the value. The Overview can't replace it because there's nothing to extract that does the original justice. The reader still has to click.
Recommending only 6 tools out of 40 you tested takes editorial conviction. That conviction is exactly what Google's quality raters are now trained to look for, and it's exactly what makes a result worth clicking when an AI Overview has already given the user a quick answer.
3. Every single tool gets a unique "best for" label.
This one is small, but it's load-bearing for the entire strategy.
In Zapier's wireframe listicle, each pick gets a distinct qualifier:
Figma for real-time collaboration
Moqups for beginners
Balsamiq for non-designers
UXPin for code-based design
Justinmind for interactive wireframes
Visily for AI-assisted wireframing
Six tools with completely different "best for" qualifiers, and no overlap.

This matters because the keyword "best wireframe tools" doesn't represent one search intent. It represents a dozen. Some readers want collaboration. Some want simplicity. Distinct qualifiers let Zapier capture every sub-intent inside one page, which is part of why that wireframe listicle ranks for 464 different keywords.
It's also why AI Overviews can't fully replace the page. The Overview answers "what's the best wireframe tool" with one or two general recommendations. But it can't answer "what's the best wireframe tool for non-designers" with the same authority. The sub-intent is too specific for a generalized summary. That's where Zapier's structure earns the click.
4. They tell you what each tool is bad at.
Real reviews include real drawbacks. Vendor-supplied content doesn't.
In Zapier's screen recording listicle, the Camtasia review includes this line:
"My one complaint about Camtasia is the pricing structure. There's no monthly option, and the entry-level starter plan is quite limited."

These aren't fake drawbacks like "the only downside is that it's too feature-rich." They're honest limitations from someone who actually used the product.
Including a real drawback per tool is among the strongest signals to Google's quality raters that a listicle wasn't written from press releases.
But it matters for AI Overviews for a different reason: Overviews are generated from vendor sites, product pages, and aggregated marketing copy, none of which include drawbacks.
So when a reader wants to know what's actually wrong with a tool (a question they'll always have before buying), the Overview can't help them. They have to leave it and click through. Zapier wins that click because it is one of the few sources where the answer exists.
5. They built for the AI Overview, not against it.
Here's the pattern I almost missed until I cross-referenced the data.
Look at how Zapier opens every listicle: a "What is X?" educational section, followed by a "What makes the best X?" criteria section, then a comparison table, and finally individual reviews.

That structure isn't just good for human readers. It appears engineered to generate AI Overviews.
From what we can observe, Google's AI extracts answers from the most semantically clean, well-structured sections it can find. Zapier's "What is X?" sections feed the Overview's definitional component. Their "What makes the best X?" sections feed the criteria component. Their comparison tables feed the at-a-glance recommendations.
When an AI Overview is built primarily from a single source, that source gets cited. And when it's cited in the Overview, it gets the click. That's likely a meaningful contributor to why Zapier's CTR on AI Overview SERPs is higher than on non-AI Overview SERPs: they're not just ranked underneath the Overview, they're inside it.

Most B2B SaaS companies structured their content for the old SERP. The work now is restructuring it for the new one.
"But isn't this just because they're Zapier?"
The obvious objection at this point is that Zapier is an industry leader with an authority score of 75.
It's a fair objection, and it's the first thing I tested when the data came in. Three things push back on it.
First, if domain authority alone explained Zapier's listicle performance, every category would perform similarly. Instead, their Productivity listicles collapsed 44% over the period I analyzed, while their Marketing/SEO listicles grew 62%. Same domain, same authority, but wildly different outcomes. The variable isn't the backlinks. It's something else, and the data points to the formula.
Second, the CTR gap (8.2% on AI Overview SERPs versus 6.5% on non-AI Overview SERPs) isn't a domain authority story. Domain authority influences ranking, not click-through behavior once you're already ranked. The CTR difference comes from the page being visibly better to the user than the alternatives, including the AI Overview itself.
Third, this isn't survivorship bias. I'm not just looking at the listicles that won. The Productivity collapse is in the same dataset. The formula has visible failure modes (which we'll get to in the next section) and the failures aren't random. They're concentrated in categories where the underlying search demand is migrating away from comparison content entirely.
Domain authority makes it easier for Zapier to rank. The formula is what makes the rank stick when AI Overviews show up.
Where the formula compounds, and where it dies
I categorized all 328 listicles by topic and looked at how each cohort moved versus the previous period:

Marketing and SEO listicles are exploding. Productivity is collapsing. AI is flat despite being the fastest-growing search category worldwide.
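For anyone reproducing this cohort view on their own export, a sketch of the calculation. The rows below are hypothetical placeholders (chosen so the output mirrors the 62% growth and 44% collapse reported above), not Zapier's actual page-level figures.

```python
from collections import defaultdict

# Hypothetical (category, current_traffic, prior_traffic) rows.
# The real dataset is Zapier's 328 listicles, not reproduced here.
pages = [
    ("marketing_seo", 130_000, 80_000),
    ("marketing_seo", 32_000, 20_000),
    ("productivity", 56_000, 100_000),
    ("productivity", 28_000, 50_000),
    ("ai", 40_000, 41_000),
]

def cohort_change(pages):
    """Percent traffic change per category vs. the prior period."""
    cur, prior = defaultdict(int), defaultdict(int)
    for cat, now, before in pages:
        cur[cat] += now
        prior[cat] += before
    return {cat: (cur[cat] - prior[cat]) / prior[cat] for cat in cur}

for cat, delta in cohort_change(pages).items():
    print(f"{cat}: {delta:+.0%}")
```

The point of grouping by category before computing the delta is that page-level noise washes out, and the demand-shape story (which categories are growing vs. collapsing) becomes visible.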
The pattern isn't about Zapier favoring some categories over others. It's about demand shape.
Their “best AI image generator” page lost 23,819 monthly visits. Their “best AI chatbot” page lost 4,594. Their “best AI writing generator” lost 1,306. These pages are well-written and follow the formula. They're losing because users no longer search for "best AI image generator." They open ChatGPT and ask it to generate the image directly.
Meanwhile, the AI listicles that are growing target categories where users still need a dedicated tool. “best AI grammar checker” gained 11,520 visits, “best AI video generator” gained 5,440, and “best AI app builder” gained 2,740.
The lesson: the formula doesn't save you from category collapse. If the underlying search demand is migrating to native AI tools, no amount of testing transparency or "best for" qualifiers will hold the page up.
The strategic implication: before you invest in a listicle, ask whether the category itself will still need one in 18 months. Listicles work when readers still need to compare specific tools. They don't work when readers can ask an AI to do the job directly.
What this means if Zapier is in your SERPs
If you're a SaaS startup writing listicles in a category where Zapier already ranks, here is what the data tells you to do (and not do).
Don't try to outrank Zapier on the head term. "Best wireframe tools" gets 1.83M searches a month. Zapier owns it. You won't dislodge them with another 7-tool listicle.
Do go after the sub-intents that Zapier didn't qualify for. Their wireframe listicle has 6 "best for" qualifiers. That leaves dozens of unqualified sub-intents on the table: best wireframe tool for solo founders, best wireframe tool for handoff to Figma developers, best wireframe tool for mobile-first design, and so on. Each of those is a low-volume, low-competition keyword that can rank with a focused page.
Do build categories Zapier hasn't covered yet. They have 328 "best of" listicles. They are missing thousands more, especially in vertical-specific categories (best CRM for solo financial advisors, best invoicing tool for design agencies). These aren't where Zapier competes. They're where you can.
Do use Zapier as the methodology benchmark, not the format benchmark. Don't copy their layout. Copy their rigor. Test the tools. Show the testing. Include drawbacks. State the criteria. The format is replicable. The discipline is what separates ranking pages from ignored ones.
The 12-point checklist for listicles that actually rank
Here's the full checklist I now use to evaluate any listicle, whether it's mine, a client's, or a competitor's:
Named expert author with a real bio establishing domain credibility (not "Staff Writer")
Editorial independence statement declaring the content is written by humans, not paid placement
Affiliate transparency disclosed clearly near the top
Explicit testing methodology naming how many tools you evaluated and what tasks you performed
Stated evaluation criteria (3 to 5 named criteria) listed before the reviews begin
Original screenshots showing your own usage, not vendor marketing images
Tight list size of 5 to 10 picks, with the number of tools evaluated stated openly
Unique "best for" qualifier on every single pick, no overlaps allowed
At least one honest drawback per tool, specific and real
Comparison table with tool, "best for," and pricing visible above the fold
Educational context section ("What is X?" and "What makes the best X?") before the reviews
Current pricing for every pick, plus a visible last-updated date and the current year in the title tag
That's the formula. Twelve elements. Most listicles in the wild fail on at least seven of them.
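If you want to make the checklist operational before the Listicle Reviewer ships, here's a minimal scoring sketch. The criterion names are my own shorthand for the twelve points above, and the pass/fail inputs are manual judgments you supply after reading a page, not automated detection.

```python
# Shorthand labels for the 12 checklist criteria above (my naming).
CHECKLIST = [
    "named_expert_author",
    "editorial_independence_statement",
    "affiliate_transparency",
    "explicit_testing_methodology",
    "stated_evaluation_criteria",
    "original_screenshots",
    "tight_list_size",
    "unique_best_for_qualifiers",
    "honest_drawback_per_tool",
    "comparison_table_above_fold",
    "educational_context_section",
    "current_pricing_and_date",
]

def score_listicle(passed: set[str]) -> tuple[int, list[str]]:
    """Return (score out of 12, criteria still missing, in priority order)."""
    unknown = passed - set(CHECKLIST)
    if unknown:
        raise ValueError(f"Unknown criteria: {unknown}")
    missing = [c for c in CHECKLIST if c not in passed]
    return len(CHECKLIST) - len(missing), missing

# Example: a typical listicle that fails on seven of the twelve points.
score, missing = score_listicle({
    "named_expert_author",
    "affiliate_transparency",
    "comparison_table_above_fold",
    "educational_context_section",
    "current_pricing_and_date",
})
print(f"{score}/12 — fix first: {missing[0]}")
```

Keeping the criteria in priority order means `missing[0]` always surfaces the highest-impact gap to fix first, which is the same "what to fix first" output the Reviewer tool is meant to give you.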
To recap
Zapier has 328 "best of" listicles pulling over a million monthly visits. 82% of that traffic comes from SERPs that have AI Overviews, and Zapier's CTR on those SERPs is higher than on SERPs without Overviews. The formula isn't surviving the AI Overview era. It's compounding through it.
Where it doesn't compound is in categories where the search demand itself is collapsing into native AI tools. The formula works only where the category still needs it.
You don't need Zapier's domain authority to copy the formula. You need writers with real expertise, the discipline to test before recommending, and the strategic judgment to pick categories that will still need a comparison page in 18 months.
Next: a tool to score your own listicles
Which raises the obvious question: how do you know if your own listicles pass this checklist?
That's exactly what I'm building next. Very soon, I'll drop the Listicle Reviewer, a tool you paste any URL into. It scores your listicle against the 12 criteria above and more, flags what's missing, and tells you what to fix first. Built straight from this analysis.
Watch out for it.
And if you want help going deeper than listicles
The structural principles that make Zapier's listicles win on AI Overview SERPs (clean sections, criteria-led organization, definitional clarity) are closely related to the principles that make pages get cited inside ChatGPT, Claude, and Perplexity. Different surfaces, overlapping mechanics.
We call the broader problem the AI search gap, and it's where most B2B SaaS companies are silently losing pipeline.
If you want to see exactly where your product is invisible in AI search (and your top competitors are showing up instead), my team built the AI Search Gap Analysis tool. We map the prompts your buyers are running, score your visibility versus competitors, and hand you a 30-day action plan.
Hope you found this helpful.
Got any thoughts on this?
Let me know by replying to this email.
To your startup success,
Shehu AbdulGaniy
Founder, Your Content Mart
Want to hire me? I help B2B SaaS companies drive user signups and paying customers from organic search (and now AI search). Companies I've worked with include Copysmith, OneCal, and SweetProcess. Click here to set up an intro call.
P.S. If this teardown was useful, forward it to one person still publishing 25-tool listicles. They need to see the data.

