If you've ever owned customer marketing at a small SaaS company, you know the math.
You have a list of customers you've been meaning to interview for a year. You have three half-finished case study drafts waiting on the customer's legal team. You have a product manager forwarding you a Slack screenshot of a great quote from a champion you've never spoken to. You have a webinar to promote, a deck to update, a sales enablement asset that's six quarters old, and someone on the exec team asking why there isn't a logo wall on the homepage yet.
Somewhere in all of that, you're supposed to be talking to customers. That's the job that actually moves the needle. And it's the first thing that gets pushed when everything else is on fire.
This is the reality for almost every company under $20M ARR. Maybe you have a dedicated product marketer, maybe you don't. Maybe a PM is doing customer advocacy on the side. Maybe the founder is still the one who runs the interviews because nobody else has the relationship. The Product Marketing Alliance's 2026 State of Product Marketing report shows the scope creep clearly — customer onboarding responsibilities for PMMs nearly doubled since 2024, while content marketing has started to slip as AI absorbs the lower-value execution work. The job is broadening even as the headcount isn't.
Either way, the bottleneck isn't that the interviews are hard — it's that everything around the interviews has compounded into a process so heavy that the only way to justify a customer conversation is to be sure you're going to get a hero asset out of it.
So you wait. You wait for the customer to hit a number worth quoting. You wait for legal and brand and the customer's CMO to bless every comma. You wait for the moment that justifies the production cost. And then you publish a polished trophy that gets passed to a champion, gets skimmed once, and disappears into the resource center.
That whole posture — wait until the story is perfect — is timing the S&P with customer proof. And it loses for the same reason market timing loses: the cost of waiting is invisible, but it's huge. Every month you don't talk to a customer is a month of language, objections, and use-case detail you'll never recover. Every "we'll do it when the numbers are bigger" is a Part 1 you'll never publish, which means there's no Part 2, which means a year from now your sales team is still running on the same two case studies from 2023.
The "bad case study" is the right case study
A B2B marketer I follow, Collin Mayjack at Sybill, posted last week about launching what he called a "bad case study." No hard numbers. No big revenue claims. Instead: matcha lines, bull riding, and a real human walking through how she actually uses the product day-to-day. (You can read the actual case study here — it's about Tofu CRO Elaine Zelby, and it does exactly what Collin says it does.) He flagged it openly: should they go back and do a Part 2 with hard numbers? Probably. But in the meantime, he's happy to have it out there.
That instinct is the right one, and I want to extend it. The lower you can drive the cost of producing customer stories — interview time, approvals, editing, design — the more "bad" case studies you can put out, and the more compounding ground you gain.
Three things happen when you stop treating each case study as a launch event:
You can publish a Part 1 without knowing what Part 2 looks like. Most companies refuse to commit anything to public until they have a full hero arc. But the most honest customer stories aren't arcs — they're checkpoints. Here's where they were six months in. Here's where they were a year in. Here's what changed when we shipped the new module. That's a story prospects can actually project themselves into, because real adoption looks like that, not like a Sundance trailer.
You can capture perspectives that don't fit the highlight reel. The CFO who liked the procurement experience. The end user who hated it for the first month and then changed their mind. The implementation lead who'll tell you exactly what broke. None of those make it into the trophy version, because the trophy version is calibrated to one persona at the top of the funnel. But ask any prospect what they actually want to know before signing, and it's never just the ROI headline. It's "what does my next six months actually look like?"
You stop optimizing for the wrong reader. A polished case study is built for the champion who's already convinced. A messier, more human story is built for the skeptic who needs to see proof in their own language. Those are different people, and the second one is who actually decides.
The bigger shift: case studies are becoming a data layer, not an artifact
Here's the part I think gets missed when people argue about case study quality.
Static case studies — even good ones — were a workaround for a constraint that no longer exists. The constraint was that you couldn't reasonably custom-tailor proof for every prospect, so you had to bet on one polished version and hope it landed close enough to what the buyer needed. That's why the form looks the way it does: hero customer, hero metric, hero quote, all chosen to be defensible across the widest possible audience.
But buyers don't research that way anymore. They research the way they research everything else now — by asking an AI a specific question and expecting a contextualized answer. G2's Answer Economy report, published in April 2026, found that 51% of B2B software buyers now start their research with an AI chatbot more often than with Google — up from 29% just eleven months earlier. Sixty-nine percent chose a different vendor than they originally planned because of what an AI surfaced, and a third bought from a vendor they'd never heard of. The buyer is no longer skimming your case study at the start of the journey. They're asking an AI a question, and the AI is deciding whether you're the answer.
What this looks like in practice
Here's an example from a recent session of mine. I asked an AI assistant to evaluate Gong.io for Proofmap — an AI-native company with a small sales team and a SaaS-plus-services model — and pull case studies from Gong's customers that would actually map to my situation. The output wasn't a polished marketing artifact. It was a contextualized fit analysis built from Gong's public proof.
That's an AI assistant doing what a human buyer used to do: synthesizing what a vendor does, pulling the case study that matches the buyer's profile, mapping the value to the buyer's specific situation, and offering a blunt verdict. The buyer never opened Gong's case study page. The AI did, on their behalf.
Notice what just happened. The AI didn't reproduce Gong's case study. It consumed Gong's case studies as raw material and translated them into a contextualized answer to the question I actually had: is this right for my business? It pulled the Demandbase case study specifically because Demandbase is closer to my GTM motion. It told me where Gong would help and where it wouldn't. It even offered to build a "should we buy now" decision model based on my exact revenue and team size.
This is the world buyers are already operating in. A senior IT buyer evaluating your platform doesn't want your case study. They want the answer to the specific question they have, pulled from the actual experience of customers who look like them. How did mid-market manufacturing companies handle the rollout? Which integrations broke for teams under 50 engineers? What did the procurement process actually look like for a regulated buyer?
If your customer proof lives as a static PDF, you can't answer those questions. If it lives as structured, on-record interview data, you can answer almost any version of them — on demand, in the voice of the customer who actually said it.
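To make "structured, on-record interview data" concrete, here's a minimal sketch of what one captured data point could look like. Everything in it is an assumption for illustration: the InterviewSnippet type, the field names, and the example record are hypothetical, not a real schema from any product. The shape is the point. Each quote carries enough metadata (speaker role, company profile, lifecycle stage, topic) that proof can later be matched to a specific buyer's question.

```typescript
// Hypothetical schema for a single on-record interview snippet.
// The quote is the payload, but the metadata is what makes it
// answerable: it lets a retrieval layer (human or AI) match proof
// to a specific buyer's question instead of shipping one static
// PDF to everyone.

interface CompanyProfile {
  industry: string;        // e.g. "manufacturing"
  employeeCount: number;
  segment: "SMB" | "mid-market" | "enterprise";
}

interface InterviewSnippet {
  customer: CompanyProfile;
  speakerRole: string;     // "CFO", "implementation lead", "end user"
  lifecycleStage: string;  // "rollout", "six months in", "renewal"
  topic: string;           // "integrations", "procurement", "adoption"
  onRecord: boolean;       // cleared for external use?
  quote: string;           // verbatim, in the customer's own words
  capturedAt: string;      // ISO date of the interview
}

// One data point from one "bad" case study interview:
const snippet: InterviewSnippet = {
  customer: { industry: "manufacturing", employeeCount: 350, segment: "mid-market" },
  speakerRole: "implementation lead",
  lifecycleStage: "rollout",
  topic: "integrations",
  onRecord: true,
  quote: "The ERP connector broke twice in week one; support had us patched by week two.",
  capturedAt: "2026-03-11",
};
```

Notice that the honest, unflattering quote is exactly the kind of data point the trophy version would cut, and exactly the raw material the skeptic's question needs.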
The Gartner analogy
There's a useful parallel happening right now in the analyst space. Companies like Olive are building AI-native alternatives to the Gartner / Forrester research model. The old model: pay six or seven figures for a Magic Quadrant or a Wave that one team of analysts decided was the right shape of the market for everyone. The new model: enterprise IT buyers and their consultants generate their own evaluation, scoped to their own requirements, on demand.
Gartner is the clearest case in point. As of May 2026, Gartner (NYSE: IT) is trading at ~$149 — down 63% year-over-year from ~$404. The market is pricing in what enterprise IT buyers are already discovering on their own: when buyers can generate their own evaluation on demand, they pay less for someone else's.
Industry observers are calling this shift directly. A February 2026 piece on how AI is disrupting the analyst industry made the case that the old analyst model worked because information was scarce — and now it isn't. AI can synthesize markets, compare platforms, and model TCO in seconds. What buyers actually want from a research artifact has shifted from "give me the standard view" to "give me the answer to my specific question."
The shift isn't "AI made the analysts faster." The shift is that buyers refuse to accept someone else's pre-baked research as the answer to their specific question when they can produce a more relevant version themselves.
The same shift is coming for customer case studies. The polished trophy is the Magic Quadrant of customer proof — useful as a category artifact, but increasingly the wrong shape for how decisions actually get made. What replaces it isn't no case studies. It's case studies that show up as the answer to the specific question a buyer was already asking.
What this means for how you invest
If case studies are becoming a data layer, the strategic move isn't to make each one more polished. It's to make each one more capturable.
Stop optimizing for one perfect interview that yields one perfect 800-word PDF. Start optimizing for a steady cadence of conversations with customers across roles, stages, and contexts — captured cleanly, structured well, and reusable across whatever shape the eventual output needs to take. A page on the site. A first-meeting talking point a rep can pull into a deal in 30 seconds. A direct answer when a prospect's AI agent asks how the product performed for someone like them.
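And here's a sketch of the deployment side, reusing the hypothetical InterviewSnippet schema from above. This is an illustration under the same assumptions, not a real implementation; in practice an AI layer would sit in front of it, but the matching logic is the part worth seeing.

```typescript
// Hypothetical retrieval over the snippet schema above: given a
// buyer's profile and the topic of their question, return only
// on-record quotes from customers who look like them.

function matchingProof(
  snippets: InterviewSnippet[],
  buyer: { segment: CompanyProfile["segment"]; industry: string },
  topic: string
): InterviewSnippet[] {
  return snippets.filter(
    (s) =>
      s.onRecord &&
      s.topic === topic &&
      s.customer.segment === buyer.segment &&
      s.customer.industry === buyer.industry
  );
}

// A prospect's AI agent asks: "Which integrations broke for
// mid-market manufacturing teams?" Translated into a query:
const answers = matchingProof(
  [snippet], // in practice, every interview you've ever captured
  { segment: "mid-market", industry: "manufacturing" },
  "integrations"
);
```

The same captured data point can back the web page, the rep's 30-second talking point, and the agent's answer. The output shape changes; the capture doesn't.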
The companies that win the next decade of customer proof won't be the ones with the prettiest case studies. They'll be the ones who treated every customer conversation as a data point worth capturing, and built the infrastructure to deploy that proof into whatever context their buyer is in.
This is the bet we're making with Proofmap — and it's also why I think Collin's "bad case study" instinct is exactly right. Ship the checkpoint. Capture the perspective. Treat the interview as the asset, not the article. The Part 2 you're waiting for is going to look very different from the Part 1 you're sitting on.
In the meantime, the company that publishes is the company prospects can find.

