
69% of Companies Use AI. 90% Report Zero Productivity Gains. Here's What's Missing.

NBER studied thousands of executives. 69% of firms use AI. 90% say it's done nothing for productivity. The adoption numbers look great. The results don't. The gap has a name, and research shows exactly what closes it.

K — Founder, RepScale · 20 years in B2B sales

The National Bureau of Economic Research studied thousands of executives across industries. The first number: 69% of firms now use AI. That sounds like progress. The second number: 90% of those firms report zero measurable productivity impact. That's Yotzov et al.'s finding, published by NBER in 2026. The adoption story looks great, but the results story doesn't. The gap between those two numbers has a name, and it explains why most sales teams aren't getting what they expected from AI.

69% of firms using AI (Yotzov et al., NBER 2026)
90% report zero productivity impact (Yotzov et al., NBER 2026)
14% productivity gain when AI has a framework (Brynjolfsson et al., NBER)

What does the research actually say?

NBER isn't a vendor. It's one of the most respected economics research bodies in the United States. When NBER publishes a finding, it carries weight. What Yotzov et al. found in 2026 is blunt. Nearly seven out of ten firms have adopted AI. But nine out of ten of those firms see no measurable gain in productivity.

This isn't a small sample. The study covers thousands of firms across industries and sizes. It's not limited to one sector or one type of AI tool. The finding is consistent. Companies bought AI and deployed it, but the numbers haven't moved.

That's a hard finding for a lot of AI vendors. It's also hard for executives who approved six-figure AI budgets last year. But it shouldn't be surprising. We've seen this pattern before. PCs in the 1980s, the internet in the 1990s, cloud software in the 2010s. Adoption always outruns impact. The question is what separates the 10% who get results from the 90% who don't.

What's the "productivity J-curve" and why does it matter?

The gap between adoption and results has a name in economics. The productivity J-curve. It describes what happens when organizations adopt a new technology. Productivity doesn't rise immediately. It dips first.

The productivity J-curve: when companies adopt a new technology, productivity temporarily drops before eventually rising. The dip happens because the costs of adoption arrive immediately: learning, process change, integration, mistakes. The benefits take months or years to show up. Most companies are stuck in the dip right now. They've absorbed the cost of AI adoption but haven't adapted their workflows enough to see the return.

The curve looks like the letter J. Productivity drops below the starting point during the adoption phase, then gradually climbs back and eventually exceeds where it started. Most companies are sitting in the bottom of that J right now. They bought the tools. They rolled them out. And they're measuring results against what they expected, not against where they actually are on the curve.

The J-curve is a process problem, not a technology problem. The dip happens because organizations adopt the tool without adapting the work. They bolt AI onto existing workflows instead of rebuilding workflows around what AI actually does well. The longer they stay in that mode, the longer they stay in the dip.

This matters for sales teams because sales workflows are complex. A rep's day involves research, writing, prep, admin, and actual selling. Dropping an AI tool into that mix without rethinking how those activities connect keeps companies stuck. That's the pattern that traps teams in the bottom of the J.

Why AI tools fail in sales specifically

Sales has a problem that most industries don't face. The tasks AI handles in sales, like research, writing, and meeting prep, require domain expertise that generic AI doesn't have. Not just product knowledge, but knowledge of how selling actually works.

A research brief that lists facts about a company isn't useful to a rep. A brief that surfaces pain points, buying signals, and conversation hooks is the one reps use. It has to be structured around what matters for a specific sales conversation. The difference between those two outputs isn't more data. It's the AI knowing what "good" looks like in a sales context.

Same with email. An AI-written email that's grammatically correct and professionally worded is easy to produce. Any chatbot can do that. But an email that follows a proven cold outreach methodology? One that hits the right length, opens with a relevant observation, and closes with a low-friction ask? That requires a framework. Without one, you get output that looks polished but doesn't perform.

Workday found in 2026 that 37% of AI efficiency gains are lost to rework and correction. Think about what that means for a sales team. A rep gets an AI-drafted email, reads it, realizes it's off, and spends 15 minutes rewriting it. The AI saved 10 minutes of drafting time but added 15 minutes of editing time. Net result is negative. The rep is slower than if they'd just written it themselves.

That 37% rework number is the direct consequence of tools without a methodology built in. The output is plausible. It's not usable. And every minute spent fixing AI output is a minute that was supposed to be saved. I call this the correction tax. See the full breakdown in RepScale vs ChatGPT for sales.
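The correction-tax arithmetic is simple enough to sketch. This is a back-of-the-envelope illustration using the numbers from the example above, not figures from the Workday study:

```python
def net_minutes_saved(draft_minutes_saved: float, rework_minutes: float) -> float:
    """Net time impact of one AI draft: drafting time saved minus time spent fixing it."""
    return draft_minutes_saved - rework_minutes

# The example above: the AI saves 10 minutes of drafting,
# but the rep spends 15 minutes rewriting the output.
print(net_minutes_saved(10, 15))  # -5: the rep is net slower than writing it themselves
```

Run the same function with a framework-quality draft that needs two minutes of review instead of fifteen, and the sign flips. That sign flip is the whole argument.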

Reps figure this out fast. Two or three bad outputs and they stop using the tool. The adoption dashboard still shows them as "active users" because they logged in this month. But they're not using it for real work anymore. The tool is shelfware with a healthy-looking usage metric.

What does a framework-first approach look like?

The best evidence for the framework approach comes from Brynjolfsson et al. at NBER. They studied AI deployment in customer service, a domain with measurable, comparable output. When AI had a structured framework defining what good performance looked like, productivity jumped 14% overall. For newer workers, the gain was 34%.

That 34% number for newer workers is especially telling. The framework gave less experienced workers access to the patterns that experienced workers had internalized over years. The AI wasn't making things up. It was applying a defined methodology consistently.

Without the framework, gains were negligible. Same AI model, same underlying technology. The only difference was whether the AI had a structured definition of what "good" looked like.

Applied to sales, a framework means specific things. The AI knows that a cold email should be 75 words, not 200. That it should open with a relevant observation about the prospect's business, not a self-introduction. That a research brief should surface pain points tied to evidence, not generic industry challenges. That meeting prep should connect to prior research and outreach history. The rep walks in with continuity, not a cold start.

It means the AI knows what a good follow-up looks like versus a bad one. That a breakup email has a different structure than a warm intro. That a LinkedIn connection request has a different tone than a cold email. These aren't things a general-purpose AI tool knows out of the box. They're things a sales-specific framework teaches it.

The methodology is the missing layer. It sits between "we bought an AI tool" and "our team is more productive." Skip it and you get the NBER result: adoption without measurable impact. Build it in and the Brynjolfsson finding applies: 14% productivity gains, with more for junior reps.

This is why AI account research built on a sales framework produces different output than asking ChatGPT to "research this company." The tool matters less than the methodology behind it.

How to close the gap on your team

If your team has adopted AI but isn't seeing results, you're in the 90%. Here's how to start closing the gap. These steps aren't theoretical. They're what separates teams getting 14% gains from teams getting nothing.

1. Audit where reps actually spend their time

Not where you think they spend it. Where they actually spend it. Track two weeks of real activity data across your team. Salesforce's State of Sales data shows reps spend only 28% of their time selling. Your team's number might be higher or lower. You need to know before you can fix it.
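A minimal way to run that audit, assuming you can export two weeks of activity logs as (rep, activity, minutes) records. The category names and numbers below are made up for illustration:

```python
from collections import defaultdict

def time_breakdown(records):
    """Aggregate (rep, activity, minutes) records into percent of total time per activity."""
    totals = defaultdict(float)
    for _rep, activity, minutes in records:
        totals[activity] += minutes
    grand = sum(totals.values())
    return {activity: round(100 * mins / grand, 1) for activity, mins in totals.items()}

# Two weeks of hypothetical tracked activity
log = [
    ("rep_a", "selling", 560), ("rep_a", "research", 420), ("rep_a", "admin", 520),
    ("rep_b", "selling", 540), ("rep_b", "writing", 480), ("rep_b", "admin", 480),
]
print(time_breakdown(log))
```

Compare your team's "selling" percentage against the 28% benchmark. The categories that dominate the non-selling share are your candidates for step 2.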

2. Identify the most manual, repetitive tasks

You're looking for tasks that are high-frequency, time-consuming, and similar enough in structure that AI can handle them well. Account research fits. First-draft email writing fits. Meeting prep fits. Custom proposal writing doesn't. It's too variable. Internal strategy work doesn't. It requires too much judgment. Start with the tasks where the input and output are well-defined.

3. Test AI on those tasks with a framework, not a prompt box

This is where most teams go wrong. They give reps access to a chatbot and say "use this for research." That's not a framework. A framework defines the structure the output should follow. It shows what good looks like and how to review the output. McKinsey found that teams who get this right see a 20 to 40% reduction in time on AI-addressable tasks. Teams who skip the framework step see much less.

4. Measure output quality, not adoption rate

Stop tracking how many reps logged in this month. Start tracking whether AI output is being used in actual prospect-facing communication without major editing. If reps are using the research briefs as-is in their call prep, that's signal. If they're rewriting every email the AI produces, that's signal too. Just not the kind you want.
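The difference between the two metrics is easy to make concrete. This sketch assumes a hypothetical log where each AI output is flagged `used_as_is` when it reached a prospect without major editing:

```python
def adoption_rate(all_reps, active_reps):
    """The vanity metric: share of reps who logged in this month."""
    return len(active_reps) / len(all_reps)

def used_as_is_rate(outputs):
    """The signal: share of AI outputs used in prospect-facing work without major edits."""
    return sum(1 for o in outputs if o["used_as_is"]) / len(outputs)

# Hypothetical month: every rep "adopted" the tool, but most output gets rewritten
reps = ["a", "b", "c", "d"]
outputs = [
    {"rep": "a", "used_as_is": True},
    {"rep": "b", "used_as_is": False},
    {"rep": "c", "used_as_is": False},
    {"rep": "d", "used_as_is": False},
]
print(adoption_rate(reps, reps))   # 1.0  -- the dashboard looks healthy
print(used_as_is_rate(outputs))    # 0.25 -- the real story
```

Same team, same month, opposite conclusions. Track the second number.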

5. Give it 90 days before judging ROI

The J-curve is real. The first 30 days are messy. Reps are learning the tool, figuring out what works, developing their review habits. The real signal shows up in months two and three. If you judge at day 14, you'll kill a tool that would have delivered results by day 60. Before committing budget, calculate the ROI so you know what success looks like and when to expect it.
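One way to set that expectation up front is a simple ROI model. All numbers below are illustrative placeholders, not benchmarks:

```python
def monthly_roi(num_reps, hours_saved_per_rep, loaded_hourly_cost, tool_cost_per_month):
    """Monthly ROI: value of rep time saved, net of tool cost, relative to tool cost."""
    value_of_time_saved = num_reps * hours_saved_per_rep * loaded_hourly_cost
    return (value_of_time_saved - tool_cost_per_month) / tool_cost_per_month

# Illustrative inputs: 10 reps, 5 hours saved each per month,
# $60/hr loaded cost, $1,500/month tool spend
print(round(monthly_roi(10, 5, 60, 1500), 2))  # 1.0, i.e. a 100% monthly return
```

Working backwards from the same formula tells you the break-even point: with these inputs, the tool pays for itself at 2.5 saved hours per rep per month. Judge day-90 results against that threshold, not against a vague sense that things should feel faster.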

If you're not sure where your team stands, start with an AI readiness assessment. It takes 15 minutes. It tells you whether your current setup is ready for AI or whether you've got gaps to fill first.

The companies that get this right

The 10% of companies that report actual productivity gains from AI share a few traits. None of them are about having better technology.

Clean data. Their CRM isn't a graveyard of stale records. Their prospect lists are current. Their deal stages mean something. AI built on bad data produces bad output. No framework can fix garbage inputs.

Defined process. They know what their sales workflow looks like before AI touches it. They've mapped which tasks take the most time and which have the most variation. They know which tasks have the clearest "good output" definition. They're not asking AI to figure out their process. They're asking AI to execute a process they've already defined.

Leadership reinforcement. The sales manager uses the tool. Reviews output in 1:1s. Gives feedback on what's working. When leadership treats AI as optional, reps treat it as optional. When leadership treats it as part of the workflow, adoption becomes real instead of performative.

Framework-first tools. They chose tools with a methodology built in. Not blank prompt boxes where each rep figures out the right instructions alone. The Brynjolfsson study proved this. The framework separates 14% gains from zero gains. The 10% who get results understood that before they bought anything.

That 14% isn't theoretical. It's measured. And the 34% gain for newer workers means AI with a framework is also a training tool. Junior reps producing output that looks like it came from a 10-year veteran isn't a stretch. It's what the data shows when the framework is in place.

The difference between AI that works and AI that doesn't isn't the model or the price. It's whether there's a sales methodology connecting the input to the output. That's it. The companies that get this right aren't using better AI. They're using AI better.

Frequently Asked Questions

Why isn't AI improving sales productivity?

NBER research shows 90% of firms using AI report zero productivity gains. The issue isn't the technology. It's how teams set it up. Most teams adopt AI tools without adapting their workflows or building a sales-specific framework. Without a methodology that tells the AI what "good" looks like, the output is generic and requires heavy editing. That rework erases the time savings. Workday found 37% of AI efficiency gains are lost to rework when there's no structured framework.

What is the productivity J-curve?

The productivity J-curve is an economics concept describing what happens when organizations adopt new technology. Productivity dips below the starting point before eventually rising above it. The dip occurs because the costs of adoption arrive immediately. Training, process change, integration time, early mistakes. The benefits take months or years to show up. Most companies using AI are in the dip right now. They've absorbed the costs but haven't adapted their processes enough to see the return.

How long does it take to see ROI from AI in sales?

For teams with a framework-first approach, clean data, and defined processes, time savings show up around month three. Quality improvements typically take six months to appear in pipeline metrics. Better reply rates, stronger first meetings, shorter sales cycles. Give any AI setup at least 90 days before judging. The first 30 days are adoption friction, not a reliable indicator of long-term value.

What's the difference between AI adoption and AI impact?

Adoption means your team is using AI tools. Impact means those tools are producing measurable improvements in productivity, output quality, or revenue. The NBER data shows these are very different things. 69% of firms have adopted AI. 90% report no measurable productivity impact. Tracking adoption rate tells you almost nothing about whether the tool is working. Logins, sessions, and queries don't measure value. Output quality and time-to-usable-result are better indicators.

Do sales teams need AI consulting to get results?

Not always, but the data suggests most teams need a structured framework to get results from AI. The Brynjolfsson NBER study showed AI with a framework produced 14% productivity gains. AI without one produced negligible results. Some teams can build that framework internally with the right experience and time. Others benefit from outside help to avoid the trial-and-error period that keeps teams stuck in the J-curve dip. The deciding factor is usually whether you have someone on the team who's done this before.

What percentage of companies actually benefit from AI?

Based on Yotzov et al.'s NBER research, roughly 10% of firms using AI report measurable productivity improvements. That's not because AI doesn't work. Most rollouts lack the process adaptation and framework that make AI effective. The companies in that 10% share common traits. Clean data, defined workflows, leadership reinforcement, and tools built on a methodology.

K — Founder, RepScale

20 years in B2B sales carrying quota and closing deals with Fortune 500 companies. Based in Metro Atlanta. Built RepScale because nothing else was built with a real sales methodology behind it.

Try RepScale free

Account research, outreach, and meeting prep — connected workflow, ready to send.