Billions Spent on AI. Nobody Is Measuring What It's Doing to Your People.

13 April 2026

Julie Hendry
CTO

The numbers are starting to come in and they don't say what the vendors promised.

A survey of over 6,000 C-suite executives by the Federal Reserve Bank of Atlanta found that nearly 90% of firms report no measurable impact from AI on employment or productivity over the past three years. Not a small impact. No impact. Meanwhile those same executives forecast AI will increase productivity by 1.4% over the next three years. The gap between what organisations believe AI will do and what it is actually doing is enormous.

Boston Consulting Group surveyed nearly 1,500 workers and found something more specific. Productivity increased when people used up to three AI tools. After that it dropped. More tools, less output. They called it "AI brain fry": the cognitive overload that comes from constantly switching between AI-powered systems, each with its own interface, logic, and demands on attention.

Researchers at UC Berkeley tracked a 200-person tech firm for eight months and found that AI didn't reduce work. It intensified it. People completed more tasks across a wider range. They took on more because AI made it easy to start things. But the result was more multitasking, more context-switching, and ultimately more burnout. The researchers published their findings in Harvard Business Review under a title that summarises the problem: "AI Doesn't Reduce Work. It Intensifies It."

ActivTrak's 2026 State of the Workplace report found that focus efficiency, the proportion of work time spent in uninterrupted concentration, has dropped to 60%. A three-year low. Directly correlated with the proliferation of AI tools in the workplace.

And ManpowerGroup's 2026 Global Talent Barometer found that while regular AI usage among workers jumped 13%, confidence in the technology fell 18%. People are using more AI and trusting it less.

The measurement problem

Every one of these studies measures something useful. Speed, output, adoption rates, self-reported productivity, time allocation. But none of them measures the thing that matters most: what is AI doing to how people think?

Are decision-making patterns changing? Is critical thinking being offloaded rather than augmented? Are collaboration dynamics shifting in ways that affect the quality of outcomes, not just the speed? Is the cognitive load of managing multiple AI tools degrading the capacity for deep, focused work?

These are not abstract questions. They are the questions that determine whether AI deployment is genuinely helping an organisation or quietly making it worse.

The reason nobody is answering them is that the instruments don't exist. The standard scales used to measure how people relate to technology were not designed for this. During my MSc research at the University of Strathclyde, I tested whether a widely used smartphone usage questionnaire predicted actual device behaviour measured by iPhone Screen Time. It didn't. The instrument was measuring something, but not what it claimed to. Self-report and reality were different things.
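The kind of validity check described above can be illustrated with a minimal sketch. The data below is entirely synthetic and the variable names are hypothetical; the real study used a published questionnaire and iPhone Screen Time logs. The idea is simply convergent validity: if a self-report instrument measures what it claims to, its scores should correlate strongly with the device-logged behaviour.

```python
import numpy as np

# Synthetic illustration only: convergent validity check between
# self-reported usage scores and device-logged screen time.
rng = np.random.default_rng(42)
n = 50

# Simulated device-logged daily screen time, in minutes.
screen_time = rng.normal(loc=240, scale=60, size=n)

# Simulated questionnaire scores that barely track actual usage:
# a weak signal buried in noise, mirroring a poorly predictive instrument.
self_report = 0.1 * screen_time + rng.normal(loc=0, scale=40, size=n)

# Pearson correlation: a valid instrument should show a high |r|.
# A value near zero means the questionnaire is measuring something,
# but not the behaviour it claims to capture.
r = np.corrcoef(self_report, screen_time)[0, 1]
print(f"Pearson r between self-report and logged usage: {r:.2f}")
```

A near-zero correlation here is the quantitative version of "self-report and reality were different things": the instrument produces scores, but those scores do not track the measured behaviour.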

If the measurement tools we have can't reliably capture something as simple as how much someone uses their phone, they are not going to capture what AI is doing to cognitive patterns in a workplace.

What this means for organisations

The EU AI Act already requires impact assessments for high-risk AI systems. The UK is moving in the same direction. Regulators are going to start asking organisations to demonstrate that they have measured the effect of AI on their people. Not just data privacy. Cognitive impact. Decision quality. Skill development.

Most organisations have no framework for this. No baseline. No instruments. No methodology.

The ones that start measuring now will have evidence when the regulators come asking. The ones that wait will be scrambling.

What beò is building

beò exists because the gap between what organisations think AI is doing and what it is actually doing is too important to leave unmeasured. We are building rigorous cognitive impact assessment tools for organisations deploying AI. Not adoption surveys. Not productivity dashboards. Instruments that measure what is happening to how people think, decide, and collaborate when AI enters their working environment.

Our founding research established that standard measurement tools in this field are inadequate. That's not a criticism of the researchers who built them. It's a statement about the pace of change. The tools were designed for a different era. AI in the workplace requires new instruments built from the ground up.

The research coming out in 2026 confirms what we set out to address. Billions are being spent. The impact is unclear at best and harmful at worst. And nobody has the instruments to tell the difference.

That's the problem beò is solving.

Founding research cohort

We're selecting 3 to 5 organisations for our first AI workforce cognitive impact study. The first step is a 30-minute conversation: no pitch, just a discussion about what you're seeing.

Book a conversation