A typical “power user” of artificial intelligence is likely to be a seasoned employee, often at the executive level, who applies the tools to strategic tasks rather than as personal productivity aids. And leaders and advocates need to look beyond simple usage metrics to determine the success of their AI deployments, a new study asserts.
In my last post, I explored a real-world example of what’s involved in building a power team ready to successfully lead AI initiatives. The next step is to identify and bring along the power users who will demonstrate the capabilities of AI. But where are they?
“Leaders often struggle to assess whether their employees are using AI successfully. Often, they default to measuring what’s easily observable: how much they’re using AI tools,” states Jaime Schmidt, professor at The University of Texas at Austin, and a team of co-authors in a study published in Harvard Business Review.
The study involved an observation of 2,500 employees using AI over an eight-month period, analyzing more than 1.4 million prompts. The team, from The University of Texas and KPMG, looked for evidence of “sophisticated use” of AI: “specific prompting strategies, clear and ambitious requests, and a level of comfort with the tools.”
Fittingly, the researchers used AI (ChatGPT model o1) to help analyze the data.
Only about 5% of the employees studied could be considered “top users,” the researchers estimated. Those individuals tended to be “ambitious with how they approached AI, treated it as a reasoning partner, delegated complex tasks with clear objectives, and treated AI as a general cognitive tool rather than a mere productivity tool,” they found.
The researchers expressed surprise that many of these top users were above manager level. Adoption also tended to be uneven, underscoring how “driving meaningful, value‑creating use requires more deliberate intervention than simply making tools available to employees.”
The higher-level executives tended to use LLMs “for a greater diversity of tasks, like technical guidance and ideation. This suggests that experience and role context shape not just how often AI is used, but how it is integrated into core work.” Staff-level employees, on the other hand, employed AI tools for personal tasks.
Their conclusion: “frequency of use may not be a reliable signal for productivity. Employees in roles below manager are less likely to use a deliberate strategy when engaging with the LLM, whereas manager and above employees are more likely to do so.”
How can an organization encourage or enable more people to aspire to be AI power users, exceeding that 5% threshold that Schmidt and her team found? Prasad Setty, former Google VP of people operations and Project Aristotle leader, along with Jennifer Carpenter, global head of talent at Analog Devices, discussed their analysis of power users, and how to cultivate more of them, in a recent panel discussion.
“You don’t just start with adoption as a metric,” said Setty, echoing the findings from Schmidt’s study. “It is important but it’s a limited metric. We wanted to look at quite a few different variables that would go into what good usage looks like. Certainly the volume and the frequency of how people are using it, but also we wanted to look at the breadth of the conversations and make sure that people were thinking about it for different things that they might come out with.”
Such higher-level applications include goal setting, feedback, and onboarding, Setty illustrated. In addition, look at “the quality of the conversations itself, because AI is so capable of giving you answers. It is very easy to get an answer for any question that you want to ask. And some of them are very much about, ‘well, should I do X or Y?’ Others are much more in-depth; they go into the depth of ‘how should I think about solving this friction, and here are the risks that I see. Here are the second-order effects that I see if I go down this approach.’”
A challenge with promoting AI power usage is a bifurcation risk: “AI haves versus have-nots,” Carpenter said. To promote greater power usage, employ “invites, not edicts,” she said. “Invite people to lean in. Invite people to try something. Don’t force it.” She related an experience two years ago with a rollout of GitHub Copilot. “I thought everyone was going to throw a parade when we gave the software developers GitHub Copilot. But they weren’t so thrilled initially about being mandated or kind of put on a forced march.”