Why Your Adoption Data Is Only Telling You Half the Story
Every little kid who has ever watched a magician pull a coin from behind their ear knows the feeling. The gasp. The grab at the nearest adult's arm. The ten minutes of trying to find coins behind everyone else's ears. Completely, joyfully sold. No need to understand how it worked. The not-knowing was actually the point.
Your early AI adoption data may be telling you the same kind of story.
Most AI transformation leaders are working with incomplete information. Not because they're lazy or unsophisticated, but because the tools available to them were built for a different era of workplace change, and they weren't designed to see beneath the surface of behavior.
Usage metrics tell you something real. Logins, activation rates, and task completions matter. But they're signals about what people are doing, not about the psychological conditions that determine whether what they're doing will sustain, scale, or quietly fall apart six months from now.
A 2025 study in the Journal of Marketing illustrates exactly why this gap matters, and how strange and non-linear the picture gets when you actually look beneath the surface.
A Finding That Defies Simple Explanation
Tully, Longoni, and Appel set out to understand what predicts AI receptivity across individuals. What they found contradicted virtually everyone's intuition, including seasoned executives: people with lower AI literacy are more receptive to AI, not less. The pattern held across seven studies and 27 countries.
The mechanism they identified is genuinely interesting. Lower-literacy individuals are more likely to perceive AI as magical. When AI produces outputs that seem to require distinctly human qualities like creativity, judgment, and empathy, people who don't understand how that's mechanically possible experience something closer to awe than skepticism. And that awe drives receptivity.
So far, counterintuitive but maybe manageable. But then it gets more complex.
These same lower-literacy individuals simultaneously hold more fear about AI's impact on humanity and rate AI as less capable than their higher-literacy counterparts. Yet they still prefer to use it more. High enthusiasm. High anxiety. Low understanding. High receptivity. All coexisting in the same person at the same time.
That is not a profile any standard adoption dashboard is designed to detect. It doesn't register as resistance. It doesn't register as disengagement. It shows up as early adoption, which most organizations would log as a win.
What This Actually Tells Us About Measurement
The point here is not that enthusiastic early adopters are a problem to be managed. The point is that a login metric cannot tell you why someone is using a tool, what psychological state they're in when they use it, or whether the conditions exist for that usage to become durable capability.
The Tully findings are one illustration of a broader truth about AI adoption in the workforce: the psychological experience of interacting with AI is non-linear, layered, and often contradictory. People can be drawn toward something they fear. They can adopt tools they don't trust. They can appear ready on every surface measure while carrying the exact conditions — cognitive overload, identity threat, insufficient organizational support — that predict quiet disengagement down the road.
None of that complexity is exotic. It is the normal human response to significant workplace disruption. But you can only act on it if you have instruments designed to surface it.
Engagement surveys were not built for this. Change management frameworks were not built for this. Training completion reports were not built for this. These are good tools. They just weren't designed to answer the question of whether someone is psychologically equipped to absorb a generational change in how work gets done.
Why the Stakes Are Higher Than They Appear
There is an equity dimension here that deserves more attention than it typically receives in AI transformation conversations.
The cross-country data in the Tully study showed that populations with lower AI literacy had higher AI receptivity at the national level. The researchers frame this primarily as a marketing insight. But consider the workforce implication. If the workers most eager to adopt AI are also the least equipped to evaluate its outputs, identify errors, recognize bias, and know when a recommendation should not be trusted, that creates a compounding vulnerability over time.
Enthusiasm without literacy is not a neutral condition. For workers whose livelihoods depend on making sound decisions with AI-assisted information, it is a risk factor. Organizations and funders focused on equitable workforce development need measurement tools that can see this dynamic, not just count who completed the onboarding module.
The Deeper Question
All kids eventually figure out the coin trick. A slightly older kid at school explains the mechanics, and the magic evaporates overnight. That kid won't be devastated (probably). But your workforce doesn't get to just move on when the awe wears off mid-deployment.
If your current data cannot distinguish between awe-driven adoption and capability-grounded adoption, you do not actually know where your workforce stands. You know what they are doing. That is a different and much narrower question.
The organizations that navigate AI transformation well are not necessarily the ones that move fastest. They are the ones that understand what is actually happening with their people before they find out the hard way that adoption metrics and transformation outcomes are not the same thing.
Dr. Wendy Rasmussen is the founder of Alpenglow Insights, where she helps organizations measure and address the psychological barriers to AI adoption. She holds a PhD in Psychological and Quantitative Foundations (Counseling Psychology) from the University of Iowa and is completing her Executive MBA at UC Berkeley Haas School of Business.
Reference: Tully, S. M., Longoni, C., & Appel, G. (2025). Lower artificial intelligence literacy predicts greater AI receptivity. Journal of Marketing, 89(5), 1–20.
Schedule a discovery conversation →
Disclosure: This article was written by Dr. Wendy Rasmussen with generative AI used as an editorial tool for grammar and clarity. All ideas, analysis, and conclusions are the author's own.