Why Your Most Enthusiastic AI Adopters Might Be Your Biggest Risk

Here's a finding that should make every AI transformation leader pause.

A 2025 study published in the Journal of Marketing by Tully, Longoni, and Appel found that people with lower AI literacy are actually more receptive to AI — not less. Across seven studies and 27 countries, the pattern held: the less someone understands about how AI works, the more eager they are to use it.

At first glance, that sounds like good news. Your workforce is more enthusiastic than you expected. Adoption numbers look strong in the early weeks. Leadership is optimistic.

But as a psychologist who studies AI readiness in organizations, I read this finding differently. It tells me that enthusiasm and readiness are not the same thing, and that confusing the two is one of the most expensive mistakes leaders make during AI transformation.

The "Magic" Problem

The researchers identified the mechanism behind this paradox: people with lower AI literacy perceive AI as magical. They experience a sense of awe when AI performs tasks that seem to require distinctly human capabilities — writing, creating, empathizing, advising. Without understanding how large language models or pattern-matching algorithms actually work, the outputs feel extraordinary. Almost supernatural.

That sense of wonder drives initial adoption. People are drawn to tools that feel powerful and mysterious.

But awe is not a foundation for sustained, productive use. It's a feeling — and feelings shift. When the novelty fades, when the AI produces a confidently wrong answer, when a workflow change requires real cognitive effort, awe doesn't carry someone through. Self-efficacy does. Perceived usefulness does. Managerial support and protected learning time do.

This is precisely the distinction that most organizations fail to measure.

What This Means for AI Transformation Leaders

If you're leading an AI rollout and tracking adoption through usage metrics — logins, tool activations, completion rates — you may be looking at a dashboard that reflects curiosity, not capability. The Tully et al. research suggests that the people driving your early adoption numbers may be the same people who understand AI the least, who are most vulnerable to disappointment when the "magic" wears off, and who lack the foundational knowledge to use these tools effectively over time.

The researchers also found that people with lower AI literacy simultaneously hold more fear about AI's impact on humanity and perceive AI as less capable — yet still want to use it more. That's a complex psychological profile: high enthusiasm, high anxiety, low understanding. It's the profile of someone who is likely to adopt fast, plateau faster, and quietly disengage when things get hard.

Any leader who has watched promising pilot metrics collapse at scale knows this pattern.

The Measurement Problem

Here's the challenge: none of the standard tools in a transformation leader's toolkit are designed to detect this. Employee engagement surveys ask whether people are satisfied — not whether they're psychologically equipped to adopt a new way of working. Change management frameworks address resistance after deployment, not the upstream conditions that predict whether adoption will last. Training programs assume that if someone lacks a skill, teaching them the skill solves the problem. But if the barrier is identity threat, cognitive overload, or a fundamental mismatch between someone's perception of AI and its reality, more training doesn't help.

What you actually need is a diagnostic that distinguishes between enthusiasm and readiness — one that measures the psychological and organizational conditions that determine whether someone can and will adopt AI productively, not just whether they're willing to try it.

The Equity Dimension

There's a second implication of this research that matters enormously for anyone thinking about AI and the future of work.

The study's cross-country data showed that nations with lower AI literacy had higher AI receptivity. The paper frames this as a marketing insight. But the workforce equity implication is significant: if the populations most receptive to AI are also the least equipped to evaluate its outputs — to catch errors, identify bias, recognize when a recommendation is inappropriate — that creates a vulnerability pattern that compounds over time.

Workers who adopt AI enthusiastically but without literacy are at greater risk of over-reliance, poor decisions based on AI-assisted outputs, and eventual disillusionment. For organizations and funders focused on equitable workforce development, this isn't a theoretical concern. It's a measurable one — if you have the right instrument.

The Bottom Line

Enthusiasm about AI is not the same as readiness for AI. The research is now clear on this. And yet most organizations continue to treat adoption metrics as proof that transformation is working.

The organizations that will succeed at AI transformation are the ones that measure what's actually happening beneath the surface — the confidence, the motivation, the cognitive capacity, the organizational support structures — before they scale. Not after the plateau has already arrived.

That's the diagnostic work I do at Alpenglow Insights. If you're an AI transformation leader or consultant who wants early visibility into where adoption will break down, I'd welcome a conversation.

Dr. Wendy Rasmussen is the founder of Alpenglow Insights, where she helps organizations measure and address the psychological barriers to AI adoption. She holds a PhD in Psychological and Quantitative Foundations (Counseling Psychology) from the University of Iowa and is completing her Executive MBA at UC Berkeley Haas School of Business.

Reference: Tully, S. M., Longoni, C., & Appel, G. (2025). Lower artificial intelligence literacy predicts greater AI receptivity. Journal of Marketing, 89(5), 1–20.

Schedule a discovery conversation →

Disclosure: This article was written by Dr. Wendy Rasmussen with generative AI used as an editorial tool for grammar and clarity. All ideas, analysis, and conclusions are the author's own.
