Who Gets Left Behind When AI Moves Fast? The Case for Standardized Readiness Measurement

The conversation about AI and the future of work has largely been shaped by two narratives. The first is optimistic: AI will augment human capabilities, create new categories of work, and drive productivity gains that benefit everyone. The second is cautionary: AI will displace workers, concentrate economic value, and deepen existing inequities.

Both narratives tend to treat "workers" as a monolith. But the reality on the ground is far more nuanced — and the differences between who thrives in an AI-transformed workplace and who falls behind are not primarily about technical skill. They're about psychology, organizational context, and the conditions that determine whether someone can absorb and sustain a fundamentally new way of working.

We don't yet have a standardized way to measure those conditions across organizations. And without measurement, we can't see the inequities forming until they've already compounded.

The Readiness Divide Is Already Here

Research is beginning to surface a pattern that should concern anyone invested in equitable workforce outcomes. A 2025 study in the Journal of Marketing found that people with lower AI literacy are paradoxically more receptive to AI — driven not by understanding but by a sense of awe at what feels like "magical" technology. At the same time, these individuals hold greater fear about AI's societal impact and perceive AI as less capable.

That paradox — high enthusiasm, low literacy, high anxiety — describes a vulnerability profile, not a readiness profile. And it maps disproportionately onto the workers who are already at the margins of organizational power and resources.

Separately, emerging research suggests that women use generative AI at significantly lower rates than men — not because of disinterest, but because of trust gaps and, critically, competence perception penalties. Women who do use AI tools in professional contexts risk being perceived as less competent by their peers and managers. When the penalty for adopting is reputational damage, non-adoption becomes a rational strategy — one that widens the capability gap over time.

These are not problems that more training will solve. They are psychological and systemic barriers that require different measurement and different intervention.

The Factors That Actually Predict Adoption

The research on technology adoption, organizational change, and behavioral readiness converges on a set of conditions that matter far more than technical skill when it comes to whether someone will adopt AI productively:

Psychological readiness: the degree to which someone feels threatened by AI — in their competence, their professional identity, their sense of value at work — and the degree to which they believe they can learn and succeed.

Perceived usefulness: whether someone believes AI is relevant to their actual work, not as an abstract efficiency promise but as a concrete tool that connects to outcomes they care about.

Cognitive capacity: whether someone has the bandwidth — the time, the mental space, the freedom from competing demands — to absorb a fundamentally new way of working.

Organizational enablement: whether the system around the individual supports adoption — managerial capability, psychological safety, clear expectations, protected learning time, and visible leadership commitment.

These are the factors that predict sustained adoption. They are also the factors most likely to vary along the same lines as existing workforce inequities: by role, by seniority, by department, by access to organizational resources, and by the degree to which workers feel psychologically safe experimenting with new tools.

If we want to understand who is being left behind — and intervene before the gap widens — we need to measure these conditions directly.
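To make the measurement target concrete, here is a minimal sketch of how these four conditions might be represented as scored survey domains. The domain names mirror the list above, but the item counts, the 1-5 scale, the sample responses, and the barrier logic are all illustrative assumptions, not any published instrument.

```python
# A minimal sketch: the four readiness conditions as scored survey domains.
# Item counts, the 1-5 Likert scale, and the sample responses are assumed
# for illustration; they are not an actual diagnostic.

from dataclasses import dataclass, field
from statistics import mean


@dataclass
class DomainScore:
    """Mean of one respondent's 1-5 Likert answers for a readiness domain."""
    name: str
    item_responses: list[int] = field(default_factory=list)

    @property
    def score(self) -> float:
        return mean(self.item_responses)


# One hypothetical respondent; each list holds that domain's item answers.
respondent = [
    DomainScore("psychological readiness",   [2, 3, 2]),  # threat, confidence
    DomainScore("perceived usefulness",      [4, 4, 5]),  # relevance to real work
    DomainScore("cognitive capacity",        [2, 1, 2]),  # time and bandwidth
    DomainScore("organizational enablement", [3, 3, 4]),  # support around them
]

# The lowest-scoring domain marks the primary barrier -- here, bandwidth,
# even though this person finds AI genuinely useful.
primary_barrier = min(respondent, key=lambda d: d.score)
print(f"Primary barrier: {primary_barrier.name} ({primary_barrier.score:.1f}/5)")
```

The design point is that the profile matters more than any overall average: a respondent can score high on perceived usefulness and still be blocked by low cognitive capacity, and those two situations call for entirely different interventions.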

The Case for a Standardized Field Measure

Right now, most organizations assessing their AI readiness rely on one of two approaches: enterprise engagement surveys that weren't designed for this purpose, or bespoke consulting assessments that produce insights for a single organization but can't be compared across contexts.

Neither approach serves the needs of a field trying to understand systemic patterns. We lack a common language and a common instrument for measuring the conditions that predict AI adoption success — the psychological, motivational, cognitive, and organizational factors that determine whether a workforce can absorb AI-enabled change.

A standardized readiness diagnostic — one that could be deployed across the portfolio organizations of a major funder, across sectors, across populations — would make several things possible that are not possible today.

Benchmarking: understanding how readiness varies across organizations, sectors, and worker populations, and identifying where the greatest risk concentrations exist.

Targeting: directing intervention resources not just at organizations with the lowest technical skills, but at organizations where psychological barriers, cognitive overload, or organizational enablement failures are most likely to derail adoption.

Longitudinal tracking: measuring whether interventions are actually moving the needle on the conditions that matter, rather than relying on proxy metrics like training completion or tool activation rates.

Field-level learning: building a shared evidence base about what drives equitable AI adoption, what intervention strategies work for which populations, and where systemic support is most needed.

What This Looks Like in Practice

At Alpenglow Insights, I've developed a diagnostic tool — AIRE (AI Readiness and Enablement) — that measures the four domains most predictive of AI adoption outcomes: psychological readiness (threat and confidence), motivation and perceived usefulness, cognitive capacity and bandwidth, and organizational enablement. The assessment produces a department-level risk heat map, identifies primary barriers, and generates an actionable intervention roadmap.

The instrument is designed for organizational deployment, but its architecture is built to scale. The same assessment can be administered across multiple organizations to produce cross-organizational comparisons and population-level insights. For a philanthropy seeking to understand the readiness landscape across its portfolio — or to track the impact of its workforce investments over time — a standardized diagnostic creates the evidence infrastructure that currently doesn't exist.
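To illustrate the aggregation step, here is a simplified sketch of how individual domain scores could roll up into the kind of department-level risk view described above. The department names, scores, and the risk threshold are hypothetical placeholders, not AIRE's actual scoring rules.

```python
# A simplified sketch of rolling individual domain scores up into a
# department-by-domain risk view. Departments, scores, and the 2.5
# threshold are hypothetical; this is not the AIRE implementation.

from collections import defaultdict
from statistics import mean

# (department, domain) -> individual respondent scores on a 1-5 scale
raw: dict[tuple[str, str], list[float]] = {
    ("Operations", "psychological readiness"): [2.1, 2.4, 1.9],
    ("Operations", "cognitive capacity"):      [1.8, 2.0, 2.2],
    ("Finance",    "psychological readiness"): [3.9, 4.1, 3.7],
    ("Finance",    "cognitive capacity"):      [3.5, 3.8, 4.0],
}

RISK_THRESHOLD = 2.5  # assumed cut point; cells below it are flagged

heat_map: dict[str, dict[str, float]] = defaultdict(dict)
for (dept, domain), scores in raw.items():
    heat_map[dept][domain] = round(mean(scores), 2)

for dept, domains in heat_map.items():
    for domain, score in domains.items():
        flag = "AT RISK" if score < RISK_THRESHOLD else "ok"
        print(f"{dept:<12} {domain:<26} {score:>4}  {flag}")
```

Because every cell is computed the same way, the same table can be produced for one organization's departments or for an entire portfolio of organizations, which is what makes cross-context comparison possible.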

An Invitation

The future of work will not be shaped only by the capabilities of AI systems. It will be shaped by the conditions under which workers encounter those systems — the confidence, the support, the cognitive bandwidth, the organizational clarity that determine whether AI transformation lifts people up or leaves them behind.

Measuring those conditions is not just a diagnostic exercise. It's an equity imperative. And it's work that is best done early, before the patterns we're trying to prevent have already taken root.

If you're a funder, researcher, or policymaker working on the future of work and interested in what standardized AI readiness measurement could look like across a portfolio of organizations, I’d welcome the conversation.

Dr. Wendy Rasmussen is the founder of Alpenglow Insights, where she helps organizations measure and address the psychological barriers to AI adoption. She holds a PhD in Psychological and Quantitative Foundations (Counseling Psychology) from the University of Iowa, served as a Navy psychologist, and is completing her Executive MBA at UC Berkeley Haas School of Business.

Schedule a conversation →

Disclosure: This article was written by Dr. Wendy Rasmussen with generative AI used as an editorial tool for grammar and clarity. All ideas, analysis, and conclusions are the author's own.
