Changing the TUNE: Reframing the role of Change Agents in AI Adoption

Laura Ansloos

April 30, 2025

The workers aren’t ready

We’ve noticed that, while organisations are eager to embrace the potential of AI, actual employee uptake tells a different story. According to Gallup’s latest study on AI adoption, 93% of Fortune 500 CHROs say their organisation has begun using AI tools and technologies to improve business practices. However, most employees remain unaware of these efforts - only 33% of surveyed employees say their organisation has begun integrating AI into their business practices.

This gap between appetite and uptake shows that AI adoption isn't just about making tools available - it's about embedding AI into how people perceive and approach their work. AI is fundamentally different from the technologies we're used to working with, which means traditional L&D or change strategies will struggle to help employees get to grips with AI's transformative brilliance, its unpredictability, and the tsunami of risk, opportunity and emotion this creates.

Many AI adoption strategies focus on building employee knowledge of how to use AI tools (e.g. prompt parties). However, sustainable AI adoption requires a broader appreciation of how AI is reshaping organisations and the nature of work itself. These changes bring wide-ranging implications for an individual’s sense of identity, ethical stance and motivation. Simply knowing how and when to use AI does nothing to address these deeper human concerns.

Turn and face the change

One approach organisations take to move beyond training is the Change Agent model - identifying small but highly influential groups of employees who can personally advocate for AI's benefits.

Our clients often instinctively recognise that influential people can accelerate adoption. Human behaviour research supports this: we naturally look to others for cues on how to behave, and this extends to individual influencers. The "messenger effect" describes our tendency to trust and follow the behaviours of people we perceive as credible authorities. This means we are more likely to engage with AI when we observe respected colleagues leading the way.

Similarly, research on trending minority norms suggests that even if only a small group of people are engaging in a behaviour, others are likely to follow suit if that number is visibly increasing. Taken together, this means that Change Agents, when carefully selected and properly supported, can play a powerful role in new technology adoption.

Despite this, what we often hear from our clients is:

“They started out with energy and enthusiasm but fizzled out over time.”

“They’re often met with scepticism and raised eyebrows, struggling to make a lasting impact.”

So, if evidence suggests Change Agents can play an important role in AI adoption, why might they fail to gain traction?  

From cheerleaders to challenge navigators

The answer is that while organisations understand the value of influential messengers, they don’t always understand what actually makes them effective or how to optimise their role.

Change Agents are often positioned as AI evangelists, tasked with generating enthusiasm rather than addressing real barriers to adoption. And this is where the challenge arises. Successful Change Agents don't just advocate for AI - they help employees overcome the real and perceived barriers that make adoption feel difficult. This means we need to reimagine the role of Change Agents - not as AI cheerleaders, but as challenge navigators who support employees over adoption obstacles.

Changing your TUNE

Research suggests several key factors that influence intention to adopt AI, particularly in its early stages.  

Drawing on this, we’ve developed TUNE - a memorable, evidence-based framework that helps Change Agents align their efforts with the core psychological and behavioural drivers of AI uptake. TUNE stands for:

  • Trusted – Trust is uniquely critical to AI adoption and is built across a number of dimensions: functional reliability, ethical assurances and oversight, and human-like connection (Gillespie et al., 2023; Bedué & Fritzsche, 2022). To trust AI, employees need reliable outputs, transparency about how AI works, and proper safeguards for data privacy and bias. In addition, AI is distinct from other technologies in that trust is also built through human-like factors: a sense that it works with good intentions and in the user's best interests. However, employees should not trust AI blindly, as this can erode critical thinking. Employees must trust AI enough to value it, but remain critical enough to question its outputs.
  • Useful – AI must solve real problems for employees and provide clear, individual benefits. However, those benefits aren't limited to efficiency or productivity gains. Linking AI skills to career pathways, for example, can also highlight the usefulness of developing skills in working with new technologies. And don't forget that "losses loom larger than gains" - what would an employee stand to lose (reputation, career potential, professional standing?) if they don't start integrating AI responsibly into their day-to-day?
  • Normalised – We're social beasts and are more likely to adopt AI when we see others we admire, like and trust doing so. If reaching for an AI tool feels "icky" or out of kilter with professional standards or peer norms, we are unlikely to want to go against the grain.
  • Easy – AI adoption should feel natural and seamlessly integrated into workflows. If AI tools feel clunky or unintuitive, employees won’t engage. Also, as impressive as AI tools are, their unpredictable and emergent nature introduces risk. If employees feel that failure or experimentation is discouraged, they may perceive the risks of using AI as greater than its benefits - making adoption a much harder choice.

When designing Change Agent programmes, organisations can use TUNE to shape the small, visible, everyday actions Change Agents take to influence the psychosocial factors of adoption. An influence strategy that follows TUNE targets efforts at real behavioural drivers - not just at "selling" AI. It also clarifies the Change Agent's role, making it easier for them to take meaningful action.

For example, take Trusted. As highlighted earlier, trust is critical to AI adoption and is built across a number of dimensions. Employees must trust AI enough to value it, but remain critical and "in the loop" enough to interrogate and influence its output. Here are examples of how Change Agents can address trust and critical thinking when working with individuals or groups:

  • Working at an individual level: Meet 1:1 with a senior leader to walk through workflows where AI could be transformational - explaining its logic, sharing opportunities and limitations, and encouraging reflection on when and where human oversight is necessary.
  • Working at a group level: Host "The Human in the Loop" team sessions where the agent is equipped with techniques to support critical thinking when assessing AI-generated outputs. These sessions might include case-based challenges, "what would you ask next?" scenarios, or debate-style simulations to help teams practise judgement and sharper questioning.

Trying to drive AI adoption in your organisation?

We help organisations move beyond training sessions and awareness campaigns to focus on real behavioural change. By designing AI adoption strategies that are human-centred, evidence-based, and focused on impact, we help businesses integrate AI in ways that truly work for their people.

Contact us to learn more about how we can support your AI adoption journey.
