There is a version of AI adoption that looks straightforward when you are planning it in a boardroom. You identify the use cases, you run a pilot, you demonstrate ROI, and then you roll it out. The playbook is clean. The slides look good. And then you meet the organisation.
I have been part of AI and automation programmes in several environments. The one that taught me the most -- the one that forced me to rethink almost everything I thought I knew about change management -- was a retail operation with just over 40,000 employees across a large network of stores, distribution centres, and support functions.
Scale changes the problem. Not just in volume, but in kind.
The fear is not irrational
The first thing I learned is that the fear your workforce has about AI is not irrational, and it is a mistake to treat it as though it is. A 25-year-old in a head office analytics role who is excited about AI copilots and a 47-year-old who has worked in the same distribution function for 18 years have genuinely different stakes in this conversation. Treating their concerns as equivalent, or worse, dismissing the latter's concerns as resistance to change, destroys the trust you need to make adoption work.
What we found, consistently, was that the employees most resistant to AI were not resistant to technology in general. They were resistant to uncertainty. They did not know what AI adoption meant for their role, their team, or their future with the organisation -- and nobody had told them in terms they could believe.
"We will communicate the vision" is not the same as answering the question: what does this mean for me? That question has to be answered honestly, at the individual and team level, before adoption can progress. At 40,000 people, that is an enormous undertaking. It is also non-negotiable.
Middle management is the critical layer
In a workforce of that scale, the people who determine whether AI adoption succeeds are not the senior leaders who sponsor the programme, and they are not the employees who will eventually use the tools. They are the layer in between -- the floor managers, the team leaders, the people who handle the daily operational rhythm that the programme is trying to change.
We made the mistake early on of designing the programme for two audiences: executives and frontline employees. Middle management was treated as a communication channel, not as a stakeholder group with their own concerns and their own stake in the outcome. That was an error.
A floor manager who does not understand the AI tools their team is being asked to use -- who cannot answer the questions their team will bring to them, and who privately has their own concerns about what this programme means for their role -- is not a neutral actor. They become a passive or active obstacle to adoption, often without meaning to.
Once we redesigned the programme to treat middle management as the primary change management audience -- giving them deeper briefings, involving them in the design of the rollout, and explicitly addressing their concerns before those of the broader workforce -- the pace of adoption changed meaningfully.
Pilots are necessary but insufficient
Every AI adoption programme runs pilots. Ours did too, and they were genuinely useful. But a pilot in three stores tells you almost nothing about what a rollout to 400 stores will look like. The stores you choose for a pilot are not representative. The teams involved are typically more engaged, more willing, and more closely managed than the average. The conditions that made your pilot work will not automatically replicate.
What we should have done, and eventually did, was design the rollout as a learning programme, not a deployment programme. Each successive wave of stores was designed to generate insight that improved the next wave. That requires a different mindset from a classic technology rollout, and it requires honest leadership that is willing to say "we learned something in wave two that we are changing for wave three." Most organisations find that difficult.
The data quality problem is worse than you expect
This is the lesson that nobody in a vendor demo ever mentions. AI adoption at scale surfaces data quality problems that you did not know you had, because the systems you are building on had never been asked to perform at this level before. In a retail environment, that manifests as product data inconsistencies, store-level configuration variations, and historical records that reflect how work was done five years ago rather than how it is done now.
Building your AI programme on that foundation is possible, but it requires honest discovery upfront. The organisations that skip that step spend months in remediation after launch. The organisations that do it properly -- even when it is uncomfortable and takes longer than anyone wanted -- come out the other side with AI tools that actually work in production.
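To make the idea of "honest discovery" concrete, here is a minimal sketch of the kind of automated profiling pass such a discovery phase might start with. The record fields and issue categories are illustrative assumptions, not details from the programme described above; a real audit would cover far more dimensions.

```python
from collections import Counter

# Hypothetical product records merged from two store systems.
# Field names ("sku", "unit", "price") are illustrative only.
records = [
    {"sku": "A100", "unit": "each", "price": 2.49},
    {"sku": "A100", "unit": "EA", "price": 2.49},    # same SKU, inconsistent unit label
    {"sku": "B200", "unit": "each", "price": None},  # missing price
    {"sku": "C300", "unit": "kg", "price": 5.00},
]

def profile(records):
    """Count basic inconsistencies an upfront discovery pass would flag."""
    issues = Counter()
    units_by_sku = {}
    for r in records:
        if r["price"] is None:
            issues["missing_price"] += 1
        # Collect every unit label seen for each SKU (case-insensitive).
        units_by_sku.setdefault(r["sku"], set()).add(r["unit"].lower())
    # A SKU with more than one unit label is internally inconsistent.
    issues["inconsistent_units"] = sum(
        1 for units in units_by_sku.values() if len(units) > 1
    )
    return dict(issues)

print(profile(records))  # → {'missing_price': 1, 'inconsistent_units': 1}
```

The point of a pass like this is not the code, which is trivial, but the conversation it forces: the counts it produces are the honest baseline that the remediation plan, and the launch timeline, should be built on.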
What actually made it work
At the end of a programme like this, you are looking for the factors that made the difference. In our case, the things that mattered most were not the technology choices. They were: leadership that communicated honestly and consistently, even when the news was not good; a middle management community that was treated as a genuine partner in the programme; a willingness to slow down when waves were not working as expected rather than pushing through; and a workforce that, when they understood what was actually being asked of them, mostly wanted to make it work.
The technology was important. But it was not where the programme succeeded or failed. That is the lesson from 40,000 people. And it applies at every scale.