When AI Becomes a Colleague: What CHROs Must Get Right Before It’s Too Late

What's Inside

AI is reshaping the workplace — but the real challenge isn’t the technology. It’s whether your culture, leadership, and people strategy are ready to absorb it. This blog unpacks how leaders from the BBC, Allianz, Repsol, and the University of St. Gallen are building human-centred AI cultures, scaling adoption with rigour, and rethinking what engagement looks like when algorithms become teammates.

Most organisations now accept that artificial intelligence will reshape work. Far fewer have grappled with a harder question: what happens to culture, identity, and collaboration when AI stops being a tool people use and starts being a presence people work alongside? That distinction—between deploying a technology and integrating AI as a colleague—is where many HR strategies quietly fall apart.

The gap is not one of adoption. Generative AI tools are spreading rapidly through functions from recruitment to finance. The real gap is organisational. Roles are shifting beneath people’s feet. Decision-making authority is blurring. Employees are asking themselves, sometimes consciously and sometimes not, what their contribution actually means when an algorithm can draft, analyse, and recommend faster than they can. For CHROs, this is no longer a technology question. It is a question about the psychological and structural foundations of work itself. 

Why the boardroom can no longer treat AI as an IT initiative

When generative AI entered the mainstream, most leadership teams framed it as an efficiency play—automate routine tasks, reduce costs, accelerate output. That framing was useful at the start. It is now dangerously incomplete. 

The business impact of AI adoption increasingly depends on factors that sit squarely in the CHRO’s domain: employee trust, capability readiness, managerial confidence, and the resilience of organisational culture under sustained change. Companies that treat AI as a procurement decision handed to IT are discovering that adoption stalls, engagement erodes, and the promised productivity gains fail to materialise at scale. 

Angelika Inglsperger, Group Head of People and Strategy at Allianz, has framed this as a question of strategic enablement. Her team’s approach—centred on leadership enablement, structured employee upskilling, and the deliberate establishment of AI champions across the business—reflects a growing conviction that AI adoption is an enterprise capability to be built, not a tool to be rolled out. At Allianz, that logic has extended into HR’s own operations, including a global deployment of AI in talent acquisition that has served as both a proving ground and a credibility signal to the wider organisation. 

At board level, the conversation is shifting in a similar direction. Directors want to know not just what AI can do, but whether the organisation is ready to absorb it—whether middle managers can lead through ambiguity, whether teams can adapt their workflows without losing cohesion, and whether the company’s talent strategy accounts for the fact that many roles will look fundamentally different within two to three years. These are workforce questions, not technology questions. And the CHRO who cannot answer them with clarity and evidence is losing strategic ground. 

From anxiety to agency: the culture question underneath the technology 

One of the most underestimated barriers to AI adoption is not technical resistance but emotional resistance—a pervasive, often unspoken anxiety that the technology is a prelude to displacement. Where this anxiety takes root, adoption stalls regardless of how capable the tools are. 

Uzair Qadeer, Chief People Officer at the BBC, has been particularly direct about this challenge. He argues that organisations need to move beyond what he calls “AI anxiety” and build cultures where people feel empowered, not replaced. In his view, the unlock is not better technology but better leadership—leaders who model trust, create space for experimentation, and invest in the human behaviours that allow AI to amplify creativity and accelerate learning rather than diminish confidence. His vision is one where technology helps people do the best work of their lives, but only if the cultural conditions are deliberately created to support that outcome. 

This resonates with a pattern visible across industries. Organisations that have moved beyond pilot programmes report a consistent finding: adoption scales only when employees trust that AI is being introduced to augment their work, not to make them redundant. This trust cannot be manufactured through a communications campaign. It requires visible leadership commitment, transparent governance, and genuine opportunities for people to shape how AI is used in their own workflows. Where trust is absent, employees engage in quiet resistance—using AI superficially or not at all, while telling managers what they want to hear. 

Measuring what matters: from pilot enthusiasm to proven impact

Too many organisations are flying blind on AI impact. They know they have deployed tools. They can count licences and logins. But they cannot quantify the effect on efficiency, quality, or employee experience with any real confidence. Without that evidence base, AI strategy becomes a series of hunches—and CHROs lose the ability to make a credible case at board level for what is working and what is not. 

Guille Lorbada, Head of New Ways of Working at Repsol, has tackled this gap head-on. His team has conducted one of the most rigorous large-scale studies of generative AI impact in a corporate setting, moving from pilot to enterprise adoption through a disciplined measurement approach that combines controlled experiments, employee surveys, qualitative research, and granular usage data. The results have shown measurable improvements across efficiency, output quality, and employee experience—but just as importantly, the process has surfaced the strategic conditions needed to turn AI from a promising experiment into sustained organisational transformation. It is the kind of evidence-led rigour that most HR functions still lack, and that boards increasingly expect. 

The lesson is not that every company needs Repsol’s exact methodology. It is that without a serious commitment to measuring AI’s impact on how people work—not just whether they use the tools—organisations are making consequential bets in the dark. 

When algorithms become teammates: the identity and collaboration challenge

As AI takes on tasks that were previously core to certain roles—drafting analysis, summarising data, generating first-pass creative work—employees are confronting an uncomfortable question about professional identity. If the task that defined my expertise is now handled by a machine, what is my value? This is not a soft concern. It directly affects motivation, retention, and the willingness of high performers to invest in their own development. 

Most team structures and performance frameworks were designed for human-only collaboration. Adding AI into daily workflows creates new tensions: who is accountable when a decision is informed by an algorithmic recommendation? How should managers evaluate work that was substantially assisted by AI? What does “high performance” mean when output quality depends partly on how skilfully someone prompts a model? 

Heidi Friedrichs, Director of Organizational Evolution in Executive Education at the University of St. Gallen, has been exploring precisely these questions at the intersection of research and practice. Her work examines what happens when AI moves from background utility to active participant in daily work—how it reshapes the way employees understand their roles and contributions, what engagement looks like when algorithms share the workload, and how organisations can ensure that human judgment and creativity remain central rather than peripheral. These are the questions most organisations have not yet structured an answer to, even as the reality overtakes them. 

The capability gap compounds the problem. Early adopters are pulling ahead. Employees and teams that embraced AI tools early are becoming significantly more productive, while those who hesitated are falling further behind. This is creating a new form of inequality inside organisations—one that cuts across seniority, function, and geography. Without deliberate intervention, the gap compounds. And it is the CHRO, not the CTO, who must design the learning architecture to close it. 

What CHROs should be reconsidering now 

The implications are both strategic and operational. Several areas deserve particular scrutiny. 

Leadership development needs a new chapter. The emerging reality asks leaders to manage human-AI systems—to make judgment calls about when to rely on algorithmic output, when to override it, and how to maintain team confidence through a period of deep role ambiguity. As Dave Ulrich has put it in a recent post: “AI algorithms access information and improve efficiencies, yet human ingenuity (HI) remains paramount for progress toward impact.” Leadership enablement for AI transformation is not a module to add to an existing curriculum. It requires rethinking what leadership competence means in an augmented environment.

Upskilling must be embedded, not bolted on. The organisations seeing the strongest adoption results are those embedding learning into the flow of work—creating AI champions within teams, building peer-to-peer learning networks, and designing work itself so that AI use is a natural part of task completion rather than a separate skill to acquire. In many jobs, AI will not replace employees—employees with AI skills will replace those without them.

Employee experience must be redesigned for new realities. Traditional employee surveys were not designed to capture the nuanced ways AI is affecting how people experience their work. CHROs need to develop new listening mechanisms that detect shifts in autonomy, purpose, and belonging. The question is no longer “are employees engaged?” but “do employees trust that their judgment and creativity still matter?”

The HR function itself must go first. Artificial intelligence offers a tremendous opportunity to rethink HR operating models as part of the evolving HR agenda. Global rollouts of AI in talent acquisition, workforce planning, and people analytics are already demonstrating what is possible when HR uses its own domain as a proving ground. But this requires honest assessment of where AI adds genuine value in HR processes and where it introduces risks—particularly around fairness, bias, and the quality of human decisions that follow from algorithmic inputs.

Continuing the conversation in Porto

These themes—trust, capability, measurement, identity, and the evolving nature of human-AI collaboration—will be explored further during a dedicated elective track at the HR World Summit in Porto. The track, AI in the Office: Friend, Foe, or Fellow Teammate?, brings together senior HR leaders and researchers to exchange perspectives on the practical and strategic challenges of integrating AI into the fabric of organisational life. For CHROs navigating these questions in real time, it offers a focused space to test assumptions, compare approaches, and sharpen their thinking on what may be the defining workforce challenge of the next decade.