The hypocrisy of L&D and AI in corporate learning

Corporate L&D is going “AI‑enhanced”, and it’s not just changing training; it’s quietly outsourcing responsibility for real learning. This post unpacks the double standard behind AI in employee development, why “learning” is turning into checkbox compliance, and what gets lost when organizations replace human sense‑making with automated content.

AI · LEARNING AND DEVELOPMENT

Erika Albert

12/29/2025 · 5 min read

So the last year has not been fun for me and my peers. Not that working in adult education has ever been less than a high-intensity rollercoaster, but this year was special. While employee training was always the first thing to go down the chute whenever budgets got tight, this year it was taken a step further. Apparently most companies signed up for the $19.99/month “AI enhanced” employee learning experience, turning most of us into a spreadsheet line of “additional costs saved”.

Trying to keep up with the developments and experiences around AI “enhanced” learning, I came across this Reddit gem:
“I administrate an ongoing learning programme for a large group (300+) of professionals working across the country. In the last year I have seen a huge increase in the amount of AI generated entries in their records. They are supposed to identify personal learning objectives each year, and these are increasingly just generic bullet pointed lists from AI LLM tools. Offloading the task of actually thinking about what they want to achieve to AI, renders the whole exercise pointless in my opinion. (…) I don’t know what, if anything, we could do about this. Generally, I am finding more and more that people are openly resistant to the idea that they should not use AI for certain things. It seems like not that long ago everyone was in agreement that this was not how AI should be used, and rapidly that has changed to ‘Why shouldn’t I?’.”

This post shows a classic case of hypocrisy, along the lines of “do as I say, not as I do…”, that we are already used to in most corporate environments. But let’s dive a bit deeper into the theory behind this dissonance.

What does moral hypocrisy (Batson et al., 1999), or as we should call it in this case, institutional hypocrisy, mean? It is the situation where authority figures (in this case HR, L&D leads, etc.) condemn a behaviour in others while engaging in the same behaviour themselves. Batson et al. (1999) concluded that moral behaviour is, more often than not, driven less by altruistic concern for others than by self-interest mixed with self-deception. People (and institutions, for that matter) frequently advance their own selfish interests with minimal moral compliance, doing the bare minimum to feel moral without truly sacrificing personal benefit. Translated to the case above: they offer employees the fantastic opportunity to watch talking-head videos all day, because investing in your employees’ development is simply the right thing to do, while reducing the effort put into the learning journey to the bare minimum. Employees just have to align and keep being “competitive” in this great big AI marathon we are all running.

Now imagine you’re the employee trapped in this funhouse. You’re sitting through endless town-hall meetings where leadership is erecting statues to AI, rolling out module after module of AI-enhanced whatevers that are supposed to make your work more efficient, but then somehow, when you start using it to make your own life easier, you are frowned upon? Cognitive dissonance hits like a freight train (Festinger, 1957). Your brain juggles two realities: one where AI is a lifesaver for thinking and churning out work, the other where the same institution brands it cheating. This mental friction demands an immediate resolution, so most people will just rationalise it: “if it’s good enough for them, it’s good enough for me!” Or worse, and even more honest on the employee side: “this is just a tick in a box for them, why should I waste my cognitive resources on feedback nobody ever reads or cares about?” When your learning journey is nothing more than a spreadsheet entry and true learning is not prioritised, this type of rationalising is not laziness but straight-up logic in an environment where the rules reek of hypocrisy.

Now, we’ve known for quite a while that cognitive offloading is a thing. Risko and Gilbert (2016) describe it as using actions and resources in the world to reduce internal cognitive demands: pretty much moving thinking out of the head and into external infrastructure so the brain can do less work. While everyone’s pointing fingers at employees for “offloading their thinking,” the organisation is doing the exact same thing, just with a better budget and a nicer UX, calling it a strategic turning point in the new AI-assisted reality instead of what it is: an attempt to offload the cognitive and moral burden of actually developing people. And developing people is not “delivering content,” not “tracking completions,” not “generating a feedback report,” but doing the hard, expensive, deeply human work of deciding what the organisation needs to learn, where it is failing, what patterns keep repeating, and what should change. That work is messy. It requires effort, judgement, conflict, and addressing the uncomfortable parts of the organisation where the real knowledge lives: in its failures, the workarounds, the quick fixes, the quiet compromises, and the stuff that never made it into any shiny “lessons learned” document.

Risko and Gilbert also make a point that should be printed as a warning label on every AI procurement request: offloading isn’t just a neutral shortcut. It has downstream consequences, and it has the potential to reshape cognition: what gets remembered, what gets put into practice in everyday work, and what gets reinforced (Risko & Gilbert, 2016). They describe how offloading can shift memory from “what” to “where”: I don’t need to know the information anymore, I just need to know where to find it. And that’s already how many companies treat learning. Not as a way to build capability in employees, but as a lexicon you subscribe to. Need leadership? Click module. Need ethics? Click module. Need difficult conversations? Click module. The organisation stops developing internal capacity and starts developing retrieval habits.

And this slowly develops into a self-reinforcing drift in knowledge, from intrinsic to extrinsic. Once external tools are in place, reliance increases; internal confidence and internal capability decline, and you become even more dependent on the external aid next time (Risko & Gilbert, 2016). Just as I have become unable to drive without my GPS, the confidence of employees in their own knowledge and abilities slowly erodes with every procurement loop of the next best thing in “assisted” learning. Facilitation becomes content delivery. Development becomes consumption. And when sh!t hits the fan, as it tends to do, most organisations discover they weren’t really learning. They’ve been streaming.

So we need to be careful. Risko and Gilbert (2016) explicitly point out that offloading can be beneficial in education when what’s being offloaded is unnecessary to the learning goal. In other words: offload the admin, not the thinking. Offload formatting, not reflection. Offload repetition, not judgment. The moment the organisation offloads the very things that make learning real (the sense-making, the uncomfortable dialogue, accountability, practice, feedback that actually changes something), it hasn’t “scaled learning.” It has scaled the appearance of learning.

References

Batson, C. D., Thompson, E. R., Seuferling, G., Whitney, H., & Strongman, J. A. (1999). Moral hypocrisy: Appearing moral to oneself without being so. Journal of Personality and Social Psychology, 77(3), 525–537. https://doi.org/10.1037/0022-3514.77.3.525

Festinger, L. (1957). A theory of cognitive dissonance. Stanford University Press.

Risko, E. F., & Gilbert, S. J. (2016). Cognitive offloading. Trends in Cognitive Sciences, 20(9), 676–688. https://doi.org/10.1016/j.tics.2016.07.002