Me on AI - In case anybody cares

My take on AI’s cognitive paradox: I explore how it can either erode our critical thinking or cause severe cognitive fatigue, and what this means for the future of adult learning.

Erika Albert

12/4/2025 | 5 min read

I love instructional videos, always have, always will. So last week I was experimenting with some AI tools to generate short videos of my ideas and blog posts, for those who don’t like to read (no judgement🙄…). And to be honest, I was quite impressed with what is already available out there. Excited about my success, I started annoying all my friends with my videos, asking for feedback. One piece of feedback I did not expect was: “Hell must be freezing over if you started using AI.”

I feel that I need to set some things straight. Unlike most people, who believe AI is a new thing, I have been working with various forms of it since around 2003, during my university studies. Back then, we were using Petri nets to optimise manufacturing processes. Then came fuzzy logic, which had already brought AI into our homes, even if nobody called it that, and most people still don’t know it is there. When I started working at Siemens VDO around 2004, computer vision was on the rise: with complex machine learning algorithms we could detect pedestrians, recognise when a driver fell asleep, and so on, all contributing to the autonomous systems we see today. One of the best use cases for AI I have seen so far is incidental findings. Imagine you get an X-ray because of a broken rib and find out you have early-stage breast cancer. An orthopedist, not being trained to detect early tumours, would clearly have missed it; AI can pick up on it in a nonchalant “just by the way…” fashion. So, to whoever might have gotten the impression that I am not an AI fan: sorry, but you are wrong. What I am not a fan of, however, is people selling automation as AI. There is a difference.

And this is where I start having a problem. Most AI gurus are still stuck on selling you mere automation (doing the same old thing, just faster) while ignoring the actual psychological impact of true AI on how we think and learn.

So what happens when we let AI do the heavy lifting? There is new research (just in print) out of Wuhan University that confirms exactly what I have suspected. Tian and Zhang (2025) found a direct link between high AI dependence and lower critical thinking skills in university students. The more you lean on the tool, the less you exercise your own analytical muscles.

But here’s the thing. This is not about being “lazy.” The study revealed a paradox that I see constantly in professional settings. If you have high information literacy, meaning you are actually good at evaluating data, using AI can protect your critical thinking skills. You don’t just accept the output. You challenge it. However, this comes at quite a high cost. The same study found that these highly literate users experienced significantly higher cognitive fatigue (Tian & Zhang, 2025). Why? Because constantly monitoring, verifying, and fact-checking a machine that sounds confident but hallucinates is exhausting.

Meanwhile, those with low information literacy face the opposite problem. They don’t get fatigued because they simply accept whatever the AI tells them without question, which means their critical thinking erodes faster. They are pretty much outsourcing their reasoning to a tool that can confidently lie (maybe even better than consultants did before). It’s a “double-edged sword”: if you have the skills to use AI safely, it might burn you out faster than doing the work yourself, but if you don’t have those skills, you won’t even realise you’re degrading your own cognitive capabilities until it’s too late. And this is where I have a problem with AI these days, more precisely with people who promote AI to people with low information literacy. We are giving up our own critical thinking and elevating ChatGPT to the level of a messiah.

But I still believe AI is a great partner for the future of Adult Education. If we stop treating AI as merely “doing more of the same faster”, we will see that it has the potential to do something that was inconceivable before. It can break down the “Iron Triangle” of education, where we had to choose between Scale, Speed, and Personalisation. By using AI for dynamic profiling and adaptive pathways, we can continuously analyse a learner’s performance to build a “living” curriculum that adjusts in real time. Instead of forcing fifty engineers into the same “Leadership 101” seminar, wasting hours for those who already know the basics, AI instantly identifies that one struggles with conflict while another struggles with delegation, and then re-skins the core material to match their specific reality. The IT manager gets a scenario about a missed deadline, the Sales Director gets one about pricing objections, and, critically, the system skips what they already know. It allows an organisation to deploy a thousand unique, role-specific learning journeys simultaneously without needing a thousand instructional designers, ensuring that professionals only spend time learning what they actually need to learn, not what was easiest to schedule for the group.
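To make the mechanics concrete, here is a deliberately tiny sketch of that “skip what they know, re-skin the rest” idea. Everything in it is hypothetical: the module names, the 0.8 mastery threshold, and the role labels are illustrative placeholders, not a real product’s API.

```python
# Hypothetical sketch of "dynamic profiling": one core curriculum,
# filtered by what each learner has already mastered and re-skinned
# to their role. All names and thresholds below are made up.

MASTERY_THRESHOLD = 0.8  # assumed cut-off: skip modules scored at or above this

CORE_MODULES = ["conflict", "delegation"]

# Role-specific framings ("skins") of the same underlying skill.
SCENARIO_SKINS = {
    "it_manager": {
        "conflict": "Your team missed a release deadline and blame is flying.",
        "delegation": "You still review every pull request yourself.",
    },
    "sales_director": {
        "conflict": "A key client is pushing back hard on pricing.",
        "delegation": "You negotiate every renewal personally.",
    },
}

def build_pathway(role: str, mastery: dict) -> list:
    """Return (module, role-specific scenario) pairs for unmastered skills only."""
    return [
        (module, SCENARIO_SKINS[role][module])
        for module in CORE_MODULES
        if mastery.get(module, 0.0) < MASTERY_THRESHOLD
    ]

# The IT manager who already handles conflict well gets only the delegation scenario:
print(build_pathway("it_manager", {"conflict": 0.9, "delegation": 0.3}))
```

The point of the sketch is the shape of the logic, not the code itself: the expensive part in real systems is estimating the mastery scores, which is exactly where the AI does the profiling work.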

But, and this connects back to the research, we must design this to manage cognitive load. Because high-quality engagement is fatiguing, these interactions should be short, focused bursts, not marathon sessions.

And finally, we cannot ignore the elephant in the room. We are seeing a flood of tools promising to “streamline your business.” But let’s be real. If you outsource your accounting to an AI agent and it hallucinates a tax deduction that doesn’t exist, who pays the fine? You do. If you let an AI bot handle your customer service and it promises a refund you can’t afford, or worse, insults a client, who loses face? You do. The AI company will point to their Terms of Service, which essentially say “for entertainment purposes only,” and you will be left holding the bag.

To wrap this up, here is my simple rule of thumb for navigating this brave new world. Trust AI for ideation; it can help break writer’s block instantly. Trust it for creating a summary, but don’t forget to verify the nuances, because it has a tendency to flatten complexity. Trust it for pattern recognition, as we did with those Petri nets. It probably does that better than we would.

But do not trust AI for final decisions. Never let it push the “send” button on anything legal, financial, or sensitive without a human review. Do not trust it for facts without sources. If it says “Studies show…”, ask “Which study?” immediately, because if it cannot tell you, it is lying. If it gives you a study that seems plausible, cross-check it on Scholar to see whether it actually exists, as that too might be made up! And do not trust it for moral or strategic judgment. It does not care about your reputation, your long-term relationships, or your ethics. It only cares about predicting the next likely word in a sentence.

So no, I am not against AI. I am against using it with rose-colored glasses on. Use it for what it is: another tool in your toolbox.

References

Tian, J., & Zhang, R. (2025). Learners’ AI dependence and critical thinking: The psychological mechanism of fatigue and the social buffering role of AI literacy. Acta Psychologica, 260, 105725. https://doi.org/10.1016/j.actpsy.2025.105725