Written by Lynn Loo, MC, RCC

Let’s be honest: AI is quick and simple to use. Life can be hectic and overwhelming, and AI systems such as ChatGPT provide an immediate answer to whatever you are looking for. From instant dinner recipes to personalized fitness plans to whimsical pictures and enhanced photos to share with others, AI seems to be able to do it all.

But aside from the instant gratification, what exactly is AI and what are some other effects?

AI stands for artificial intelligence. You may not even realize that you are already using it: AI has been around since 1955 and includes systems such as Siri and Alexa. There is no doubt that it has achieved many great things. You hear stories of how a parent used ChatGPT to diagnose their child’s medical condition when no doctor had been able to catch it, or how it is being used in India to prevent elephant deaths on railway tracks. However, there are other consequences to using AI, and these affect both the individual and society as a whole.

Biological Effects

Thinking is continuous and adaptive. Studies of long-term ChatGPT users have found lower critical thinking and cognitive skills, along with weaker memory and attention, compared with people who did not use AI. Cognitive offloading reduces the need for the brain to create new neural pathways for new skills. While AI may be helpful in the short term, repeated AI use is associated with reduced neural connectivity, leading to declines in creativity, memory recall, semantic processing, and independent critical thinking.

Ethical Concerns

AI systems are essentially data-mining centres. Every time you upload information into ChatGPT, it accumulates and stores your data in these centres for future use. While some legal AI tools are built on existing legal databases to ensure more accurate information, not all platforms have the right information; many simply predict answers, often with incorrect results. AI prioritizes user engagement through validation and mirroring, not therapeutic intervention or reality. For those who turn to ChatGPT for advice, it is designed to tell you what you want to hear. This confirmation bias is not only dangerous in theory; in cases of what is now referred to as ‘AI psychosis’, it can have serious real-life consequences. In some examples, individuals have reported falling in love with AI chatbots or becoming paranoid to the point of hospitalization.

Therapy is founded on ethical frameworks, safety, and the therapeutic relationship between client and therapist. Therapists are bound by regulations and therefore accountability, offer genuine empathy and witnessing, and are trained to handle complex, deep-rooted trauma. While AI may mimic empathy, it lacks human intuition and the ability to pick up on nuanced emotional cues and body language. In other words, it lacks the relational therapeutic depth necessary for long-term, deep healing. Furthermore, AI systems are trained on user data, leading to very real concerns around privacy, theft of work, and confidentiality. Once your data is uploaded, it is out of your hands.

Environmental Costs

Many people may not realize it, but the data centres used for AI require large amounts of resources such as land, minerals, and water. Not only are data centres built mainly in marginalized communities (further compounding social inequalities and disrupting ways of living), but the microchips that power AI require rare earth elements, which are often mined in environmentally destructive ways. They also produce electronic waste containing hazardous substances such as mercury and lead. Another example of environmental impact: one request made in ChatGPT consumes roughly 10 times the electricity of a Google search, and roughly one bottle of water. According to the International Energy Agency, data centres around the world consumed around 140 billion litres of water for cooling in 2023 alone. With only 3 percent of Earth’s water being freshwater, this is alarming for long-term sustainability.


There is no doubt that AI has much potential. In the end, it comes down to two questions: Is ethical AI possible, and how will you choose to connect with it?


References:

Holohan, M. (2023, September 12). A boy saw 17 doctors over 3 years for chronic pain. ChatGPT found the diagnosis. TODAY. https://www.today.com/health/mom-chatgpt-diagnosis-pain-rcna101843

Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies. https://doi.org/10.3390/soc15010006

Prasanth, S. (2024, May 20). Tamil Nadu: These elephants are dying on rail tracks – can AI save them? BBC. https://www.bbc.com/news/world-asia-india-68988189

Sobowale, J. (2024, April 16). Generative AI in law: The good, the bad and the ugly. CBA National Magazine. https://nationalmagazine.ca/en-ca/articles/law/in-depth/2024/generative-ai-in-law-the-good,-the-bad-and-the-ugly

UN Environment Programme. (2025, November 13). AI has an environmental problem. Here’s what the world can do about it. https://www.unep.org/news-and-stories/story/ai-has-environmental-problem-heres-what-world-can-do-about

Wei, M. (2025, November 27). The emerging problem of “AI psychosis.” Psychology Today. https://www.psychologytoday.com/ca/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis