Is technology making us dumber? The question is far from new, but it arises anew with generative AI, the tools that produce text, images or videos in response to a prompt. While this technology has been adopted by millions of users – OpenAI claims nearly 300 million monthly active users for ChatGPT – scientists from Carnegie Mellon University (Pennsylvania, United States) and Microsoft sought to understand whether these tools affect our critical thinking.
Specifically, the researchers wondered how users of ChatGPT, Gemini, Le Chat and DeepSeek exercised their critical judgment, that is, their ability to question or challenge the results generated by these conversational agents. Their conclusion is unequivocal: the more people use this technology in their work, the less they exercise critical thinking, they write in their study, published in early February.
According to the researchers, using conversational agents such as ChatGPT, Le Chat and DeepSeek is not without consequences: used incorrectly, they can “lead to the deterioration of cognitive faculties that should be preserved”, leaving our critical thinking “atrophied and unprepared”.
The more a user trusts the results generated by AI, the less they use their critical judgment
To reach this conclusion, the researchers asked 319 adults (aged 18 to 55) from different professional sectors and five countries – the United Kingdom, Poland, the United States, Canada and South Africa – to answer a series of questions. All of them used generative AI at least once a week in their work, whether to generate text, images or recommendations: 936 uses were recorded in all.
Once these uses were identified, the researchers asked the panel how much confidence they had in the results generated by the AI tools, and how confident they were in their own ability to perform the same tasks on their own, without generative AI.
As the responses came in, a pattern emerged: the more confidence a user placed in the results generated by the AI, the less they used their critical judgment. Conversely, the less confidence respondents placed in the AI, the more they trusted their own ability to assess what it generated.
This was especially true for “low-stakes” tasks deemed menial, for which users tended to be less critical. Yet these first abdications are not without risk: the researchers warn of the potential for “long-term overdependence and a decline in the ability to solve problems independently.”
In other words, the more powerful and reliable AI tools become, the more we tend to use them out of convenience, to save time, perhaps even out of laziness, to the detriment of cognitive abilities that gradually erode.
AI may deprive us of the moments when our brains “flex”
While AI giants constantly tout productivity gains and present these tools as a way to free ourselves from tedious tasks, generative AI could in practice exact a higher cost than expected: it could erode our ability to tackle complex problems when they do arise.
“One of the great ironies of automation is that by mechanizing routine tasks and leaving exception handling to the human user, you deprive the human user of the usual opportunities to exercise judgment and build cognitive muscle, leaving them atrophied and ill-prepared when exceptions arise,” the researchers write.
The study's authors do not conclude that AI should be sidelined, which is hardly surprising given that six of the seven researchers come from Microsoft, a major player in the current AI race. Rather, they argue that their findings can help AI developers design tools that emphasize opportunities for “practicing critical thinking,” “stimulating development and preventing atrophy.” In other words, AI systems should be designed to make our brains work, for example by encouraging us to question the responses they generate.