The Curious Case of Gen Z and the Inexorable (?) Rise of AI in Healthcare Information

Why you shouldn’t rely on ChatGPT

Right now, AI applications seem to be everywhere — although not all AIs are created equal, and each has its own specialism. One of the best-known types is the large language model (LLM), such as ChatGPT, Claude, Gemini, and Grok, among others. These systems are trained on immense amounts of data, which allows them to generate text, perform machine translation, and summarise academic papers, to name just a few examples.

When it comes to healthcare information, we need to make sure that we advise our users to proceed with caution, particularly given how alluring rapid advances in health and medicine can appear. With AI-generated content, we’re now encountering a whole new generation that has never known a world without the internet and can be less than critical of what they see, read, or even create. It is, of course, fair to point out that it isn’t only Generation Z who can fall into an AI rabbit hole.

After all, there are those who have long regarded anything printed in or broadcast on mainstream media (the BBC, CNN, Fox, and others) as representative of the truth, despite decades, if not centuries, of evidence to the contrary. With the advent of LLMs, what look like plausible answers to thorny questions can now be generated in seconds.

As with all information, it can be naïve, if not downright dangerous, to accept the generated responses without applying any critical thinking to the origin of the data or the agenda behind the piece.

This uncritical approach can often be seen in reactions to social media content and, lately, AI-generated material. This is despite AI having been shown on many occasions to hallucinate, that is, to invent plausible-sounding but false answers to questions and prompts.


Some AI systems have also shown bias in terms of culture, race, gender, or other characteristics. In addition, LLMs often give different and even inconsistent responses to the same question, and image generators are well known for producing an ‘uncanny valley’ effect, along with their notorious trouble rendering human limbs and digits.

‘Uncanny valley’ is a term that expresses our unease when we see images that aren’t quite ‘right’ — they’re different from our usual experience. The term has been applied to some of the AI-generated ‘actors’ and presenters that have started to appear, such as Tilly Norwood. 

Of course, it isn’t only AI that can be termed ‘uncanny’. The ‘uncanny valley’ label occasionally gets used to describe our resident programmable mannequin, Sim Man, who also goes by the alias Mr Albert Edwards, after the name of the main hospital in our Health Service Trust.

Using the mannequin requires special training; he helps clinical staff and students gain an understanding of health conditions and situations in a safe, simulated environment. Recently, Albert found himself contributing to a work experience week for a group of pre-med students from local high schools and colleges.

Although the majority of our library users are clinical staff or students preparing to become clinicians or to advance in their careers, we also have a wider remit. As part of our work to support the Medical Education Department, we were asked to contribute an information and library skills session to the week.


Together with colleagues in Medical Education, we worked alongside the clinical skills manager, who had put together a backstory for Mr Edwards and prepared clues to help diagnose what was wrong with the ‘patient’. The library staff organised a short session on health literacy and misinformation, along with a few quizzes and a practical task of finding a book on the shelves in the library. (Yes, that’s an actual, physical artefact. While many of our users enjoy our online resources, a surprising number of patrons still tell us how much they enjoy using ‘real’ books.)

What was most fascinating—and will inform both the next work experience session and our own microteaching sessions going forward—was observing the approach the students took to the tasks. Although we have thousands of resources online and are part of a consortium that shares resources across our region, our physical library is very small. The students’ method of finding information was simply to look along the shelves until they found the book they were looking for, rather than searching the catalogue and locating it by classmark, as we had demonstrated.

This paled, however, in comparison to the discovery made when the students were asked where they obtain their health information. As information specialists, we strongly encourage our users to refer to evidence-based sources that are regularly updated, such as BMJ Best Practice, PubMed, MEDLINE, and UpToDate—even Google Scholar, as long as the articles and reports have appeared in a trustworthy publication. Even Wikipedia entries, although no substitute for a properly referenced and evidence-based summary, at least usually include genuine links that can lead to better articles, and their editing history and currency can, to some extent, be tracked.

This pre-med work experience group offered us the perfect chance to find out what Generation Z—or ‘Zoomers’—are using to obtain their healthcare information. After an initial quiz on translating medical terms for various conditions and functions into everyday English, I put that very question to the group.

I was expecting responses like TikTok, Google, or even Wikipedia — but what took me aback a little was that one of the most common answers was ‘ChatGPT’.


Generation Z are ‘digital natives’, completely comfortable in an online world. They’ve never known life without the internet—or social media, come to that. They’re accustomed to having information at their fingertips — and information tailored to their needs. Instantly.

We’re well aware of AI in the medical library world — some of our clinicians are already using it to create first drafts of documents and to analyse images in radiology, with potential applications in other disciplines. It’s also used to speed up the dictation of patient notes and letters, among other tasks. Harvard researchers have recently brought an AI on board, Dr. CaBot, which works through medical cases to reach a diagnosis, spelling out its reasoning.

One of the major issues with AI reliability is that a model’s responses depend on the data on which it has been trained; the currency of that data is another concern. Recently, we’ve heard of the ‘walled garden’ approach, where AI-generated responses are drawn from a limited corpus of sources that are evidence-based and verified by humans with expertise in the field, such as accounting, law, or medicine.

For a generation already steeped in the online world, being able to distinguish between accurate and fake information will be extremely important. From an information specialist and educator viewpoint, far from our work being outsourced to AI, it looks as if we’re going to have our work cut out, particularly around teaching critical thinking, for some time yet…

References

BBC. n.d. “A Brief History of Fake News.” BBC Bitesize. Accessed December 2, 2025. https://www.bbc.co.uk/bitesize/articles/zwcgn9q.

Caruso, Catherine. 2025. “An AI System with Detailed Diagnostic Reasoning Makes Its Case.” Harvard Medical School, October 8. https://hms.harvard.edu/news/ai-system-detailed-diagnostic-reasoning-makes-its-case.

Ho, Jerlyn Q. H., Andree Hartanto, Andrew Koh, and Nadyanna M. Majeed. 2025. “Gender Biases within Artificial Intelligence and ChatGPT: Evidence, Sources of Biases and Solutions.” Computers in Human Behavior: Artificial Humans 4 (May): 100145. https://doi.org/10.1016/j.chbah.2025.100145.

Visit www.everylibrary.org to learn more about our work on behalf of libraries. 

#librarymarketers: Enjoy this story? Want to use it for your library newsletter, blog, or social media? This article is published under Creative Commons License Attribution-NonCommercial 4.0 International and is free to edit and use with attribution. Please cite EveryLibrary on medium.com/everylibrary.

This work by EveryLibrary is licensed under CC BY-NC 4.0