AI, ChatGPT, and the Library

This guide provides a starting place for SLCC faculty, staff, and students to learn how generative AI intersects with information literacy and how to use it responsibly.

Welcome to the SLCC Libraries guide on AI and Information Literacy!

Generative artificial intelligence (AI) is a relatively new technology that is developing quickly. Like the internet itself, generative AI tools such as ChatGPT, Gemini, and Copilot are neither good nor bad when it comes to finding and using information. Instead, they represent a new way in which we can interact with information.

This guide is intended to help you critically engage with generative AI tools, with a focus on how they intersect with information literacy.

Students: before using generative AI tools or their output for any course assignment, please confirm with your professor that doing so is acceptable.

Generative AI and Information

Where does the information come from?

Large language models (LLMs), like ChatGPT, are trained on a large body of text, which allows them to generate text in response to a prompt. In general, the complete training datasets for the major LLMs have not been disclosed, although some partial lists exist.

When a generative AI tool answers a question, it typically will not provide a reference for where the information came from. This is because it is generating predictive language drawn from a wide variety of places, so the information usually doesn't come from a single source. As a result, you typically cannot trace a response back to a single parent source.

Many generative AI tools have web searching capabilities. This allows the tools to point to websites that ought to contain information similar to what they provided. For instance, as of early August 2025, Perplexity provides links to sources before providing output and has a "Sources" tab. Similarly, Google's AI Overview includes a list of websites that it connects to its output. Regardless, assuming that the AI's output is directly sourced from those websites is not advisable.

Can generative AI tools provide references?

Yes, when prompted, many generative AI tools can provide references, and some do so without being asked (like Perplexity, as mentioned above). However, these references may not be where the information actually came from and, more importantly, may not be real sources. Despite sounding plausible, generative AI tools can hallucinate citations.

For example, in 2023 ChatGPT provided this citation in response to a prompt asking for an example of an article on a particular topic:

"Here's an example of a peer-reviewed journal article on amphetamine abuse and its intersections with social determinants of health:
Stoové, M. A., Dietze, P. M., Jolley, D., Fry, C. L., Brewer, N., & Hellard, M. E. (2008). Increasing access to sterile injecting equipment and syringe exchange in a rural area with a high prevalence of bloodborne viruses. Australian Journal of Rural Health, 16(2), 99-106. 
This article describes a study conducted in a rural area of Australia with a high prevalence of bloodborne viruses, including those associated with amphetamine use..." OpenAI. (2023, February 17). [ChatGPT response to a prompt about locating an academic journal article]. https://chat.openai.com/

Although the summary sounds plausible and the citation looks realistic, this article does not exist. The journal exists, as does the lead author. However, Stoové has not published in this journal.

Note that improvements to LLMs and upgrades to generative AI tools may make hallucinations less common. Regardless, you should verify that any cited sources, first, actually exist and, second, are accurate and credible.

Checking Generative AI Output for Credibility

Evaluating all information for credibility is highly recommended, regardless of where you find it. This is especially true for generative AI responses, given the issues described above. There are many different tools, checklists, and strategies to help you evaluate your sources, but none of them offer a black-and-white answer as to whether a source is credible and whether you should use it.

Here are two strategies for evaluating information provided by generative AI tools:

1. Lateral Reading

Don't take what a generative AI tool tells you at face value. Look to see whether other reliable sources contain the same information and can confirm what the tool says. This could be as simple as searching for a Wikipedia entry on the topic or doing a Google search to see if a person ChatGPT mentions exists. Consulting multiple sources in this way is the heart of lateral reading and can help you avoid the bias of any single source.

Watch Crash Course's "Check Yourself with Lateral Reading" video (14 min) to learn more.

2. Verify Citations and Look Deeper

First, if a generative AI tool provides a reference, confirm that the source exists. Try copying the citation into a search tool like Google Scholar or the Library's search tool Summon (the search box on the Library's homepage). You can also do a Google search for the lead author.

Second, if the source is real, check that it contains what the generative AI output says it does. Read the source or its abstract. Keep in mind that the sources provided by generative AI tools may not be the best match for your topic or information need, so consider looking for additional sources to expand your understanding of the topic.

Have Questions?

Let's learn more about generative AI together! The SLCC librarians are not experts on AI, but we are happy to help you explore how it intersects with information literacy. 

Please contact one of the SLCC liaison librarians to continue the conversation.

Sample attribution: AI, ChatGPT, and the Library LibGuide by Amy Scheelke for Salt Lake Community College is licensed CC BY-NC 4.0, except where otherwise noted.