Generative artificial intelligence (AI) is a relatively new technology that is developing quickly. Like the internet itself, AI tools such as ChatGPT and Gemini are neither good nor bad when it comes to finding and using information. Instead, they represent a new way in which we can interact with information.
This guide is intended to help you critically engage with generative AI tools and focuses on how they intersect with information literacy.
Students: before using ChatGPT or other content produced by generative artificial intelligence (AI) for any course assignment, please first confirm with your professor that doing so is acceptable.
ChatGPT was trained on a large body of text, which allows it to generate text in response to a prompt. Some partial lists of the training dataset exist, and ChatGPT will provide a partial list when queried. However, the entire body of text ChatGPT was trained on is unknown.
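To make the prompt-and-response loop concrete, here is a minimal sketch of sending a prompt to a ChatGPT model programmatically. It assumes the openai Python package (v1 client) and an API key in the OPENAI_API_KEY environment variable; the model name and prompt are illustrative only, not a recommendation.

```python
# Minimal sketch: send a prompt to a ChatGPT model and print its reply.
# Assumes `pip install openai` (v1+ client) and an API key stored in the
# OPENAI_API_KEY environment variable. Model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Summarize lateral reading in two sentences."}
    ],
)

# The reply is generated text predicted from the prompt; nothing in the
# response object points back to specific source documents.
print(response.choices[0].message.content)
```

Notice that the response is plain generated text: there is no built-in field identifying where the information came from, which leads directly to the referencing problems discussed next.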
When ChatGPT answers a question, it does not automatically provide a reference for where the information came from. This is because it generates language predictively from patterns across its training data, so a response usually does not come from any single source. As a result, you typically cannot trace a response back to a parent source or verify where the information originated.
Yes, when prompted, ChatGPT can provide references. However, these references may not be where the information actually came from and, more importantly, may not refer to real sources. ChatGPT can easily hallucinate citations that sound plausible. This can be an issue with other generative AI tools, like Bard, as well.
For example, ChatGPT provided this citation and summary in response to a prompt about needing an academic journal article on the topic of amphetamine use and social determinants of health:
"Here's an example of a peer-reviewed journal article on amphetamine abuse and its intersections with social determinants of health:
Stoové, M. A., Dietze, P. M., Jolley, D., Fry, C. L., Brewer, N., & Hellard, M. E. (2008). Increasing access to sterile injecting equipment and syringe exchange in a rural area with a high prevalence of bloodborne viruses. Australian Journal of Rural Health, 16(2), 99-106.
This article describes a study conducted in a rural area of Australia with a high prevalence of bloodborne viruses, including those associated with amphetamine use..." OpenAI. (2023, February 17). [ChatGPT response to a prompt about locating an academic journal article]. https://chat.openai.com/
Although the summary sounds plausible and the citation looks realistic, this article does not exist. The journal exists, as does the lead author. However, Stoové has not published in this journal.
As of 1 March 2023, the cutoff date for the data ChatGPT was trained on is September 2021. This means the tool has limited knowledge of events and information more recent than that date. ChatGPT is updated regularly, so this may change. However, it is important to realize that the information ChatGPT provides lags behind the present. This can affect credibility, especially for topics where the age of your information matters.
Here is ChatGPT's response to this topic on 1 March 2023:
"My knowledge cutoff date for training documents is 2021-09. This means that I was trained on a large corpus of text up to that point in time, and any information or events that occurred after that date may not be included in my knowledge base. However, I am constantly being updated and fine-tuned by my creators at OpenAI to improve my accuracy and relevance." OpenAI. (2023, March 1). [ChatGPT response to a prompt about its cutoff date for training documents]. https://chat.openai.com/
Update
As of 24 March 2023, OpenAI has begun implementing plugins for ChatGPT that will "help [it] access up-to-date information, run computations, or use third-party services." Access to current information is not yet part of the commonly used ChatGPT research preview.
Bard and currency
Google Bard does not have a cutoff date for the information it was trained on. However, this does not mean Bard's responses will be wholly accurate.
Evaluating all information for credibility is highly recommended, regardless of where you find it. This is especially true for generative AI responses, given the issues described above. There are many tools, checklists, and strategies to help you evaluate your sources, but none of them offers a black-and-white answer to whether a source is credible and whether you should use it.
Here are two strategies for evaluating information provided by generative AI tools:
Don't take what ChatGPT tells you at face value. Check whether other reliable sources contain the same information and can confirm what ChatGPT says. This could be as simple as searching for a Wikipedia entry on the topic or doing a Google search to see whether a person ChatGPT mentions exists. Consulting multiple sources is the core of lateral reading and helps you avoid the bias of relying on a single source. (A short search sketch follows the video link below.)
Watch Crash Course's "Check Yourself with Lateral Reading" video (14 min) to learn more.
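To automate the quick existence check described in the first strategy, here is a sketch that asks the public MediaWiki search API whether Wikipedia has entries matching a topic or name. It requires the requests package, and the example search term is an illustrative assumption.

```python
# Quick lateral-reading helper: does Wikipedia have an entry on this topic?
# Uses the public MediaWiki search API; requires `pip install requests`.
import requests

def wikipedia_hits(term: str) -> list[str]:
    """Return titles of the top Wikipedia search results for `term`."""
    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "list": "search",
            "srsearch": term,
            "format": "json",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return [hit["title"] for hit in resp.json()["query"]["search"]]

# Example: check whether a person or topic ChatGPT mentioned shows up at all.
print(wikipedia_hits("lateral reading"))
```

An empty result list doesn't prove something is fake, and a hit doesn't prove it is real; this is only a fast first pass before you read the sources themselves.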
If a generative AI tool provides a reference, first confirm that the source exists. Try copying the citation into a search tool like Google Scholar or the Library's OneSearch. Do a Google search for the lead author. Check for the publication in the Library's publications finder.
Second, if the source is real, check that it contains what ChatGPT says it does. Read the source or its abstract. (A scripted version of the existence check is sketched below.)
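The existence check can also be started programmatically. The sketch below sends the text of a citation to the free Crossref REST API and prints the closest-matching real records; if none of them resemble the citation, treat it with suspicion. The requests dependency is an assumption, and the query string paraphrases the hallucinated Stoové citation shown earlier.

```python
# Citation sanity check: ask Crossref for real records matching a citation.
# Uses the free Crossref REST API; requires `pip install requests`.
import requests

def closest_matches(citation: str, rows: int = 3) -> list[dict]:
    """Return title and DOI for Crossref records closest to `citation`."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [
        {
            "title": (item.get("title") or ["(untitled)"])[0],
            "doi": item.get("DOI"),
        }
        for item in items
    ]

# Compare the returned records against the original citation; if none of
# them actually match it, that is a red flag.
for match in closest_matches(
    "Stoové 2008 Increasing access to sterile injecting equipment "
    "Australian Journal of Rural Health"
):
    print(match)
```

Crossref only covers works with registered DOIs, so a miss here still warrants checking Google Scholar or the Library's OneSearch before concluding a citation is fake.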
Let's learn more about ChatGPT and generative AI together! The SLCC librarians are not experts on AI, but we are happy to help you explore how it intersects with information literacy.
Please contact one of the SLCC liaison librarians to continue the conversation.
Sample attribution: "AI, ChatGPT, and the Library" LibGuide by Amy Scheelke for Salt Lake Community College is licensed CC BY-NC 4.0, except where otherwise noted.