
Use Generative AI Effectively: Evaluate information found using AI

How to evaluate text generated by AI

Lateral reading is a strategy for fact-checking and evaluating sources, including generative AI (GenAI) responses. Lateral reading involves leaving the AI tool to consult other reliable sources. Instead of relying on an AI tool as your only source for information, try opening up new tabs to verify facts, identify gaps, and find alternative perspectives.

Below are steps you can follow to fact-check information from AI tools.

  1. Break down the information in the AI's response. Identify specific facts or claims made by the AI that you can search online and compare with other sources.
  2. Open a new tab (or multiple!) to search for more information to support the AI's response.
    • If the AI provided links to sources, follow those links to make sure they are real and to verify the AI represented those sources fully and accurately.
    • Think about agencies or organizations that should be experts on your topic (for example, if you are writing about microplastic pollution, an agency that investigates pollution, like the EPA, would be useful). Go to their websites and verify any facts or claims the AI made.
    • Use a library database to find reliable sources on your topic and verify any facts or claims the AI made.
  3. Reflect on the following questions:
    • Could the content be missing any important information or points of view?
    • Can you answer the basic journalistic questions about your topic using verifiable sources? Do you need to dive deeper on any aspects of the topic?

You can ask a generative AI tool to cite its sources, but these tools are known to create very convincing fake citations. They can even produce citations that include the names of real researchers who study the topic you've asked about, even though the article named in the citation might not exist or might never have appeared in the journal cited. Look at this transcript from an actual reference chat between a student and a Centre librarian:

[Image: transcript of a reference chat between a student and a Centre librarian]

Because GenAI can hallucinate in this way, you’ll need to search to confirm that any article it gives you actually exists. To do so:

  • Search the title of a book in the library catalog or an article in a library database or Google Scholar.
  • Verify not only the title and author but also the journal title, volume, dates, DOI, and other details (for one way to check a DOI quickly, see the sketch after this list).
  • Verify that the content the AI attributed to the source is actually there and is accurately represented.
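If you're comfortable with a little code, one quick way to test whether a DOI from an AI-generated citation is real is to look it up in Crossref's public API. The short Python sketch below is only an illustration (the DOI shown is just an example; substitute the one from the citation you're checking): it prints the title, journal, and authors registered for that DOI so you can compare them with what the AI claimed.

import requests  # third-party package: pip install requests

def check_doi(doi: str) -> None:
    """Look up a DOI in the public Crossref API and print the registered metadata."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code == 404:
        print(f"No Crossref record for DOI {doi} -- the citation may be fabricated.")
        return
    resp.raise_for_status()
    work = resp.json()["message"]
    title = (work.get("title") or ["<no title>"])[0]
    journal = (work.get("container-title") or ["<no journal>"])[0]
    authors = ", ".join(
        f"{a.get('given', '')} {a.get('family', '')}".strip()
        for a in work.get("author", [])
    )
    print("Registered title:  ", title)
    print("Registered journal:", journal)
    print("Registered authors:", authors or "<none listed>")

# Compare the printed metadata with the citation the AI tool gave you.
check_doi("10.1038/s41586-021-03819-2")  # example DOI; replace with the DOI from the AI's citation

If the lookup returns no record, or the registered details don't match the citation, treat the source as unverified until you can find it in the library catalog or a database.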

If you need help verifying a source generated by AI, contact a librarian!

Because many generative AI tools were trained partly on social media content, it is easy to understand how they might amplify certain biases and misinformation.

Even when a generative AI tool draws on more scholarly sources, it was trained mostly on Western knowledge. Even tools that can search for additional sources, such as Perplexity, still search mostly Western sources, so their answers are heavily biased toward Western perspectives. Here is a simple example: if you ask a generative AI tool the same question about a Western artist and a non-Western artist, the response for the Western artist will typically be significantly more complete and cite more sources.

[Image: the sources Perplexity AI used to answer a question about Michelangelo (a famous European painter)]

[Image: the sources Perplexity AI used to answer the same question about Sesshu Toyo (a famous Japanese painter)]

As the screenshots show, the AI was able to find significantly more sources for the Western artist.

When using generative AI, be aware of its Western bias and actively seek diverse viewpoints.