In the 1990s, when tools like Google were emerging, many people in academia, and especially librarians, were concerned that the internet would diminish people's research skills. Time demonstrated that "research skills" simply evolved into a new set of competencies. Generative AI (GenAI) is causing a similar shift, and we are living through its beginning. The "skills" needed to use GenAI effectively are going to change rapidly. Their current state, from the most basic concepts to more advanced strategies and tools, is outlined in the tabs below.
What will never change, and what we must keep in mind as effective researchers (not just users of GenAI), is the human element. GenAI cannot:
These are the skills you learn from a liberal-arts education! Learning to use GenAI does not replace the work you need to do to develop these skills! Here is a clear example of what we mean: In the case of Mata v. Avianca, a lawyer, Steven A. Schwartz, used ChatGPT to assist with legal research for a court filing. He then submitted an affidavit that included citations to six nonexistent legal cases that ChatGPT had hallucinated. Big mistake with serious consequences! Schwartz and his co-counsel, Peter LoDuca, were sanctioned for presenting false information to the court. The moral of this story is that you cannot let GenAI do your work for you.
If you use any of the strategies described on the tabs below, remember: if you find valuable information and copy/paste it into your research notes, be sure to make some kind of note to remind yourself later that those words were generated by AI. That way, if you use them in something you turn in, you will know you need to either properly paraphrase them and cite them OR properly quote them and cite them. Just as in traditional research, you always need to credit the sources where you learned new facts or ideas, and you should be careful that copy/pasting doesn't lead to plagiarism.
One more note: The strategies that follow assume that you are using GenAI to search for information, not to perform other tasks like translating text into other languages, writing programming code, or designing creative products such as stories or images. Those tasks call for a related but distinct set of skills.
Since we are in the infancy of Generative AI, we begin by defining the basic differences between GenAI / prompts and search engines / keywords.
Prompts are your input into the AI system to obtain specific results. In other words, a prompt instructs AI on what you want it to do. A prompt can be a question or request. It can consist of a few words, a single sentence or even paragraphs.
A search engine, such as Google, works by finding, organizing, and ranking web sources. Search engines retrieve and rank relevant sources by matching the keywords in your search query to the keywords in a webpage.
A research database, such as JSTOR or Web of Science, also retrieves relevant sources by matching them to the keywords or subject terms you enter in a search query. Research databases, unlike general search engines, only search for matches from within their collection of pre-selected, high-quality sources, such as journal articles, often organized around a specific subject area.
An AI text generator, like ChatGPT, generates sequences of words in response to a prompt. Most generative AI tools predict how words should be sequenced based solely on the data they were trained on. They do not look at new, current data* or distinguish between scholarly and popular content. They also do not "understand" what they are generating or check that it is "true."
*Note: Some tools do integrate current search engine results into their responses.
Traditional academic research can benefit from using generative AI to help concept map a broad topic and identify discipline-specific keywords to aid in further database searches. To use AI to help you concept map and generate useful keywords, follow these steps (also sketched in code after the list):
1. Begin by introducing yourself and your research topic to the AI. Provide a concise description of your role, the subject, and the main focus of your research. For example, "I am a college senior writing a research paper. My topic is..."
2. Pose a question to the AI asking for keywords related to your research. For example, you might ask, "Can you suggest keywords for this topic?" or "What are some relevant keywords for studying ...?"
3. Examine the model's responses and pick out potential keywords. Look for terms that accurately represent your research interests and try to think of synonyms and/or what is missing in the results list, based on your own knowledge of the area.
4. Ask follow-up questions to narrow down or expand on specific aspects of your research. For example, you can ask, "Can you provide more keywords related to ...?" Repeat this process for as many keywords or sub-topics as you'd like.
5. Cross-reference the generated keywords with existing literature in your field (terms in the abstracts / articles you have already found) or a database thesaurus to ensure that your keywords are recognized and accepted within the academic community.
6. Use these keywords to search for scholarly articles in a library database.
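For students who reach GenAI through a programming interface rather than a chat window, the same steps can be scripted. Below is a minimal sketch of steps 1-4, assuming the openai Python library; the model name, the sample topic, and the follow-up question are placeholders, not prescriptions.

```python
# Hypothetical sketch of the keyword-brainstorming steps above.
# Assumes the openai Python library and an OPENAI_API_KEY environment variable;
# "gpt-4o-mini" is a placeholder model name.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"

# Steps 1-2: introduce yourself and your topic, then ask for keywords.
messages = [{"role": "user", "content": (
    "I am a college senior writing a research paper. "
    "My topic is how microplastics affect human health. "
    "Can you suggest keywords for this topic?"
)}]
reply = client.chat.completions.create(model=MODEL, messages=messages)
print(reply.choices[0].message.content)   # Step 3: review and pick out terms

# Step 4: follow up on a promising sub-topic, keeping the earlier exchange.
messages.append({"role": "assistant", "content": reply.choices[0].message.content})
messages.append({"role": "user", "content":
                 "Can you provide more keywords related to inhaled microplastic fibers?"})
reply = client.chat.completions.create(model=MODEL, messages=messages)
print(reply.choices[0].message.content)   # Steps 5-6: vet these terms, then search a database
```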
The next tabs on this page will give you strategies to formulate the best prompts to produce the most valuable concept maps and keywords when using generative AI.
People accustomed to "traditional" web searching using Google often just ask AI a question. This is technically known as making a "zero-shot" prompt. That will certainly cause the AI to produce an answer, but you will get more appropriate responses if your prompt contains certain elements. Below is a sample prompt. Let's look more closely at how it is constructed.
This prompt contains four elements:
Context - "I am an environmental scientist..." This information helps the model understand the broader scenario, background or specific type of information required. In this example, the context allows the AI to understand its response should be academic and scholarly.
Instruction - "...ways humans ingest microplastics" This is the main goal of the prompt. It tells the model what you want it to do - the topic of the content it must generate.
Output indicators - "List 3-5 ways..." This tells the AI specifically how much output to produce and in what format. You could ask it to produce a list, write 500 words, summarize provided input, or perform many other tasks.
Input indicators - "Use only scholarly articles." This tells the AI what elements of its training it should reference and process when generating output. You could provide specific input, like a link to an article, a paragraph, a set of numbers, or even a single word (for translation, for example).
When creating a prompt for GenAI, you should strive to include all four of these elements to get the best response.
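As an illustration, here is the sample prompt reassembled from those four elements as a short Python snippet. Any wording beyond the fragments quoted above is an assumption, not the guide's exact sample.

```python
# The four elements of a well-formed prompt, assembled into one request.
# Wording beyond the fragments quoted above is hypothetical.
context = "I am an environmental scientist researching pollution."  # Context
output_indicator = "List 3-5"                                        # Output indicator
instruction = "ways humans ingest microplastics."                    # Instruction
input_indicator = "Use only scholarly articles."                     # Input indicator

prompt = " ".join([context, output_indicator, instruction, input_indicator])
print(prompt)
# I am an environmental scientist researching pollution. List 3-5 ways
# humans ingest microplastics. Use only scholarly articles.
```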
The following are more examples of well-formed prompts that Centre librarians used (successfully!) for real research assistance (the topics have been altered for privacy). These samples can deepen your understanding of what to include in a prompt and how to use AI effectively (when such usage meets the parameters of your professors' AI policy).
I am a college history student writing my senior research paper. My topic is: How did Cold War propaganda impact the rise of conformity in the US? Help me find primary source materials. Find posters, pamphlets, news media, speeches and government reports, but avoid film and other fictional sources. Look primarily in digital archives but also include websites and news sources related to the Cold War in the US. List at least 20 sources, explain the value of each to this project and include the link where you found each source.
I am an upper level college student majoring in international studies and researching how international policies impact violence in Africa. Help me find examples of international policies for this topic. Look for a variety of policies from the US, European countries, and Asian countries. List at least 20 policies, summarize for each how they support this research and include the link for each. Search primarily international government websites, international NGO websites and IGO websites.
I am a sociology student in college and I am researching the BIPOC experience in small-college STEM classes. I would like to gather data related to the discrimination people may experience in this environment. Help me research methodologies I might use. Find 5-7 peer-reviewed articles employing different methodologies to gather data for this topic. Provide the citation of each in APA format. Summarize each methodology.
Beyond the basic elements of an effective prompt (context, instruction, output and input indicators), experts have already begun to formalize models for creating effective prompts. On this page, you will find four strategies developed by librarians and AI experts. These models form the basis for the strategies we will suggest on the remaining tabs in this section of the guide (Find basic information, Use a variety of terms, Ask the journalistic questions, Explore deeply, Explore logically, and Use Socratic method).
You will understand why we make the recommendations we do if you understand these models:
The CLEAR framework for prompt generation was developed by librarian and professor Leo Lo, at the University of New Mexico. CLEAR stands for:
Concise (also Clear) - Focus on the key words for the AI tool to analyze. Try to omit as many needless words as possible.
Logical - Most AI tools look for relationships between words and concepts, so make sure your query presents concepts accurately and in their natural or logical order. If your question doesn’t make sense to you (or to someone else), it probably won’t make sense to the AI!
Explicit - Be clear in what you want from the AI. Giving the AI tool clear output directions can help the AI produce an answer that is useful to you.
Adaptive - Try a second prompt with keywords or topics suggested by the AI in its answer. If the AI tool has seeding or guidance settings, investigate different settings - do you get better results? If the tool allows you to specify words/concepts to exclude or ignore, can you refine your prompt by excluding concepts?
Reflective - Always take a moment to reflect on the AI’s answer. Does it make intuitive sense to you? Does the answer refer to current research (if important for your query), or does it seem based on older research? Has the AI “hallucinated” or returned inaccurate information? Is the answer complete, or are there perspectives or voices unrepresented in the answer? You may need to craft additional prompts that specifically target gaps in the initial answer.
Read more about the CLEAR framework in Lo, L. S. (2023). The CLEAR path: A framework for enhancing information literacy through prompt engineering. The Journal of Academic Librarianship, 49(4), 102720. https://doi.org/10.1016/j.acalib.2023.102720
The article, Prompt Engineering: The Art of Getting What You Need From Generative AI, summarizes more models, including:
The Rhetorical Approach
The rhetorical approach to prompt engineering was developed by Sébastien Bauer, an academic at the Universitat Autònoma de Barcelona. This method involves describing your main claim and your rhetorical situation. The prompt may include descriptions of:
The C.R.E.A.T.E. Framework
In the C.R.E.A.T.E. approach developed by AI consultant and author Dave Birss, prompts are framed by addressing the AI as “you.” C.R.E.A.T.E. stands for:
The Structured Approach
The structured approach was developed by Lance Cummings, an AI content specialist and an associate professor at the University of North Carolina Wilmington. Cummings describes the formula for this approach as “The Anatomy of a Prompt:”
Sometimes you only need a simple definition, date, name, or other specific, concrete information. Truthfully, the most reliable source for such information is a dictionary or encyclopedia, but many people turn to GenAI instead. When GenAI is used this way, it is common for the prompt to include only the directive and no context or input/output indicators. This is called a shot-based prompt.
When used for definitions, dates, names, and other basic information, shot-based prompts can return incorrect responses, so the information provided should be verified with other, reliable sources. When used for more complex questions, shot-based prompts are less effective and more prone to producing hallucinations because they lack context and output indicators.
There are several types of shot-based prompts. The chart below gives examples of shot-based prompts, starting with the least reliable type and moving on to strategies to make shot-based prompts more reliable:
Zero-shot prompts - provide the model only the core directive of the prompt with no context, input data or output indicators. This type of prompt is most likely to generate a hallucination, because it does not contain the four basic elements of an effective prompt.
One-shot prompts - a better option, where the language model is given one reliable example to guide its understanding of a task, allowing it to generate responses based on that single example, which essentially acts as a template for similar tasks. Since you are instructing the AI to generate its response based on reliable input that you chose, you are more likely to avoid hallucinations.
Few-shot prompts - the model is given multiple task-specific examples before presenting the actual prompt, allowing the model to recognize patterns. Unlike zero-shot and one-shot prompting, this method provides several samples or "shots" that act as references for the expected structure or context of the response. This method may be useful if, for example, you have found several open-access articles and would like to quickly interrogate them to determine whether they are worth further reading or whether they offer contrasting methodologies or conclusions. Note: Use caution about uploading a PDF you obtained from a database. Articles in databases are behind a paywall, and introducing them to an AI may violate copyright.
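To make the three types concrete, here is a short Python sketch of how each style might be written as chat messages. The wording is hypothetical, and the actual API call is omitted so the structure stays visible.

```python
# Hypothetical examples of the three shot-based prompt styles, written as
# chat-style message lists. Only the structure matters here; no API call is made.

# Zero-shot: only the core directive, with no context, examples, or indicators.
zero_shot = [
    {"role": "user", "content": "Define 'microplastics'."},
]

# One-shot: a single reliable example acts as a template for the task.
one_shot = [
    {"role": "user", "content": (
        "Here is the style of definition I need: "
        "'Nanoplastics: plastic particles smaller than one micrometre.' "
        "Now define 'microplastics' in the same style."
    )},
]

# Few-shot: several examples ("shots") show the expected pattern before the real task.
few_shot = [
    {"role": "user", "content": (
        "Label each source as scholarly or popular.\n"
        "Source: a peer-reviewed article in a marine-science journal -> scholarly\n"
        "Source: a listicle on a celebrity news site -> popular\n"
        "Source: [paste a description of your own open-access source here] -> ?"
    )},
]
```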
Just as a persistent researcher in traditional research conducts multiple searches using a variety of synonyms for their keywords, prompt reframing in GenAI means re-asking a question using synonyms or new but related keywords. In both traditional research and work with GenAI, this is valuable because it encourages the tool to respond in different ways, potentially providing more information or additional points of view.
In the example below, the original prompt requesting "3-5 impacts on human health" is repeated requesting "3-5 human diseases." Human health and human diseases are obviously closely related terms, yet they cause the AI to produce different results: both prompts elicit a response related to inflammation, but the second produces additional research possibilities not included in the first.
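A minimal sketch of prompt reframing, assuming the openai Python library (the model name is a placeholder): the same request is sent twice, with only the key phrase swapped.

```python
# Prompt reframing: re-ask the same question with a closely related phrase.
# Assumes the openai library and an OPENAI_API_KEY; "gpt-4o-mini" is a placeholder.
from openai import OpenAI

client = OpenAI()
template = ("I am an environmental scientist researching microplastics. "
            "List 3-5 {focus}. Use only scholarly articles.")

for focus in ("impacts of microplastics on human health",
              "human diseases linked to microplastics"):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": template.format(focus=focus)}],
    )
    print(f"--- {focus} ---")
    print(reply.choices[0].message.content)
```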
Asking the journalistic questions to create a concept map has long been used in traditional research during topic exploration. The GenAI equivalent is known as context expansion prompting. This strategy requires you to enrich the information given to the AI to enhance its understanding of the content you want it to generate. A good way to write context expansion prompts is through the "5 Ws and How" method: asking Who, What, Where, When, Why, and How questions related to the subject matter.
To prompt the model to begin creating a concept map, a sample prompt might be:
I am a college student researching microplastics. Help me create a concept map of how microplastics impact humans. Consider the following questions: Who do microplastics impact the most? List 2-3 groups of people. What impacts do microplastics have on these people? List 2-3 impacts. Where are microplastics the biggest problem? List 2-3 places. When are microplastics most impactful? List 2-3 times. Why are microplastics significant? List 2-3 reasons. How do microplastics impact people? List 2-3 ways.
Here is a sample output generated by that prompt:
The answers provided are not complete, but they suggest ways a researcher might begin narrowing down this topic. The response suggests directions for deeper research and follow-up inquiries.
Iterative prompting means building upon the AI's previous responses by asking follow-up questions. By doing this, you can dive deeper into a topic, extract additional insights, or clarify ambiguities in the initial output. For example, after asking the AI "who, what, where, when, why and how" questions, you might follow up with additional questions to learn more about interesting issues that arose.
Suppose you asked the model "How do humans ingest microplastics? Where do humans ingest microplastics?" and you were surprised to learn that humans not only swallow microplastics in water, which you knew, but also inhale microplastic fibers, a risk that is especially high for clothing-industry workers. Your follow-up inquiries might look like the hypothetical examples sketched below:
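A minimal sketch, assuming the openai Python library; the follow-up questions are hypothetical examples of where this line of inquiry might go.

```python
# Iterative prompting: each follow-up keeps the earlier exchange in the
# conversation so the model builds on what it already said.
# Assumes the openai library; the model name and questions are placeholders.
from openai import OpenAI

client = OpenAI()
questions = [
    "How do humans ingest microplastics? Where do humans ingest microplastics?",
    "You mentioned inhaled microplastic fibers. Which occupations face the highest exposure?",
    "What protective measures for clothing-industry workers have been studied?",
]

messages = []
for question in questions:
    messages.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print(f"Q: {question}\nA: {answer}\n")
```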
What are the benefits of using AI in this manner? By performing these fast, iterative prompts, a beginning researcher has:
When approaching a complex problem, humans normally break it down into smaller, more manageable pieces. Then we chain these sub-tasks together, using the output of one sub-task as the input for the next sub-task.
When working with AI, chain-of-thought prompting asks the model to describe the intermediate steps it uses to reason its way to a final answer. This is useful for complex tasks that require detailed explanation, planning and reasoning, such as math problems and logic puzzles, where explaining the thought process is essential to fully understanding the solution.
When using AI to assist in research, you will enter all the typical elements of your prompt and conclude with a statement like, "Let's work through this in steps" or "Describe your reasoning step-by-step."
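For example, a chain-of-thought request might look like the following sketch, assuming the openai Python library; the topic and model name are placeholders.

```python
# Chain-of-thought prompting: a normal, well-formed prompt that ends by asking
# the model to show its intermediate reasoning. Assumes the openai library;
# the topic and "gpt-4o-mini" are placeholders.
from openai import OpenAI

client = OpenAI()
prompt = (
    "I am a college student designing a small survey for a research methods class. "
    "Recommend whether an online questionnaire or in-person interviews would suit a "
    "campus of 1,400 students better, and explain the trade-offs. "
    "Let's work through this in steps; describe your reasoning step-by-step."
)
reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(reply.choices[0].message.content)
```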
Chain-of-thought prompting often inspires additional questions and research, resulting in the need for iterative prompting. The two techniques are similar; the fundamental difference is that chain-of-thought prompting presents the reasoning process within a single detailed, self-contained response, while iterative prompting takes a more dynamic approach, with multiple rounds of interaction that let users develop an idea more fully over time.
The Socratic method is a form of inquiry in which a teacher or facilitator asks a series of open-ended questions to guide a student or participant toward discovering their own understanding of a topic, often by challenging assumptions and exploring underlying beliefs rather than providing direct answers. It is named after the ancient Greek philosopher Socrates, who famously used this approach in his dialogues to stimulate critical thinking.
This can be a great way to help you start thinking about a new topic. To use GenAI as your personal Socrates, include a request along the lines of the sketch below in your prompt:
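A minimal sketch, assuming the openai Python library; the wording of the Socratic request, the topic, and the model name are hypothetical, not the guide's exact phrasing.

```python
# Using GenAI as a Socratic questioner: ask for open-ended questions instead of answers.
# Assumes the openai library; the request wording, topic, and model name are placeholders.
from openai import OpenAI

client = OpenAI()
request = (
    "Act as Socrates. I want to explore the topic of mandatory community service "
    "for college students. Do not give me direct answers. Instead, ask me one "
    "open-ended question at a time that challenges my assumptions, and wait for "
    "my reply before asking the next question."
)
reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": request}],
)
print(reply.choices[0].message.content)
```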
This approach works well for topics where the answers are grounded in facts, and it also works well for opinion-based topics.
As you develop your skills using generative AI, think critically! Here is a worksheet with some activities you might try:
Here are some additional ideas to help you think critically:
Try: Use the strategies described on the "Begin exploration using the journalistic questions" and "Explore deeply" tabs in both the GenAI tool of your choice and in Google or Google Scholar. In GenAI, use a well-formed prompt. In Google, ask the journalistic questions and follow-up questions separately.
Compare the sources and their value in each type of exploration.
Reflect: What are the differences between the quality of the sources obtained in each search type? What are the strengths of each type of search? What are the weaknesses?
Try: Use the strategies described on the "Begin exploration using the journalistic questions" and "Explore deeply" tabs in at least 2-3 different GenAI tools (e.g., ChatGPT, Gemini, and Perplexity). Provide the same prompt in each tool.
Compare the sources each tool provides and their value.
Reflect: What are the differences between the quality of the sources obtained in each tool? What are the strengths of each tool? What are the weaknesses?