Frameworks such as RADAR, ACORN, SIFT, or CRAAP can help you reflect on your use of AI and critically evaluate its outputs. The CRAAP model, originally designed by librarian Sarah Blakeslee at the Meriam Library, California State University, Chico, provides a list of questions to evaluate the information that you find. We have adapted the CRAAP model to guide your evaluation of AI outputs and tools. To print off or work with your own version of the CRAAP model for AI, please download our PDF here: The critically evaluating AI CRAAP model.
Currency
- Outputs may not cite the most recent sources if the tool was trained on an older dataset
- Even if its dataset is current, the tool could generate references that are out of date
Relevance
- Is AI the right tool for this specific task?
- Results are not comprehensive - these tools exclude a large amount of scholarly research behind paywalls, as they can only access citation networks, abstract data and open access articles
- GenAI tools cannot create the comprehensive, replicable search strategy required for a systematic review of the literature, and a large number of relevant papers may be missed
Authority
- It might be difficult to evaluate who the author is, what expertise or authority they have, or how prominent they are in their field
- Results are less likely to present multiple perspectives from around the world as the training data may depend on Western-centric and English-language sources
Accuracy
- The accuracy of generated summaries and references can vary
- Potential for hallucinations, for example fabricated references
- Genuine citations might not align with the arguments in the generated summaries
- Information that these tools generate should always be verified
Purpose
- Using AI has ethical implications, such as maintaining your academic integrity, and raises concerns over equity, data privacy and environmental impact
- The company providing the GenAI tool might not be transparent about its commercial interests, its policies on training data or how its algorithms work, prompting concerns over bias
- Bias in semantic searching - because this technique is not transparent, we lose control over the information that is generated. Moving from exploratory searching with our own search strategies to searches shaped by preselected cues from AI tools might lead to unconscious bias (cherry-picking). See citation network tools for more information.