
AI and the Library

Bias


Concerns over semantic searching

Because semantic searching is not transparent, we lose control over how the information we see is generated. Moving from exploratory searching with our own search strategies to searches shaped by cues preselected by AI tools might lead to unconscious bias (cherry-picking). See the citation network tools page for more information.


Harmful data

Data is not neutral. The way a tool's training data was collected and labelled, and the way the model was then trained on it, could introduce bias. Because this data originates from human-generated content on the internet, it can include and replicate misinformation, disinformation, biased information and negative stereotypes.


Lack of multiple perspectives

The training data may rely heavily on Western-centric, English-language sources, so it is less likely to present multiple perspectives from around the world.


Originality

Large Language Models (LLMs) generate responses based on patterns in large datasets, which means their outputs tend to reflect mainstream, commonly accepted ideas. While they can help spark initial ideas, originality comes from engaging with diverse perspectives and conducting your own research. To be more creative and to think ‘outside the box’, read from a wide range of sources.


Transparency issues

The company providing the GenAI tool might not be transparent about its commercial or political interests, the sources of its training data, its policies on how the training data is fine-tuned, or the inner workings of its algorithms, which raises concerns about potential bias and accountability.


Reflection

Consider the following questions to help you reflect on bias and your use of AI tools:

  • How might you use effective prompting to mitigate bias in the AI output and in yourself?
  • Why is there a risk that AI tools produce results that are biased?
  • Where might bias in AI come from?
  • What are the risks of using AI outputs that contain bias?
  • What effect does the AI output have on your own bias?
  • Who does AI represent?


References

Awati, R. and Yasar, K. (2024) What is black box AI? Available at: https://www.techtarget.com/whatis/definition/black-box-AI (Accessed: 30 April 2025).

Noble, S. (2018) Algorithms of oppression: how search engines reinforce racism. New York: New York University Press. [link for UoB access]

TED (2017) Joy Buolamwini: how I'm fighting bias in algorithms. 29 March. Available at: https://www.youtube.com/watch?v=UG_X_7g63rY (Accessed: 30 April 2025).

Williams, A., Miceli, M. and Gebru, T. (2022) The exploited labor behind artificial intelligence. Available at: https://www.noemamag.com/the-exploited-labor-behind-artificial-intelligence (Accessed: 30 April 2025).