

Artificial Intelligence and the research process

Bias in AI Design

Bias can be introduced to AI models and technologies at multiple points, including during their "commission, design, development, and deployment" (NIST, 2022, v).

Bias can be introduced unintentionally and can come from various sources, including:

  • Bias in statistical and computational processes

    • Arises from the datasets and algorithms used in AI development

  • Human bias

    • Introduced by the people who develop, train, and test AI, through choices such as which data is selected for training and how that data is weighted

  • Systemic bias

    • Operates at multiple levels, including the institutional, group, and individual levels, entering AI through datasets, decision-making, planning, practices, and procedures

    • Common examples of systemic biases that affect AI include racism, sexism, and ableism

Figure: An iceberg labeled with technical biases above the water's surface and with human and systemic biases underwater.

Hanacek, N. (n.d.). AI bias iceberg [image]. National Institute of Standards and Technology. https://www.nist.gov/image/ai-bias-iceberg

Increased diversity in the teams that develop, train, test, and deploy AI models can help mitigate these biases.

Biases also shape the reasoning and decision-making behind AI models: why they are developed, what purpose they serve, how they are applied, and how they are deployed.

  • The data used to train AI can introduce bias.

    • Training data is frequently collected from the internet without any assessment of its accuracy, quality, representativeness, or neutrality.

    • As a result, the inherent conscious and unconscious biases, systemic racism, stereotypes, and misinformation present in that data are reproduced and perpetuated by AI models.

    • A lack of diversity in training datasets also contributes to the under- and over-representation of groups (a minimal code sketch illustrating this effect follows this list).

      • For example: facial recognition algorithms trained on a dataset that over-represents white and male-presenting faces are likely to make more errors when analysing the faces of people of colour and female-presenting faces.

      • Another example: image banks used to train image-generation AI can reproduce stereotypes linking race and gender to professions, e.g. prompts for images of entrepreneurs returning exclusively white men, while prompts for images of nurses return only women.
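To make the effect of under-representation concrete, here is a minimal sketch, not taken from the guide or from NIST, that trains a toy classifier on data where one synthetic group supplies 90% of the training examples. The group names, feature model, sample sizes, and the use of Python with NumPy and scikit-learn are all illustrative assumptions; the point is only that a model fit mostly to the over-represented group tends to show a higher error rate on the under-represented one.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Toy 2-D features and labels for one synthetic group; the labelling rule
    # depends on the group's feature distribution (controlled by `shift`).
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Hypothetical over-represented "group A" (900 training samples) versus
# under-represented "group B" (100 training samples).
X_a, y_a = make_group(900, shift=0.0)
X_b, y_b = make_group(100, shift=2.0)

model = LogisticRegression().fit(np.vstack([X_a, X_b]),
                                 np.concatenate([y_a, y_b]))

# Evaluate on fresh, equal-sized test samples from each group: accuracy is
# typically noticeably lower for the under-represented group B.
for name, shift in [("group A", 0.0), ("group B", 2.0)]:
    X_test, y_test = make_group(1000, shift)
    print(f"{name}: accuracy = {model.score(X_test, y_test):.2f}")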

Related Readings

Hanacek, N. (n.d.). AI bias iceberg [image]. National Institute of Standards and Technology. https://www.nist.gov/image/ai-bias-iceberg

Schwartz, R., Vassilev, A., Greene, K., Perine, L., Burt, A., & Hall, P. (2022). Towards a standard for identifying and managing bias in artificial intelligence (NIST Special Publication 1270), Figure 5: How biases contribute to harms, p. 27. National Institute of Standards and Technology. https://doi.org/10.6028/NIST.SP.1270