Decoding the Black Box: Tracing the Inscription of Values in Large Language Models

  • Author(s) / Creator(s)
    Glaser, Vern
  • SSHRC IG awarded 2024: Generative artificial intelligence technologies, particularly large language models like ChatGPT, promise to revolutionize how we work, learn, and even govern, and are being positioned as central actors in a rapidly evolving digital society. However, the deployment of such models can lead to problematic outcomes. For instance, biases in the selection and processing of training data can perpetuate discriminatory practices; so-called algorithmic hallucinations (erroneous responses that are not supported by data) have resulted in the promulgation of false information; and the opacity of technology practices associated with these large language models has led to a plethora of concerns about the privacy and propriety of individual and organizational data. These problematic outcomes reveal a misalignment between what generative AI technologies can do and our human values. We posit that there is an urgent need for scholars to develop an understanding of how values become embedded in generative AI technologies, leading to our core research question: How do organizations inscribe values into generative AI technologies? To unpack these complex issues, we zoom in on three inter-related questions: How do organizations articulate the values they aim to embed in generative AI technologies? How are training data selected and processed? How are algorithmic models "tuned" to reflect human values? Our research team, guided by scholars with expertise in technology and culture, will employ qualitative research methods to investigate these pressing issues. Importantly, we plan to use new and existing connections with key stakeholders in Canada's artificial intelligence community (e.g., the Alberta Machine Intelligence Institute, the Vector Institute for Artificial Intelligence, and the Responsible Artificial Intelligence Institute) who are committed to participating in our investigation, thereby ensuring its feasibility and impact.
We plan to conduct two focused inquiries. First, we will leverage our network to develop and synthesize a field-level study that traces the mechanisms whereby industry stakeholders inscribe values into data and algorithms. This will allow us to identify the state of current practice as well as understand "best" practices. Second, we are currently finalizing a partnership with an organization in the financial services sector to conduct an ethnographic study that will closely examine how it inscribes values into its AI technologies.

  • Date created
    2023-09-29
  • Subjects / Keywords
  • Type of Item
    Research Material
  • DOI
    https://doi.org/10.7939/r3-k390-ap06
  • License
    © Glaser, Vern. All rights reserved other than by permission. This document is embargoed to users without a UAlberta CCID until 2030.