
Libraries have historically been places, physical or digital, where people go to seek knowledge, learn skills, and thoughtfully engage with others’ ideas: where people go to grow their “human intelligence.” The vast accumulation of information about people’s lives and behaviors has become the foundation for building a different kind of understanding of the world and the human condition. “Artificial intelligence,” or AI, involves processing and analyzing billions of data points to identify patterns, make predictions, and often drive decisions that affect people’s lives. As Stephen Straus, Co-founder and Managing Director of KungFu.AI, noted during the IDEA Institute invited experts dinner, any technology can be used to do harm or to accomplish good. As the capabilities of AI present opportunities for libraries to enhance their services, librarians must stand poised to lead intentional discourse around the planning and implementation of AI-enabled processes, with the goal of maximum benefit and minimal harm for all who may be affected.

“Human in the loop” is a phrase often heard in discussions of AI processes. At its most basic level, it refers to the need for a person to check AI outputs or decisions to ensure accuracy, “fairness,” and equity. Min Kyung Lee, Assistant Professor in the School of Information at The University of Texas at Austin, extends the notion that humans are essential to AI-powered systems by engaging people in all stages of AI project development. In doing so, she moves people beyond merely being “in the loop” to an empowered position of centrality. Her human-centered design projects, such as the development of an AI process to improve the equitable distribution of donated food and the WeBuildAI Framework, can serve as models for how librarians can improve library users’ and other community stakeholders’ understanding of AI and involve them in decision-making about AI design and implementation.

As librarians plan AI-enabled work, we must be mindful of the humans who may be obscured when we break down, disassociate, label, and annotate data, processes often involved in digital humanities and other collections-as-data projects. As Claudia Engel and Nicole Coleman from Stanford University Libraries reminded the IDEA Institute Fellows, we should ask questions such as: Who owns the data? How and where was the data collected? Who annotated the data? Is the data complete? Are there gaps or inconsistencies in the data? Who is represented in, and who is left out of, the data? What happens to the integrity of the data when it undergoes multiple levels of interpretation? The answers to all of these questions will shape the stories the data can tell and the authenticity of those stories.

Human intelligence, human in the loop, human-centered design, and remembering the humans behind the data: as AI capabilities increase, new discoveries, questions, and tensions are likely to emerge. As librarians, we should strive to build our knowledge so that we can both apply the technical skills and raise the ethical concerns. In doing so, we can mindfully educate and empower our users and communities toward the implementation of AI for the “human good.”

Photo by ThisisEngineering RAEng on Unsplash

Post Author: Kimberly S. Grotewold