Thursday, November 28, 2024

The Intersection of Artificial Intelligence and Structural Racism: Understanding the Connection

Understanding Structural Racism in AI Systems

Presented by Craig Watkins, Visiting Professor at MIT and Professor at the University of Texas at Austin


Introduction

Craig Watkins discusses the intersection of artificial intelligence (AI) and structural racism, emphasizing the critical need to address systemic inequalities in the development and deployment of AI technologies. He highlights initiatives at MIT and the University of Texas at Austin aimed at fostering interdisciplinary approaches to create fair and equitable AI systems that have real-world positive impacts.

Key Points

The Impact of AI on Marginalized Communities

  • Instances where facial recognition software has falsely identified Black men, leading to wrongful arrests.
  • These cases underscore the potential of AI to replicate systemic forms of inequality if not carefully designed and monitored.

Challenges of Defining Fairness in AI

  • Machine learning practitioners have developed over 20 different definitions of fairness, highlighting its complexity.
  • Debate over whether AI models should be aware of race to prevent implicit biases or unaware to avoid explicit discrimination.
  • Fair algorithms may not address deeply embedded structural inequalities if they assume equal starting points for all individuals.
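To make the "many definitions of fairness" point concrete, here is a small illustrative sketch (not from the talk) of two widely used group-fairness metrics, demographic parity and equal opportunity, applied to toy data where one is satisfied and the other violated:

```python
# Illustrative sketch: two common fairness definitions applied to toy
# predictions, showing that satisfying one does not satisfy the other.

def demographic_parity_gap(preds, groups):
    """Difference in positive-prediction rates between groups 0 and 1."""
    rate = lambda g: sum(p for p, grp in zip(preds, groups) if grp == g) / groups.count(g)
    return abs(rate(0) - rate(1))

def equal_opportunity_gap(preds, labels, groups):
    """Difference in true-positive rates between groups 0 and 1."""
    def tpr(g):
        pos = [p for p, y, grp in zip(preds, labels, groups) if grp == g and y == 1]
        return sum(pos) / len(pos)
    return abs(tpr(0) - tpr(1))

# Toy data: both groups get positive predictions at the same rate,
# but only group 0's truly qualified individuals are correctly identified.
preds  = [1, 0, 1, 0]
labels = [1, 0, 0, 1]
groups = [0, 0, 1, 1]

print(demographic_parity_gap(preds, groups))         # 0.0 -- parity satisfied
print(equal_opportunity_gap(preds, labels, groups))  # 1.0 -- opportunity violated
```

The tension shown here is the crux of the debate: a model can look fair under one definition while failing badly under another, which is why "fair" is never a single checkbox.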

Understanding Structural Racism

  • Structural racism refers to systemic inequalities embedded within societal institutions and systems.
  • It manifests in interconnected disparities across various domains, such as housing, credit markets, education, and health.
  • These disparities are often less visible and more challenging to address than interpersonal racism.

Case Study: Housing and Credit Markets

  • Homeownership is a primary pathway to wealth accumulation and access to quality education, health care, and social networks.
  • Discriminatory practices in credit markets have historically limited access to homeownership for marginalized groups.
  • AI-driven financial services aiming to address biases may inadvertently introduce data surveillance and privacy concerns.

Interconnected Systems of Inequality

  • Disparities in one system (e.g., credit markets) are linked to disparities in others (e.g., housing, education).
  • Addressing structural racism requires understanding and tackling these interconnected systems holistically.
  • Designing AI models that account for this complexity is a significant computational and ethical challenge.

The Role of Education and Interdisciplinary Collaboration

  • Emphasizes the importance of training both AI developers and users to recognize and mitigate biases.
  • Advocates for interdisciplinary approaches combining technical expertise with social science insights.
  • Highlights initiatives at MIT and UT Austin focused on integrating these perspectives into AI research and education.

Conclusion

Craig Watkins calls for the development of AI systems that not only avoid perpetuating systemic inequalities but actively work to dismantle them. He stresses the need to educate the next generation of AI practitioners and users to make ethical, responsible decisions and to be aware of the societal impact of their work.

Key Quote

Referencing Robert Williams, a man wrongly arrested due to faulty facial recognition software:

"This obviously isn’t me. Why am I here?"

The police responded, "Well, it looks like the computer got it wrong."

This exchange underscores the profound consequences of unchecked AI systems and the urgent need for responsible design and implementation.

Demystifying Ethical AI: Understanding the Jargon and Principles

The Many Flavors of AI: Terms, Jargon, and Definitions You Need to Know


Introduction

The presenter discusses the rapidly evolving landscape of artificial intelligence (AI), particularly focusing on the terminology and jargon associated with ethical AI. Using an ice cream analogy, the presentation aims to help librarians and information professionals understand and keep up with various AI concepts to better assist their patrons, colleagues, and stakeholders.

Importance for Librarians

  • Librarians have a foundational responsibility to understand AI tools and systems.
  • AI is not just filtering information but also creating it, affecting how information is accessed and used.
  • Similar to past technological shifts (e.g., Google, Wikipedia), AI is a bellwether of change in information science.
  • Librarians need to lead the charge in ethical AI usage and education.

The Ice Cream Analogy of Ethical AI

The presenter uses different ice cream flavors to represent various terms related to ethical AI:

1. Ethical AI (Vanilla)

  • Principles and values guiding the development, deployment, and use of AI systems.
  • Focuses on fairness, accountability, and transparency.
  • Ensures AI aligns with societal values and ethical principles.

2. Responsible AI (Chocolate)

  • Actions and practices organizations should take to ensure AI is developed and used responsibly.
  • Includes risk management, stakeholder engagement, and governance.
  • Emphasizes organizational norms and the practical implementation of ethical standards.

3. Transparent AI (Strawberry)

  • AI systems where the inner workings are visible (glass-box vs. black-box systems).
  • Transparency in development processes and usage purposes.
  • Not necessarily explainable; complexity can still hinder understanding.

4. Explainable AI (Pistachio)

  • AI systems whose operations can be understood and explained to users.
  • Not always transparent; proprietary systems may be explainable without revealing inner workings.
  • Important for building trust and accountability.

5. Accessible AI (Peach)

  • AI systems that are usable by a wide range of people, including those with disabilities.
  • Focus on inclusivity in design and implementation.
  • Examples include AI with spoken captions or image descriptions.

6. Open AI (Not to be Confused with OpenAI)

  • AI systems with open-source code, open development environments, and accessible documentation.
  • Emphasizes transparency and community involvement.
  • Being open doesn't necessarily mean being ethical or responsible.

7. Trustworthy AI (Blueberry)

  • AI systems that are reliable and operate as intended.
  • Trustworthiness depends on who is assessing it and for what purpose.
  • Often paired with transparency and explainability but not guaranteed.

8. Consistent AI (Lemon Sherbet)

  • AI systems that operate reliably and produce consistent results.
  • Consistency does not imply trustworthiness or ethical behavior.
  • Consistent AI may consistently exhibit biases or other issues.

Key Takeaways

  • Terminology around AI can be misleading; terms like "transparent," "open," or "trustworthy" are not guarantees of ethical behavior.
  • Librarians should critically evaluate AI systems beyond surface-level labels.
  • Understanding these distinctions helps librarians guide users in the proper use of AI tools.

Recommendations for Staying Informed

The presenter suggests following key figures in the field of ethical AI to stay updated:

  • Timnit Gebru: Former Google researcher specializing in AI ethics.
  • Abhishek Gupta
  • Carey Miller
  • Reid Blackman: Hosts a podcast series on AI ethics and responsibility.
  • Laura Mueller
  • Ryan Carrier
  • Kurt Cagle
  • Norman Mooradian: Professor at San Jose State University with extensive research on ethical AI.

Q&A Highlights

During the question and answer session, the following points were discussed:

1. Importance of AI Literacy

  • Librarians should educate themselves and patrons about AI tools.
  • AI literacy includes understanding the limitations and proper uses of AI.

2. Transparency and Open AI

  • OpenAI's transparency has been questioned; "open" does not always mean fully transparent.
  • Critical evaluation of AI companies and their claims is necessary.

3. Explaining AI to Users

  • AI systems like ChatGPT predict language based on training data, which includes a vast range of internet content.
  • Librarians should guide users on when and how to use AI tools appropriately.
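The "predicting language" idea can be demonstrated with a toy model. The sketch below is an assumption for illustration only (ChatGPT uses a far more sophisticated neural architecture): a simple bigram counter that "predicts" the next word by picking the one that most often followed it in the training text.

```python
# Toy bigram language model: counts which word follows which in training
# text, then predicts the most frequent follower. Illustrative only --
# large language models work on the same predict-the-next-token principle,
# but with vastly more data and a neural network instead of raw counts.
from collections import Counter, defaultdict

def train(text):
    words = text.lower().split()
    following = defaultdict(Counter)
    for cur, nxt in zip(words, words[1:]):
        following[cur][nxt] += 1
    return following

def predict_next(model, word):
    counts = model.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

corpus = "the library is open the library is closed the library is open"
model = train(corpus)
print(predict_next(model, "library"))  # "is"
print(predict_next(model, "is"))       # "open" (seen twice, "closed" once)
```

Even this tiny example shows why such systems can sound fluent while having no notion of truth: the output reflects frequency in the training text, nothing more.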

4. Understanding Algorithms

  • An algorithm is the set of step-by-step instructions, implemented in code, that dictates how an AI system processes data and produces outputs.
  • Algorithms can embed biases based on how they process training data.

5. FAIR AI

  • FAIR stands for Findable, Accessible, Interoperable, and Reusable.
  • Applying FAIR principles to AI and machine learning is an emerging area.

6. Hallucinations in AI

  • AI hallucinations occur when AI systems generate incorrect or fabricated information.
  • Important for librarians to educate users about verifying AI-generated content.

Conclusion

The presenter emphasizes that AI is a tool—neither inherently good nor bad—and it's crucial for librarians to stay informed and lead in ethical AI practices. By understanding the nuances of AI terminology and concepts, librarians can better assist users and influence responsible AI development and use.

Revolutionizing Library UX: Using AI to Enhance Website Usability

Improving Library Website Usability with AI

Presented by Elisa Saphier, Librarian at Central Connecticut State University (CCSU)



Introduction

Elisa discusses how librarians can leverage artificial intelligence (AI) to enhance the usability of library websites. She shares her personal experiments and insights using AI tools, particularly generative AI models like ChatGPT and Google's Gemini, to support various aspects of website usability and user experience (UX) design.

Context and Motivation

  • Elisa has extensive experience as a technologist, systems librarian, and web librarian.
  • She is co-teaching an introductory course on research with AI, focusing on information literacy.
  • Her goal is to gain practical experience with AI to understand its capabilities and limitations in improving library website usability.

Challenges with AI

  • Lack of substantial literature on using AI for library website usability improvements.
  • Common issues with AI include biases, hallucinations, ethical concerns, intellectual property rights, privacy, and environmental impacts.
  • Emphasizes the importance of a "trust but verify" approach when using AI tools.

Applications of AI in Library Website Usability

AI Chatbots

  • Discussed the potential and challenges of integrating AI-powered chatbots in libraries.
  • Noted that chatbots have been considered in libraries for years but require careful implementation.
  • Encouraged sharing experiences with AI chatbots like Springshare's LibChat or Google's Dialogflow.

Data Collection and Analysis

  • Stressed the need for collecting user data through surveys, interviews, and usage statistics to inform website improvements.
  • Mentioned the System Usability Scale (SUS) as a tool for evaluating user reactions to websites.
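For readers unfamiliar with the SUS, its scoring follows a standard published formula (Brooke, 1996): ten questionnaire items answered on a 1-5 scale are converted to a single 0-100 usability score. A minimal implementation:

```python
# Standard SUS scoring: odd-numbered items are positively worded
# (contribution = response - 1), even-numbered items negatively worded
# (contribution = 5 - response); the sum is scaled by 2.5 to give 0-100.

def sus_score(responses):
    """responses: list of ten integers (1-5) in questionnaire order."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten responses on a 1-5 scale")
    total = sum(r - 1 if i % 2 == 0 else 5 - r  # items 1,3,5,... vs 2,4,6,...
                for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([3] * 10))                        # 50.0 (all-neutral answers)
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0 (best possible)
```

A score of 50 corresponds to uniformly neutral answers; scores above roughly 68 are generally interpreted as above-average usability.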

User Personas and Stories

  • Used ChatGPT to generate user personas for CCSU's library website redesign.
  • Identified biases and stereotypes in AI-generated personas, such as lack of diversity and reinforcing stereotypes.
  • Highlighted the importance of involving community members to ensure accurate and respectful representations.

Customer/User Journey Mapping

  • Explored how AI can assist in creating user journey maps to understand user interactions with the library website.
  • Used AI to identify phases where users might disengage and to develop strategies to enhance user engagement.

Usability Testing

  • Suggested using AI to generate sample tasks for usability testing of the library website.
  • Referenced a compiled Google Sheet of usability tasks used by various libraries as a resource.

Analyzing User Feedback

  • Employed tools like OpenAI's Whisper to transcribe and analyze audio and video feedback from users.
  • Used AI to summarize key points and extract actionable insights from user feedback.

Improving Navigation and Information Architecture

  • Attempted to use AI for creating site maps and evaluating the website's navigation structure.
  • Faced challenges with AI not providing accurate or high-quality outputs when analyzing their specific website.
  • Described difficulties in using AI to parse HTML code for card sorting exercises, encountering limitations in AI's understanding.

Design Inspiration

  • Used AI to identify exemplary academic library websites (e.g., MIT, Stanford, Michigan, Harvard, Oxford) for inspiration.
  • Considered analyzing these websites' navigation and terminology to adopt best practices.

Code Improvements

  • Utilized AI to improve website code, such as replacing "click here" links with more accessible and descriptive text.
  • Faced challenges with AI in generating code that met specific requirements, requiring multiple iterations and clarifications.
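As a hedged sketch of the kind of fix described above (the phrase list and class are illustrative, not taken from the presentation), the following Python-standard-library script flags links whose text is non-descriptive, such as "click here," so they can be rewritten with meaningful labels:

```python
# Flags anchor tags with vague link text so they can be given descriptive
# labels -- a common accessibility fix (see WCAG 2.4.4, "Link Purpose").
# Uses only the standard library's HTML parser.
from html.parser import HTMLParser

VAGUE = {"click here", "here", "read more", "more", "link"}

class LinkChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_link, self.text, self.href = False, "", ""
        self.flagged = []  # (href, link text) pairs needing better wording

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link, self.text = True, ""
            self.href = dict(attrs).get("href", "")

    def handle_data(self, data):
        if self.in_link:
            self.text += data

    def handle_endtag(self, tag):
        if tag == "a" and self.in_link:
            if self.text.strip().lower() in VAGUE:
                self.flagged.append((self.href, self.text.strip()))
            self.in_link = False

html = '<p><a href="/hours">Click here</a> for hours; see our <a href="/faq">FAQ</a>.</p>'
checker = LinkChecker()
checker.feed(html)
print(checker.flagged)  # [('/hours', 'Click here')]
```

A report like this gives a concrete worklist for replacing vague links with descriptive text ("View library hours"), which is exactly the sort of iteration-heavy task Elisa describes.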

Usage Data Analysis

  • Explored using AI to define user conversion funnels and metrics in Google Analytics.
  • Aimed to understand user paths, engagement levels, and points where users drop off.

Reflections on AI

  • Noted that AI can provide generic or inaccurate suggestions not tailored to specific contexts.
  • Described AI as "weird" due to its unpredictable behavior and occasional misalignment with user intentions.
  • Emphasized the necessity for librarians to engage with AI critically, given its growing influence on the information ecosystem.

Conclusion

Elisa invites fellow librarians and colleagues to share their experiences and collaborate in exploring AI's potential in enhancing library services. She underscores the importance of continuous learning and adaptation in the rapidly evolving landscape of AI technologies.