Translate

Search This Blog

Wednesday, November 27, 2024

The Ethics of AI: Navigating the Three Cs of Generative AI

Closing Keynote: The Three Cs of Generative AI in Libraries

Presented by Reed Hepler at the AI and the Libraries 2 Mini Conference



In the closing keynote of the "AI and the Libraries 2 Mini Conference: More Applications, Implications, and Possibilities," Reed Hepler, Digital Initiatives Librarian and Archivist at the College of Southern Idaho, shared valuable insights on the use of generative AI in educational and library settings. With experience spanning educational formats, library environments, and business training, Hepler delved into the ethical considerations and best practices surrounding generative AI tools.

Introduction

Hepler began by acknowledging the diverse perspectives educators and administrators hold regarding generative AI. He identified four primary viewpoints observed at his institution:

  1. Fear that student use of ChatGPT and similar tools creates new forms of unethical practices.
  2. Confidence that students wish to use ChatGPT effectively and constructively.
  3. Concern that generative AI undermines established systems and norms of online learning.
  4. Belief that ChatGPT can lead to innovative products and workflows enhancing instructional design and assessment.

Recognizing the need to address these concerns, Hepler introduced a framework he developed to guide ethical and effective use of generative AI: the "Three Cs."

The Three Cs of Generative AI

1. Copyright

Key Question: Who owns the rights to AI-generated products, and how are they created?

Hepler discussed the complexities of copyright in the context of generative AI, posing three critical questions:

  • What are the rights and responsibilities of the original creators whose works are used by AI?
  • What are the rights and responsibilities of users who employ AI tools?
  • Is generative AI an owner, a user, both, or neither in terms of copyright?

He clarified that copyright protects the expression of ideas in any medium and grants exclusive rights to the creator or copyright holder. However, devices, processes, ideas, public domain materials, works by government employees, and recipes cannot be copyrighted.

Hepler emphasized that the current copyright law requires human authorship for protection, raising questions about whether AI can be considered an author. He also highlighted the ongoing debates and legal challenges surrounding the fair use doctrine as it pertains to AI training on copyrighted materials.

He cited examples of copyright battles involving AI-generated works, such as "Zarya of the Dawn," and discussed the implications of using copyrighted content in AI prompts. He stressed the importance of respecting intellectual property rights and advised users to avoid inputting copyrighted material into AI tools unless they own the rights.

2. Citation

Key Question: How should AI tools and outputs be cited, and where did the information originate?

Noting the absence of standardized citation formats for AI-generated content, Hepler emphasized that the purpose of citation is to provide information about sources. He recommended including the following elements in any AI citation:

  • Tool name and version
  • Date and time of usage
  • Prompt, query, or conversation title
  • Name of the person who queried the AI
  • Link to the conversation or output, if possible

He provided an example of how to cite AI-generated content in APA style, suggesting that users include their name to acknowledge their role in the creation process. He stressed that users should engage in the editing and revision of AI outputs to ensure originality and accuracy.

3. Circumspection

Key Question: What hazards—moral, ethical, educational, or otherwise—should users manage when utilizing generative AI tools?

Hepler outlined several ethical issues associated with AI outputs, including:

  • Plagiarism
  • Biases
  • Repetitiveness and arbitrariness
  • Incorrect or misleading information
  • Lack of connection to external resources

He discussed privacy concerns, highlighting how AI tools can extrapolate personal data from user inputs, even when users attempt to minimize the information they provide. He emphasized that users should never input sensitive or confidential information into AI prompts.

Hepler recommended several practices to mitigate these risks:

  • Informing users about data collection and its purposes
  • Obtaining explicit consent for data usage
  • Limiting data collection to essential information (data minimization)
  • Implementing strict access and use controls
  • Anonymizing data in prompts

He also discussed the importance of quality control when using AI-generated content, advising users to:

  • Use AI tools for their intended purposes
  • Engage in best practices for prompting
  • Ask the AI for its sources and verify them
  • Find external resources to support AI-generated information
  • Analyze outputs for ethical issues, accessibility, and accuracy

Privacy and Ethical Considerations

Hepler delved deeper into privacy harms associated with AI, referencing works by legal scholars such as Danielle Keats Citron and Daniel J. Solove. He noted that privacy laws often require proof of harm, which can be difficult when dealing with intangible injuries like anxiety or frustration resulting from data breaches or misuse.

He highlighted that AI tools like ChatGPT have specific terms of use that assign users ownership of the outputs generated from their inputs. However, users are responsible for ensuring that their content does not violate any applicable laws.

Hepler stressed that despite best efforts, AI tools can still extrapolate personal data, underscoring the importance of being cautious with the information provided to these systems.

Conclusion and Recommendations

Concluding his keynote, Hepler provided a list of references and resources for further exploration of the topics discussed. He reiterated the need for libraries and educators to navigate the evolving landscape of generative AI thoughtfully, balancing innovation with ethical considerations.

He encouraged attendees to remain informed about developments in AI and copyright law, to respect intellectual property rights, and to engage in responsible use of AI tools. By adhering to the "Three Cs" framework—Copyright, Citation, and Circumspection—users can harness the benefits of generative AI while mitigating potential risks.

Final Thoughts

Hepler's presentation offered a comprehensive overview of the challenges and responsibilities associated with generative AI in libraries and education. His insights serve as a valuable guide for professionals seeking to integrate AI tools into their work ethically and effectively.

Note: This summary is based on the closing keynote delivered by Reed Hepler at the AI and the Libraries 2 Mini Conference.

The Real-World Harms of AI in Healthcare: A Closer Look

Ethical Considerations for Generative AI Now and in the Future

Presented by Dr. Kellie Owens, Assistant Professor in the Division of Medical Ethics at NYU Grossman School of Medicine



Dr. Kellie Owens delivered an insightful presentation on the ethical considerations surrounding generative AI, particularly relevant to medical librarians and professionals involved in data services. As a medical sociologist and empirical bioethicist, Dr. Owens focuses on the social and ethical implications of health information technologies, including the infrastructure required to support artificial intelligence (AI) and machine learning in healthcare.

Introduction

Dr. Owens began by situating herself within the broader discourse on AI ethics, acknowledging the prevalent narratives of both awe and panic that often dominate news coverage. She highlighted a split within the field between AI safety—which focuses on existential risks and future catastrophic events—and AI ethics, which concentrates on addressing current, tangible ethical concerns associated with AI technologies.

Referencing the "Pause Letter" signed by prominent figures like Yoshua Bengio and Elon Musk, which called for a six-month halt on training AI systems more powerful than GPT-4, Dr. Owens expressed skepticism about such approaches. She argued that while managing existential risks is important, it is crucial to focus on the real and already manifesting ethical issues that AI poses today.

Real-World Harms of AI in Healthcare

Dr. Owens provided examples of harms caused by AI tools in healthcare, emphasizing that these issues are not hypothetical but are currently affecting patients and providers. She cited instances where algorithms reduced the number of Black patients eligible for high-risk care management programs by more than half and highlighted biases in medical uses of large language models like GPT, which can offer different medical advice based on a patient's race, insurance status, or other demographic factors.

Framework for Ethical Considerations

Building her talk around the five key themes from the Biden administration's Office of Science and Technology Policy's "Blueprint for an AI Bill of Rights," Dr. Owens discussed:

  1. Safe and Effective Systems
  2. Algorithmic Discrimination Protections
  3. Data Privacy and Security
  4. Notice and Explanation
  5. Human Alternatives, Consideration, and Fallback

1. Safe and Effective Systems

Emphasizing the principle of "First, do no harm," Dr. Owens discussed the ethical imperative to ensure that AI tools are both safe and effective. She addressed the issue of AI hallucinations, where large language models generate false or misleading information that appears credible. In healthcare, such errors can have significant consequences.

She also touched on the problem of dataset shift, where AI models decline in performance over time due to changes in technology, populations, or behaviors. Dr. Owens highlighted the need for continuous monitoring and updating of AI systems to maintain their reliability and accuracy.

2. Algorithmic Discrimination Protections

Dr. Owens delved into the ethical concerns related to algorithmic bias and discrimination. She cited studies like "Gender Shades," which revealed that facial recognition technologies performed poorly on women, particularly women with darker skin tones. In the context of generative AI, she discussed how image generation tools can perpetuate stereotypes, such as depicting authoritative roles predominantly as men.

She highlighted instances where AI models like GPT-4 produced clinical vignettes that stereotyped demographic presentations, calling for comprehensive and transparent bias assessments in AI tools used in healthcare.

3. Data Privacy and Security

Addressing data privacy concerns, Dr. Owens discussed vulnerabilities like prompt injection attacks, where attackers manipulate AI models to reveal sensitive training data, including personal information. She emphasized the importance of protecting users from abusive data practices and ensuring that individuals have agency over how their data is used.

She also raised concerns about plagiarism and intellectual property violations, noting that generative AI models can reproduce copyrighted material without attribution, leading to potential legal and ethical issues.

4. Notice and Explanation

Dr. Owens stressed the importance of transparency and autonomy, arguing that users should be informed when they are interacting with AI systems and understand how these systems might affect them. She cited the example of a mental health tech company that used AI-generated responses without informing users, highlighting the ethical implications of such practices.

5. Human Alternatives, Consideration, and Fallback

Finally, Dr. Owens emphasized the necessity of providing human alternatives and the ability for users to opt out of AI systems. She underscored that while AI can offer efficiency, organizations must be prepared to address failures and invest resources to support those affected by them.

Key Takeaways

Dr. Owens concluded with several key insights:

  • Technology is Not Neutral: AI systems are socio-technical constructs influenced by human decisions, goals, and biases. Recognizing this is essential in addressing ethical considerations.
  • Benefits and Costs: It is crucial to weigh both the advantages and potential harms of AI applications, including issues like misinformation, environmental impact, and the perpetuation of biases.
  • What's Missing Matters: Considering the gaps in AI training data and the politics of what's excluded can provide valuable ethical insights.
  • Power Dynamics: Evaluating how AI shifts power structures is important. AI applications should aim to empower marginalized communities rather than exacerbate existing inequalities.

Conclusion

Dr. Owens encouraged ongoing dialogue and critical examination of generative AI's ethical implications. She highlighted the role of professionals like medical librarians in shaping how AI is integrated into systems, emphasizing the need for intentional design, transparency, and a focus on equitable outcomes.

For those interested in further exploration, she recommended reviewing the "Blueprint for an AI Bill of Rights" and engaging with interdisciplinary approaches to AI ethics.

Note: This summary is based on a presentation by Dr. Kellie Owens on the ethical considerations of generative AI, particularly in the context of healthcare and data services.

Navigating the Intersection of AI and Copyright Law in Australia

AI and Copyright Law in Australia: Exploring Options and Challenges

Presentation by an expert on the intersection of AI and Australian copyright law.



Introduction

The speaker delves into the complexities of how Australian copyright law intersects with artificial intelligence (AI), particularly generative AI. The focus is on exploring practical options for Australia to balance AI innovation with the protection of human creators in the creative industries.

Key Premises

  1. Australian Copyright Law is Unique: Australia's legal framework differs significantly from other jurisdictions, impacting how AI and copyright issues are addressed.
  2. Room for Debate: There's flexibility in how international copyright principles apply to AI, allowing Australia to make deliberate choices about its legal stance.
  3. Desirable End State: The goal is to achieve both AI innovation and deployment in Australia, alongside thriving human creators and creative industries.
  4. Practical Realities Matter: Any legal approach must consider Australia's position in the global landscape and the types of AI activities likely to occur within the country.

Generative AI in Australia

The speaker emphasizes that generative AI isn't limited to global platforms like ChatGPT or Midjourney but also includes local applications such as government chatbots and educational tools. These smaller models, often built on larger ones, are integral to various sectors in Australia, including government services and businesses.

Five Options for Addressing AI and Copyright

  1. Strict Copyright Rules (Status Quo):
    • Maintains the current strong interpretation of copyright law.
    • Results in widespread potential infringement by businesses and government entities using AI.
    • Does not lead to compensation for creators due to training occurring overseas or behind closed doors.
    • Considered a "lose-lose" scenario with a chilling effect on AI development and deployment in Australia.
  2. Classic Common Law Compromise:
    • Attempts to balance interests through complex rules and conditional exceptions.
    • Could lead to a prolonged and complicated legal process with little practical benefit.
    • Risks stalling AI innovation due to legal uncertainties.
  3. Equitable Remuneration for Creators:
    • Proposes a remunerated copyright limitation for human creators whose works are used in AI training.
    • Involves collective management organizations and statutory licensing.
    • Faces challenges in valuation, distribution, and practical implementation.
  4. Lump Sum Levy on AI Systems:
    • Suggests imposing a levy on AI systems capable of producing literary and artistic works.
    • Aims to compensate creators for potential substitution effects (displacement of human labor).
    • Not strictly a copyright issue but more akin to models like the News Media Bargaining Code.
  5. Focus on Economic Loss and Market Effects:
    • Allows AI training on copyrighted data but permits rights holders to claim compensation if they can demonstrate economic loss.
    • Acknowledges the difficulty in proving loss and valuing it appropriately.
    • Highlights the complexity of linking copyright infringement to market harm in the AI context.

Challenges and Considerations

The speaker notes that many proposed solutions have significant drawbacks, particularly in terms of practicality and potential negative impact on AI innovation in Australia. Attempts to create a balanced compromise may result in prolonged legal battles and complex regulations that fail to satisfy any stakeholders fully.

Recommended Path Forward

The speaker suggests a pragmatic approach:

  • Address Mundane but Impactful Issues: Focus on areas where immediate improvements can be made, such as text and data mining exceptions, especially for sectors outside the core creative industries.
  • Reform Liability at the Deployment Stage: Modify laws to ensure that Australian firms using AI, particularly those adopting reasonable copyright safety measures, are not unduly liable for potential infringements.
  • Consider Non-Copyright Solutions for Creator Compensation: Explore mechanisms outside of copyright law, such as levies or funds, to address the displacement effects on human creators.
  • Implement Technical Copying Exceptions: Introduce exceptions that allow for necessary technical copying during AI training and deployment without infringing copyright.

Conclusion

The speaker concludes that while the intersection of AI and copyright law presents complex challenges, a practical and focused approach can help Australia navigate these issues effectively. By addressing specific areas where legal adjustments can facilitate AI innovation while minimizing harm to creators, Australia can work towards a more balanced and forward-looking legal framework.

Questions and Discussion

The presentation ends with an invitation for questions and further discussion on the topic, emphasizing the need for ongoing dialogue to refine and implement effective solutions.

Note: This summary is based on a presentation discussing the challenges and options for addressing AI and copyright law in Australia.