AI Is Dangerous, but Not for the Reasons You Think | Sasha Luccioni | TED

TED
6 Nov 2023 · 10:19

TLDR

An AI researcher discusses the current, tangible impacts of AI, such as its contribution to climate change, the unauthorized use of art and literature in training data, and the biases embedded in AI models. They highlight the importance of transparency and ethical considerations, and the development of tools like CodeCarbon and the Stable Bias Explorer to measure and mitigate these impacts. The speaker emphasizes that while AI's future risks are a concern, immediate action on current issues is crucial for shaping a sustainable and equitable AI future.

Takeaways

  • 🤖 AI's societal impact is significant and not just a future concern; it's affecting people and the planet now.
  • 🌍 AI models contribute to climate change through their energy-intensive training and deployment processes.
  • 🔍 Training large language models like Bloom and GPT-3 carries a substantial carbon footprint that tech companies often leave unaccounted for.
  • 🌐 The trend of 'bigger is better' in AI leads to larger models with higher environmental costs.
  • 🛠️ Tools like CodeCarbon can help estimate the energy consumption and carbon emissions of AI models, aiding in more sustainable choices.
  • 🎨 AI's use of art and literature in training data raises ethical and legal issues regarding consent and copyright infringement.
  • 🔍 'Have I Been Trained?' is a tool that lets creators check whether their work was included in AI training data without their permission.
  • 🚨 AI models can encode and perpetuate biases, leading to unfair outcomes in various applications, including law enforcement.
  • 🔍 The Stable Bias Explorer is a tool that helps visualize and understand the biases in image generation models.
  • 🌐 AI's integration into everyday life means it's crucial to ensure accessibility, understanding, and trustworthiness of these systems.
  • 📈 By measuring and addressing AI's current impacts, we can create guardrails to protect society and the planet, rather than just focusing on potential future risks.

Q & A

  • What was the main concern expressed by the stranger in the email to the AI researcher?

    -The stranger claimed that the AI researcher's work in AI would end humanity.

  • How does the AI researcher view the current focus on AI's potential future risks?

    -The researcher believes that focusing on future existential risks is a distraction from the current, tangible impacts of AI and the urgent work needed to address those impacts.

  • What are some of the immediate negative impacts of AI mentioned in the script?

    -The immediate negative impacts include AI models contributing to climate change, training data being used without creators' consent, and deployed systems that can discriminate against communities.

  • What was the initiative that the AI researcher was part of last year?

    -The researcher was part of the BigScience initiative, which aimed to create Bloom, an open large language model with an emphasis on ethics, transparency, and consent.

  • How much energy and carbon dioxide was used in training Bloom, according to the study led by the researcher?

    -Training Bloom used as much energy as 30 homes consume in an entire year and emitted 25 tons of carbon dioxide.

  • What tool did the researcher help create to estimate AI training's energy consumption and carbon emissions?

    -The researcher helped create CodeCarbon, a tool that runs in parallel to AI training code to estimate energy consumption and carbon emissions.

  • What is the purpose of the 'Have I Been Trained?' tool created by Spawning.ai?

    -The 'Have I Been Trained?' tool allows users to search massive datasets to see if their work, such as images or text, has been used for training AI models without their consent.

  • How does the Stable Bias Explorer tool work?

    -The Stable Bias Explorer lets users explore the bias of image generation models by examining the representation of different professions and demographics.

  • What is the significance of the AI researcher's work in creating tools for understanding AI?

    -The researcher's work in creating tools helps make AI more accessible and understandable, allowing for better-informed choices, the development of regulations, and the creation of guardrails to protect society and the planet.

  • What was the AI researcher's response to the email about AI destroying humanity?

    -The researcher responded by emphasizing the importance of addressing AI's current impacts and the work that needs to be done to reduce those impacts, rather than focusing on potential future risks.

Outlines

00:00

🤖 AI's Societal Impacts and Sustainability

The speaker, an AI researcher, discusses the societal impacts of AI, including its contribution to climate change, the use of art and literature without consent, and the potential for discrimination. They emphasize the need for transparency and tools to understand AI better. The speaker's involvement in the BigScience initiative and the creation of Bloom, an open large language model built with an emphasis on ethics and transparency, is highlighted, along with the environmental costs of training such models. The speaker introduces CodeCarbon, a tool for estimating energy consumption and carbon emissions during AI training.

05:01

🎨 Artistic and Authorial Rights in AI Training

The speaker addresses the challenges faced by artists and authors in proving the unauthorized use of their work in AI training. They mention the tool 'Have I Been Trained?' by Spawning.ai, which allows individuals to search for their content in AI datasets. The speaker's personal experience with this tool and its implications for copyright infringement lawsuits is shared. The partnership between Spawning.ai and Hugging Face to create opt-in and opt-out mechanisms for data collection is also discussed.
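
Spawning's tool itself is a hosted search interface, but the underlying idea can be sketched in a few lines: stream a web-scale captions dataset and check each record's metadata for a creator's name or site. Everything below (the dataset ID, field names, and the artist searched for) is a hypothetical stand-in, not Spawning.ai's implementation, and the real tool uses far more sophisticated matching.

    # Rough sketch of the idea behind "Have I Been Trained?": scan a
    # web-scale image-caption dataset's metadata for a creator's name
    # or site. Dataset ID and field names are hypothetical.
    from datasets import load_dataset

    MY_NAME = "Jane Doe"             # hypothetical artist to search for
    MY_SITE = "janedoe-art.example"  # hypothetical portfolio domain

    # Streaming iterates over records without downloading the whole dataset.
    ds = load_dataset("some-org/web-scale-captions",  # hypothetical ID
                      split="train", streaming=True)

    hits = []
    for i, row in enumerate(ds):
        caption = (row.get("caption") or "").lower()
        url = row.get("url") or ""
        if MY_NAME.lower() in caption or MY_SITE in url:
            hits.append((url, caption))
        if i >= 1_000_000:  # cap the scan for this sketch
            break

    print(f"{len(hits)} possible matches found")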

10:03

🚨 Addressing AI Bias and Misrepresentation

The speaker delves into the issue of AI bias, explaining how AI models can encode stereotypes and lead to discrimination. They share the story of Dr. Joy Buolamwini's experience with facial recognition systems and the resulting implications for law enforcement. The speaker introduces the Stable Bias Explorer, a tool for examining the biases in image generation models. The representation of professions in AI models is highlighted, showing a significant bias towards whiteness and masculinity. The speaker calls for the use of tools to understand and mitigate these biases.

🛣️ Building AI's Future Responsibly

The speaker concludes by emphasizing that AI's rapid development is not set in stone and that society can collectively shape its direction. They stress the importance of focusing on current tangible impacts of AI rather than potential future risks. The speaker's response to the email about AI's potential to destroy humanity is shared, advocating for immediate action to reduce AI's negative impacts.

Keywords

💡AI Researcher

An individual who studies and develops artificial intelligence systems. In the context of the video, the speaker is an AI researcher who focuses on the societal impacts of AI, emphasizing the importance of understanding and mitigating the current effects of AI rather than solely focusing on potential future risks.

💡Existential Risk

A risk that threatens the entire future of humanity, such as AI becoming uncontrollable. The video discusses how discussions about existential risks can distract from immediate issues related to AI, like environmental impact and bias.

💡Sustainability

The ability to maintain conditions and processes over time without depleting resources or causing environmental damage. The speaker highlights the carbon footprint of AI models, emphasizing the need for sustainable AI practices to reduce energy consumption and emissions.

💡Bloom

An open large language model developed with an emphasis on ethics, transparency, and consent. It serves as an example in the video of how AI can be designed with consideration for its societal and environmental impacts.

💡CodeCarbon

A tool created to estimate the energy consumption and carbon emissions of AI training processes. It exemplifies the speaker's point about the need for transparency and tools to measure and mitigate the environmental impact of AI.
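
As a minimal sketch of how such a tool fits into a training workflow, assuming the open-source codecarbon Python package, the tracker below brackets a placeholder loop; a real run would put the model's forward and backward passes inside it.

    # Minimal sketch: wrapping a training run with CodeCarbon's tracker.
    # The loop is a placeholder; only the tracker calls reflect the
    # library's actual API.
    from codecarbon import EmissionsTracker

    tracker = EmissionsTracker(project_name="demo-training-run")
    tracker.start()  # begins sampling power draw in the background
    try:
        for step in range(1000):  # stand-in for forward/backward passes
            pass
    finally:
        emissions_kg = tracker.stop()  # estimated kilograms of CO2-equivalent

    print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")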

💡Copyright Infringement

The unauthorized use of copyrighted material. The video discusses how AI training can involve the use of art and literature without the creators' consent, leading to copyright infringement lawsuits.

💡Bias

In AI, bias refers to the encoding of stereotypes or prejudices into AI models, leading to unfair or discriminatory outcomes. The speaker addresses the issue of bias in AI, particularly in facial recognition systems, and its societal consequences.

💡Stable Bias Explorer

A tool created by the speaker to explore the biases in image generation models, particularly in terms of professions. It illustrates the pervasive representation of whiteness and masculinity in AI models and serves as an example of how tools can help understand and address AI bias.
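
The Explorer itself is a hosted interface, but the probing idea behind it can be sketched with the open-source diffusers library: generate several images from a neutrally worded profession prompt and inspect who the model depicts. The model ID, professions, and prompt wording below are illustrative assumptions, not the speaker's actual implementation.

    # Illustrative probe of profession bias in an image-generation model.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # illustrative model choice
        torch_dtype=torch.float16,
    ).to("cuda")

    for job in ["CEO", "nurse", "scientist", "janitor"]:
        # Deliberately neutral wording: any demographic skew in the
        # output comes from the model, not the prompt.
        images = pipe(f"a photo of a {job}", num_images_per_prompt=4).images
        for i, img in enumerate(images):
            img.save(f"{job}_{i}.png")  # review the saved images for skew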

💡Transparency

Openness and communication about the processes and impacts of AI. The speaker advocates for transparency in AI development and deployment to ensure that the technology is trustworthy and respects ethical considerations.

💡Guardrails

Measures or rules designed to prevent certain outcomes or behaviors. In the context of AI, guardrails refer to the creation of regulations and governance mechanisms to protect society and the environment from the negative impacts of AI.

Highlights

AI researcher received an email claiming their work could end humanity.

AI is frequently in the headlines, sometimes for positive reasons like medical discoveries, but also for negative incidents like chatbots giving harmful advice.

There's a lot of discussion about AI's potential doomsday scenarios, but the speaker focuses on current issues AI is causing.

AI doesn't exist in a vacuum; it impacts society and the planet, contributing to climate change and potentially discriminating against communities.

AI models' training data can use art and literature without the creators' consent.

The speaker's research found that training AI models like Bloom has a significant environmental cost, comparable to the energy 30 homes consume in an entire year.

GPT-3's training emitted 20 times more carbon than Bloom's, roughly 25 × 20 = 500 tons of CO2.

AI models have been growing in size, leading to increased environmental costs.

The speaker helped create CodeCarbon, a tool to estimate AI training's energy consumption and carbon emissions.

Spawning.ai's 'Have I Been Trained?' tool allows creators to search AI datasets for their work used without consent.

Artists are using this tool to file lawsuits against AI companies for copyright infringement.

AI models can encode and perpetuate stereotypes, leading to biased outcomes.

The speaker created the Stable Bias Explorer to understand and visualize the biases in image generation models.

AI is being integrated into society's fabric, including justice systems and economies, making its accessibility and understanding crucial.

Tools to measure AI's impact can help create guardrails to protect society and the planet.

The speaker's response to the email was to focus on current tangible impacts and the work that can be done to reduce them.

AI's development is ongoing, and society can collectively decide its direction.