World’s Most Extensive AI Rules Approved in EU

Bloomberg Technology
13 Mar 2024 · 03:44

TLDR: The EU's new AI legislation has sparked both celebration and concern, particularly among tech companies and startups who fear overregulation could hinder their competitiveness with US counterparts. The law takes a risk-based approach: the worst applications, such as emotion recognition in schools and workplaces and social scoring, are banned outright, while companies must prove that high-risk AI systems are safe and compliant with the regulations. A new AI office in Brussels will act as an enforcer, with the power to demand information and ban non-compliant applications. Despite initial resistance, some companies are now partnering with larger players and signing voluntary commitments to demonstrate their seriousness.

Takeaways

  • 🎉 The European Parliament's sign-off on the AI Act in Brussels is a significant event, marking a new era of AI regulation in the EU.
  • ⏳ Tech companies, including European startups, have expressed concerns about overregulation, fearing it may hinder their ability to compete with US and Chinese counterparts.
  • 🚫 The EU's approach is risk-based, focusing on the use of AI rather than the technology itself, banning the worst-case scenarios like emotion recognition in schools or workplaces, and social scoring systems.
  • 🛑 High-risk AI applications, such as those used in migration or job applications, will require companies to perform additional checks to prove safety to regulators.
  • 🔍 A new AI office in Brussels will act similarly to a police force, with the power to request detailed information from companies and potentially ban applications that do not comply.
  • 🤝 Mistral's partnership with Microsoft highlights the complex relationship between tech companies and regulation, as companies both call for rules and lobby against strict controls.
  • 🌐 The EU's legislation could influence how other parts of the world adopt AI regulations, as seen with tech companies globally advocating for regulatory frameworks.
  • 🤔 The dichotomy between lobbying against strict controls and later partnering with major tech companies like Microsoft has left some lawmakers with a sense of distrust towards the tech industry.
  • 📝 Some tech companies are adopting a proactive approach by signing voluntary commitments to demonstrate their commitment to responsible AI development and use.
  • 💡 The discussion around AI regulation reveals a potential self-serving aspect of tech companies' interests, which may not always align with the broader societal good.

Q & A

  • What is the main concern of tech companies regarding the new act?

    -Tech companies are primarily concerned about overregulation, which they fear might put the European continent behind its U.S. and other international counterparts.

  • What do European tech companies and startups worry about in terms of regulation?

    -European tech companies and startups worry that excessive regulation could hinder their ability to compete with U.S. hyperscalers, as it may limit their operational flexibility and innovation capacity.

  • What is Mistral's main activity?

    -Mistral is engaged in building large language models and is based in Europe.

  • How does the EU's approach to AI regulation focus on risk?

    -The EU's approach is risk-based, focusing on the use of technology rather than the technology itself, aiming to ban the worst possible uses of AI, such as emotion recognition in workplaces or social scoring.

  • What are some high-risk AI applications that are restricted under the new act?

    -High-risk AI applications include those used in migration and job-application screening; the worst uses, such as emotion recognition in workplaces or schools and social scoring systems that rate citizens based on their behavior, are banned outright.

  • What additional checks might companies like OpenAI or Mistral have to perform?

    -Companies like OpenAI or Mistral may need to perform more checks to prove to regulators that their AI systems are safe and comply with the new regulations.

  • What are the exceptions that companies lobbied against in the new act?

    -Companies lobbied against additional controls on general-purpose and generative AI, which are not tied to a specific use of AI but regulate the technology itself.

  • What will companies have to prove to regulators regarding their AI systems?

    -Companies will have to disclose their AI systems' energy consumption and demonstrate compliance with copyright law to regulators.

  • What is the role of the new AI office being set up by the EU in Brussels?

    -The new AI office in Brussels will operate almost like a police force, able to request more data on how companies train their large language models and potentially ban an application that fails to comply.

  • How has the partnership between Mistral and Microsoft affected EU lawmakers' perception?

    -The partnership has left a bad taste in the mouths of many lawmakers, as it appears to contradict Mistral's earlier lobbying against stricter controls.

  • What is the significance of tech companies pushing for regulation while lobbying against strict controls?

    -It signifies that while tech companies recognize the need for some level of regulation, they are also keen on ensuring that the regulations do not overly restrict their operations and global competitiveness, especially against U.S. companies.

  • How are some companies demonstrating their commitment to serious AI regulation?

    -Some companies are signing on for voluntary commitments, trying to prove to governments that they are taking the issue of AI regulation seriously and are willing to make necessary commitments.

Outlines

00:00

📜 EU Legislation and Tech Company Concerns

The video discusses the recent EU AI legislation and its impact on tech companies. It highlights the concerns of companies, especially those based in Europe, about overregulation and the potential disadvantage relative to their U.S. and other global counterparts. The conversation includes the perspective of Mistral, a large language model developer based in Europe, which now faces the challenge of complying with the new act. The EU's risk-based approach to AI regulation is explained, focusing on banning the worst uses of AI, such as emotion recognition in schools or workplaces and social scoring. High-risk AI applications, like those used in migration or job-application screening, require companies to perform additional checks to prove safety to regulators. The segment also touches on tech companies' lobbying against strict controls and the potential hypocrisy of partnering with major corporations like Microsoft after advocating for less regulation.

Keywords

💡Lobbying

Lobbying refers to the act of attempting to influence decisions made by government officials, often by providing them with information or arguments. In the context of the video, it is mentioned that lawmakers have been heavily lobbied regarding a particular act, indicating that various interest groups have been actively trying to sway their decisions.

💡Overregulation

Overregulation is the imposition of excessive or overly restrictive regulations on a particular industry or activity. In the video, tech companies express their concern that the European continent might fall behind its U.S. and Asian counterparts due to stricter regulations on AI, which could stifle innovation and competitiveness.

💡Risk-based approach

A risk-based approach focuses on identifying, assessing, and addressing risks in a manner proportional to their potential impact. In the context of the video, the EU's regulatory approach towards AI is described as risk-based, meaning that it targets the most harmful uses of AI technology rather than the technology itself.

💡High-risk AI systems

High-risk AI systems are those that have the potential to cause significant harm to individuals or society due to their application or the context in which they are used. The video discusses how companies like Mistral, which are developing large language models, will need to perform additional checks to demonstrate to regulators that their AI systems are safe and comply with regulations.

💡EU AI legislation

EU AI legislation refers to the set of laws and regulations that the European Union is establishing to govern the use and development of artificial intelligence within its member states. These regulations aim to balance innovation with the protection of fundamental rights and values.

💡AI office in Brussels

The AI office in Brussels is a regulatory body that the EU is setting up to oversee and enforce compliance with AI regulations. This office will have the authority to request information, inspect companies, and potentially ban applications that do not meet the regulatory standards.

💡Energy consumption

Energy consumption refers to the amount of energy used by a process or system. In the context of the video, companies developing AI will have to disclose their systems' energy consumption to regulators as part of demonstrating compliance.

💡Copyright laws

Copyright laws protect the rights of creators over their original works, including literary, musical, artistic, and other intellectual creations. In the context of the video, AI companies are required to prove that they are complying with copyright laws, ensuring that the content generated by their AI systems does not infringe on the rights of others.

💡Voluntary commitments

Voluntary commitments are pledges or promises made by companies to adhere to certain standards or practices, often in areas where regulation is not yet in place or is less stringent. In the video, some tech companies are signing on for voluntary commitments to show governments that they are taking the issue of AI regulation seriously.

💡Self-serving

Self-serving behavior is when individuals or organizations act in their own interest, often to the potential detriment of others or the broader community. In the context of the video, it suggests that tech companies may be advocating for certain regulations that benefit their own positions in the market, rather than considering the broader implications for the industry or society.

Highlights

Lawmakers have been heavily lobbied regarding the act.

It's a celebratory day in Brussels as the parliament signs off on the act.

Tech companies express concerns over overregulation and its impact on competitiveness.

European tech companies and startups worry about being overregulated compared to U.S. and Chinese counterparts.

Mistral, a European-based company building large language models, is affected by the new act.

The EU's approach to AI regulation is risk-based, focusing on the use of technology rather than the technology itself.

The act prohibits AI systems used for emotion recognition in workplaces or schools, and for social scoring.

High-risk AI applications, such as those used in migration or job-application screening, require more checks to prove safety.

Companies like OpenAI and Mistral will have to demonstrate compliance with new regulations.

The new AI office in Brussels will operate like a police force, monitoring and potentially banning applications.

Mistral's partnership with Microsoft raises questions about the influence of tech companies on legislation.

Tech companies initially pushed for regulation but lobbied against strict controls.

Some companies are signing on for voluntary commitments to show their seriousness to governments.

The EU's AI legislation may influence how the rest of the world adopts AI laws.

There's a concern that tech companies' lobbying efforts may be self-serving.

The act's passage may lead to a shift in how tech companies approach regulation and their own practices.