The European Union’s AI Act, which could take effect within the next two to three years, would be the first attempt by a Western government to regulate artificial intelligence.
- The act would ban AI outright in extreme cases, such as the kind of social scoring used in China, where citizens earn credit based on surveilled behaviour.
- The act would also put guardrails on generative AI, an umbrella category of machine-learning algorithms capable of creating new images, video, text and code.
- Big tech companies have welcomed the EU’s approach to regulating AI, but they have also tried to soften the edges of the legislation.
- For example, companies have argued that users of AI systems, not just their developers, should bear liability for how those systems are used.
- IBM has also lobbied to ensure that “general-purpose AI” is excluded from the regulation.
- The European Parliament has taken a tougher line on regulating AI than the European Commission. Its version would require developers of “foundation models”, such as OpenAI, to summarize the copyrighted material used to train their large language models, assess the risks their systems could pose to democracy and the environment, and design products incapable of generating illegal content.