The Washington Post reports that, after a marathon 72-hour debate, European Union legislators on Friday reached a historic deal on a broad-ranging AI safety and development bill, the most expansive and far-reaching of its kind to date. Details of the deal itself were not immediately available.
Deal! #AIAct pic.twitter.com/UwNoqmEHt5
— Thierry Breton (@ThierryBreton) December 8, 2023
Ongoing negotiations over the proposed rules had been disrupted in recent weeks by France, Germany and Italy, which had reportedly stonewalled talks over rules governing how EU member nations could develop “foundational models,” the generalized AI models from which more specialized applications can be fine-tuned. OpenAI’s GPT-4 is a foundational model, for example: ChatGPT, GPTs, Bing Chat and third-party applications are all built on its base functionality.
The proposed regulations would dictate the ways in which future machine learning models can be developed and distributed within the trade bloc, impacting their use in applications ranging from education to employment to healthcare. AI development would be split into four categories depending on how much societal risk each potentially poses — minimal, limited, high, and banned.
Banned uses would include anything that circumvents the user’s will, targets protected groups or provides real-time biometric tracking (like facial recognition). High-risk uses include anything “intended to be used as a safety component of a product,” or anything to be used in defined applications like critical infrastructure, education, legal/judicial matters and employee hiring. Chatbots like ChatGPT, Bard and Bing would fall under the “limited risk” category.
“The European Commission once again has stepped out in a bold fashion to address emerging technology, just like they had done with data privacy through the GDPR,” Dr. Brandie Nonnecke, Director of the CITRIS Policy Lab at UC Berkeley, told Engadget in 2021. “The proposed regulation is quite interesting in that it is attacking the problem from a risk-based approach,” similar to what’s been suggested in Canada’s proposed AI regulatory framework.
The EC had previously addressed the growing challenges of managing emerging AI technologies through a variety of efforts, releasing both the first European Strategy on AI and Coordinated Plan on AI in 2018, followed by the Guidelines for Trustworthy AI in 2019. The following year, the Commission released a White Paper on AI and a Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics.
“Artificial intelligence should not be an end in itself, but a tool that has to serve people with the ultimate aim of increasing human well-being,” the European Commission wrote in its draft AI regulations. “Rules for artificial intelligence available in the Union market or otherwise affecting Union citizens should thus put people at the centre (be human-centric), so that they can trust that the technology is used in a way that is safe and compliant with the law, including the respect of fundamental rights.”
“At the same time, such rules for artificial intelligence should be balanced, proportionate and not unnecessarily constrain or hinder technological development,” it continued. “This is of particular importance because, although artificial intelligence is already present in many aspects of people’s daily lives, it is not possible to anticipate all possible uses or applications thereof that may happen in the future.”
More recently, the EC has begun collaborating with industry members on a voluntary basis to craft internal rules that would allow companies and regulators to operate under the same agreed-upon ground rules. “[Google CEO Sundar Pichai] and I agreed that we cannot afford to wait until AI regulation actually becomes applicable, and to work together with all AI developers to already develop an AI pact on a voluntary basis ahead of the legal deadline,” European Commission (EC) industry chief Thierry Breton said in a May statement. The EC has entered into similar discussions with US-based corporations as well.
“This legislation will represent a standard, a model, for many other jurisdictions out there,” Dragoș Tudorache, a Romanian lawmaker co-leading the AI Act negotiation, told WaPo, “which means that we have to have an extra duty of care when we draft it because it is going to be an influence for many others.”
Developing…
This article originally appeared on Engadget at https://www.engadget.com/the-eu-has-reached-a-historic-regulatory-agreement-over-ai-development-232157689.html?src=rss