Decoding the AI Act – From the EU to the world

By Henri Estramant, LLM

On 21 April 2021, the European Commission unveiled its proposal for a Regulation on Artificial Intelligence, henceforth referred to as the “AI Act”. This seminal legislative proposal seeks to establish a harmonized set of rules governing the creation, marketing, and application of AI within the European Union.

The specific stipulations, chiefly pertaining to data integrity, transparency, human supervision and accountability, are contingent upon the risk stratification of the AI under consideration. This risk spectrum spans from high to low, with an outright proscription of certain AI applications. In a fashion analogous to the GDPR, the AI Act is anticipated to be a cornerstone piece of legislation for the European Union, with an extraterritorial ambit and substantial penalties: fines of up to €30 million or 6% of the global annual turnover of a company found in breach.

The most recent development came on Wednesday, 14 June 2023, with a plenary vote at the European Parliament. The adopted text incorporated a raft of amendments that the relevant committees had already approved on 11 May 2023. The key revisions were:

  • Universally applicable AI principles: Newly inserted provisions set out “general” AI principles applicable to all AI systems, regardless of whether they are “high-risk”, substantially broadening the AI Act’s scope. Simultaneously, MEPs extended the classification of high-risk applications to include those that pose threats to human health, safety, fundamental rights or the environment. Of particular note is the addition to the high-risk category of AI used in the recommender systems of social media platforms designated as very large online platforms under the EU’s Digital Services Act (those with a user base exceeding 45 million).
  • Prohibited AI practices: MEPs significantly revised the “unacceptable risk/prohibited list” to include invasive and discriminatory uses of AI systems. These prohibitions now apply to several uses of biometric data, including the indiscriminate harvesting of biometric data from social media for the creation of facial recognition databases.
  • Foundation models: Although previous iterations of the AI Act primarily focused on ‘high-risk’ AI systems, MEPs introduced a new framework for all foundation models. This framework, which requires providers of foundation models to ensure robust protection of fundamental rights, health and safety, the environment, democracy, and the rule of law, would particularly impact providers and users of generative AI. These providers would also need to evaluate and mitigate risks, comply with design, information, and environmental requirements, and register their models in the applicable EU database.
  • User obligations: ‘Users’ of AI systems are now referred to as ‘deployers’, a welcome clarification given that the previous term did not adequately distinguish between the deployer and the ‘end user’. This change means that ‘deployers’ must now adhere to an expanded range of obligations, such as the duty to conduct an extensive AI impact assessment. Concurrently, end-user rights are enhanced, with end users now accorded the right to receive explanations about decisions made by high-risk AI systems.

The AI Act proposal has now moved to the final stage of the legislative process: tripartite negotiations with the Council of the European Union and the European Commission on the AI Act’s definitive form, known in EU jargon as the ‘trilogue’ phase.

If timelines are adhered to (which seems very unlikely), the AI Act may become pioneering legislation in this field, leaving other major global players behind; just as with the GDPR, it may become a paradigm for regulators that come later.

What about the United Kingdom?

Europe is not synonymous with the EU, and so the AI Act will not cover the United Kingdom, a non-EU member, although it will likely be incorporated into the legal frameworks of Norway, Iceland and Liechtenstein, which are outside the EU but are member states of the European Economic Area.

In September 2021, the UK government announced a 10-year plan, described as the ‘National AI Strategy’. The National AI Strategy aims to invest in and plan for the long-term needs of the AI ecosystem, to support the transition to an AI-enabled economy, and to ensure that the UK gets the national and international governance of AI technologies ‘right’.

More recently, on 29 March 2023, the UK Government published its long-anticipated white paper on artificial intelligence. Branding its proposed approach to AI regulation as ‘world leading’ in a bid to ‘turbocharge growth’, the white paper proposes a cross-sectoral, principles-based framework to increase public trust in AI and develop capabilities in AI technology. The five principles intended to underpin the UK’s regulatory framework are:

  1. Safety, security and robustness;
  2. Appropriate transparency and explainability;
  3. Fairness;
  4. Accountability and governance; and
  5. Contestability and redress.

The UK Government has said it will avoid ‘heavy-handed legislation’ that could stifle innovation, which means that, in the first instance at least, these principles will not be enforced through legislation. Instead, responsibility will fall to existing regulators to decide on ‘tailored, context-specific approaches’ that best suit their sectors. London has already become the first non-US home of an OpenAI office, even though the UK lags behind the EU in terms of prospective regulation. On the other hand, the English language and the British government’s welcoming approach to AI technologies are noteworthy draws. Prime Minister Rishi Sunak has himself described the UK as the ‘global home of artificial intelligence regulation’, although concrete legislation has yet to materialize.

Could Switzerland serve as a further hub?

The Swiss government’s commitment to fostering an innovation-friendly environment sets it apart on the international stage. This commitment was clearly evidenced by the establishment of the ‘Crypto Valley’ in the Canton of Zug, a highly successful initiative that has attracted a multitude of blockchain and cryptocurrency firms. This willingness to embrace new technologies, combined with Switzerland’s robust yet ‘flexible’ regulatory environment, provides an ideal setting for AI companies to thrive. Just as Zug became the hotspot for blockchain, it could similarly serve as the focal point for the country’s burgeoning AI sector.

Switzerland lies outside the European Economic Area (EEA), yet companies operating there have managed to conduct business within the EEA thanks to bilateral agreements between Switzerland and the EU. While not an EU member state, Switzerland has concluded over 120 bilateral agreements with the EU, which allow it to partake in the EU’s single market. This means that businesses in the Swiss Confederation can trade with EEA countries virtually as if they were part of the EEA themselves. It remains to be seen whether this model can be replicated for AI companies wishing to establish their headquarters in Switzerland.

Thus far, companies based in Switzerland have often established subsidiaries or branches within the EEA, giving them a direct presence in the area and thereby ensuring compliance with EU laws and regulations in the markets where they operate.

About the author:

Henri Estramant

Henri Estramant, LLM is a former consultant at the Panel for the Future of Science and Technology of the European Parliament. He is an expert in AI and crypto regulation, certified in Conversational AI and Deploying AI.

He is currently enrolled in the ‘Artificial Intelligence: Implications for Business’ executive education program offered by the MIT Sloan School of Management and the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).
