EU’s new AI Act draft sets in motion the obligations for AI systems

 Introduction:

 Recently, the Council presidency and the European Parliament negotiators reached a provisional agreement on the draft regulation proposing harmonised rules on AI. The preliminary scope of the Artificial Intelligence Act is to ensure that AI systems placed on the EU market are safe and protect the fundamental rights of citizens. It also aims to promote the safe use of AI in order to foster technological growth. The AI Act is considered a unique piece of legislation in that it fosters development while counterbalancing disproportionate uses of AI technology. A key difference between the Act and other EU legislative instruments is its risk-based approach to artificial intelligence: it imposes stricter rules on higher-risk AI systems.

 Implementation:

 The AI Act will become applicable to different categories of AI systems in stages rather than all at once. Prohibitions on unacceptable AI practices will apply six months after the Act enters into force. The obligations for general-purpose AI systems will apply after the first year of enforcement. The European Commission's implementing act on post-market monitoring, and the list of elements that must be included in the monitoring plan, will become applicable 18 months after entry into force. The obligations for certain high-risk AI systems will not apply until 36 months after entry into force.
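
To make the staggered timeline easier to follow, the sketch below computes the applicability dates from an entry-into-force date. This is a minimal illustration only: the entry-into-force date used here is a placeholder assumption, not a date taken from the Act, and the milestone labels are informal shorthand for the obligations described above.

```python
from datetime import date

# Placeholder entry-into-force date, assumed purely for illustration; the real
# date depends on publication in the Official Journal of the EU.
ENTRY_INTO_FORCE = date(2024, 8, 1)

# Applicability offsets (in months) following the staggered timeline described above.
MILESTONES = {
    "Prohibitions on unacceptable AI practices": 6,
    "Obligations for general-purpose AI systems": 12,
    "Implementing act on post-market monitoring": 18,
    "Obligations for certain high-risk AI systems": 36,
}

def add_months(d: date, months: int) -> date:
    """Shift a date forward by a whole number of months, keeping the day of month."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

for label, offset in sorted(MILESTONES.items(), key=lambda kv: kv[1]):
    print(f"{add_months(ENTRY_INTO_FORCE, offset)}  (+{offset} months)  {label}")
```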

 Proposals:

  1. Definitions: An AI system is defined as a machine-based system designed to operate with varying levels of autonomy. A general-purpose AI system is defined as an AI system that has a wide range of possible uses, including direct use and integration into other systems and programmes.
  2. Classifications: AI systems are classified into three categories under the Act. The first is prohibited AI systems, whose use fundamentally violates individuals' fundamental rights; an example is a system that exploits personal and sensitive personal data. The second is high-risk AI systems, which carry more obligations than regular AI systems and on which high-level restrictions are imposed. The third category covers transparency-risk AI models: these do not pose a high risk of infringement but do raise transparency concerns, and the Act therefore proposes transparency requirements for them.
  3. AI literacy: "AI literacy" is emphasised in Recital 9(b) in order to give all relevant actors in the AI value chain the insights required to ensure appropriate compliance and correct enforcement. It means that member states must act in accordance with the regulation and support its legal enforcement. AI literacy refers to having the abilities and knowledge necessary to use AI applications and technology properly. It involves taking a critical look at these technologies, comprehending their background, and challenging their conception and application.
  4. High-risk AI models: High-risk AI system models are specified in Annex III of the AI Act. High-risk AI systems, according to Recital 48, should be built so that "natural persons can oversee their functioning" and that "impacts are addressed over the system's lifecycle." It is required that those in charge of such operations possess "the necessary competence, training, and authority to carry out that role." Recital 58 contains language requiring those who deploy high-risk systems to keep proper records of their actions, monitor the systems effectively, and comply with the applicable obligations and usage instructions. Once more, the people assigned to these responsibilities ought to have the necessary training.
  5. General AI systems: Recital 60(q) of the draft regulation addresses general-purpose AI systems. It includes language noting that providers should "continuously assess and mitigate systemic risks, including for example putting in place risk-management policies" that include accountability processes. General-purpose AI systems include systems such as OpenAI's ChatGPT or AI image generators.
  6. Biometric identification: Article 5(1) of the proposed AI Act addresses the risks posed by biometric identification. Facial recognition and deep fakes remain a persistent problem within AI technology and create unprecedented risks for personal rights. It is now forbidden to use AI for biometric categorisation with the intention of making judgements about a natural person's race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. There is, however, an exception for use in the field of law enforcement and for tagging and filtering legally obtained biometric data sets. Under Article 29a of the AI Act, real-time biometric identification may be used in respect of specific wanted individuals; however, a binding decision is required before such remote identification can be used.
  7. Retention period: The new draft regulations also provide for retention periods. A retention period is a storage limitation: the length of time after which a document or piece of information must be deleted. For high-risk AI documentation, the maximum retention period is 10 years after the system has been placed on the market. For AI systems that automatically create documents, such as contract-verification records, the retention period is 6 months. For documents pertaining to the introduction of a new AI system, a retention period of at least 10 years must be followed (an illustrative sketch of these periods appears after this list).
  8. Copyright law: A newly included Art. 28b of the AI Act was one of the topics of a compromise proposal on the Act released by the European Parliament in June 2023. Specifically, the goal was to establish transparency requirements stating that companies offering generative AI models must record and make publicly available a comprehensive overview of how they use training material that is protected by copyright. This has since been superseded: the AI Act now contains recitals 60f - 60ka, which are particularly relevant from a copyright perspective and which in turn take up parts of the Art. 28b proposal. In the years to come, there will be exemptions from the transparency standards for AI model providers that make their models available under an open-source licence or for non-commercial or scientific research purposes; these models are subject to significantly less stringent restrictions than others. The obligations for large generative AI models are much more extensive. Any use of copyright-protected content generally requires the permission of the rights holder concerned, unless the exceptions and limitations of the Directive on Copyright in the Digital Single Market apply.
  9. AI Office: Compliance with the duties described above will be overseen by the as-yet-to-be-established AI Office. The AI Act does not currently specify how much autonomy the AI Office will have, so a corresponding decision clarifying this point is still awaited. The AI Office will be in charge of monitoring whether providers actually carry out the necessary plans and procedures to abide by copyright law, and whether they release an overview of the training content to the public.
  10. Obligations: The Act subjects high-risk AI systems to strict requirements, including:

      - Meeting data governance requirements, including risk mitigation measures
      - Drafting and maintaining technical documentation
      - Complying with registration obligations and other procedures
      - Implementing a risk-management system

The developer obligations are:

      - Assigning human oversight to the AI system developed
      - Managing data responsibly and ensuring privacy is not encroached upon
      - Carrying out a data privacy protection assessment

  11. Innovation provisions: Article 53 of the draft regulation states that AI regulation should encourage innovation-based models for development, testing and validation. Additionally, Articles 54a and 54b allow unsupervised testing of AI systems, subject to conditions and safeguards.
  12. Law enforcement: Law enforcement authorities can, without exception, assess an AI model that is associated with high risk or poses transparency risks and demand authorisation for it. They can also order the use of an AI system to be stopped when it has been put into operation without prior authorisation. This is provided in Articles 1 to 44 of the Law Enforcement Directive.
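
As a rough illustration of how the retention periods from point 7 might be tracked in practice, the sketch below maps document categories to deletion deadlines. The category names and the helper function are hypothetical shorthand introduced for this example; they are not terms or requirements taken from the Act.

```python
from datetime import date

# Retention periods expressed in months; the category labels are informal shorthand.
RETENTION_MONTHS = {
    "high_risk_technical_documentation": 10 * 12,  # 10 years after placing on the market
    "automatically_generated_documents": 6,        # e.g. automatic contract-verification records
    "new_system_introduction_documents": 10 * 12,  # at least 10 years
}

def retention_deadline(category: str, start: date) -> date:
    """Earliest date on which a document of this category may be deleted.

    Note: start days 29-31 are not handled for shorter target months in this sketch.
    """
    months = RETENTION_MONTHS[category]
    total = start.month - 1 + months
    return date(start.year + total // 12, total % 12 + 1, start.day)

placed_on_market = date(2025, 3, 15)
for category in RETENTION_MONTHS:
    print(category, "->", retention_deadline(category, placed_on_market))
```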

Conclusion:

 Even though a number of questions in the current draft remain unresolved and formal approval by the EU member states is still pending, companies can already take action based on it. Providers of high-risk AI should now concentrate on preparing the necessary documentation and drawing up a retention plan.


