The European Union (EU) has developed a new and unique legislative framework for Artificial Intelligence (AI) systems placed on the internal market. Regulation (EU) 2024/1689, known as the “Artificial Intelligence Act”, entered into force on 1 August 2024 and will become applicable progressively between 2 February 2025 and 2 August 2027, depending on the provisions concerned. The text is underpinned by a dual rationale. On the one hand, it aims to ensure the free movement of AI-based goods and services while supporting innovation and economic growth in the EU. On the other hand, it seeks to promote trustworthy AI systems, guaranteeing the protection of health, safety and fundamental rights against the harmful effects these systems may have on people and on society.
The AI Act provides for three main categories of legal provisions: first, a list of prohibited AI practices; second, harmonised rules applicable to marketed AI systems, following a risk-based approach (including provisions on innovation and on general-purpose AI models); and third, a comprehensive public enforcement scheme. It comprises more than 100 articles, 180 recitals and 13 annexes. It is thus a massive and complex regulatory framework that both public and private organisations dealing with AI systems and active on the EU market will have to master and implement in the coming months and years. It will therefore be crucial for the AI industry and AI practitioners, including public authorities, to set up an action plan to comply with the AI Act.
AAIAC is a two-day workshop exploring approaches and methodologies for achieving compliance with the EU AI Act. The objective is to analyse the new legal framework in a “compliance mode” aimed at professional operators and interested parties (such as civil society representatives) within the AI ecosystem. The workshop is conceived as a platform for sharing legal knowledge and compliance practices, grounded in both academic and practical analysis.
The methodology adopted is inspired by ISO standards on management systems and their extension to the field of AI. One of the main goals of implementing a compliance management system is to enable an organisation to demonstrate its commitment to comply with relevant normative frameworks, whether based on hard or soft law, including best practices and ethics. Transposed to the field of AI, management system standards offer a “structured way [for organisations] to manage risks and opportunities associated with AI, balancing innovation with governance”. This rationale takes on an entirely new dimension in the forthcoming EU AI regulatory context. It is therefore essential to explore how organisations’ AI management systems should be adapted in practice to the AI Act’s requirements and obligations. The workshop will open discussions on explaining, commenting on and questioning the AI Act with a view to its implementation by AI operators, including within their management systems.
Program
Practical information
- Open to all with registration
- UCLy | Campus Saint-Paul | Maison de la Recherche et de l'Entreprise (Building G, 4th floor)