Italian framework law on artificial intelligence
By Cecilia Trevisi
On 10 October 2025, the Italian Framework Law on Artificial Intelligence (Law No. 132/2025) entered into force, marking the transition from a “tolerated”, “spontaneous” use of the technology to an authorised and regulated one, aligned with the standards of the AI Act (Regulation (EU) 2024/1689).
Law No. 132/2025 consists of 28 articles divided into six chapters: 1) Principles and aims; 2) Sector-specific provisions; 3) National strategy, national authorities and promotional measures; 4) Provisions protecting users and on copyright; 5) Criminal provisions; and 6) Final and financial provisions.
The law pursues the objective of fostering technological development that enables human beings to retain control over decisions within automated processes. The first group of articles is devoted to the statement of general principles. In particular, Article 2 provides three definitions: “artificial intelligence system”, “data”, and “artificial intelligence models”.
Within the definitions, the distinction between “systems” and “models” is immediately apparent. That distinction is to be found in the glossary of the AI Act.
A model is a trained function, whereas a system is the complete application that uses one or more models. For example, the model is the engine of a car, while the system is the entire vehicle. A large language model (LLM) that generates text is a model, i.e. the cognitive engine; the system, by contrast, is the surrounding engineering that ensures operation, security and usability.
As regards “data”, the law provides its own definition, which nonetheless mirrors the wording adopted at EU level. Data means any digital representation of acts, facts, or information, and any collection of such acts, facts, or information, including in the form of sound, visual or audiovisual recordings. Data quality thus becomes an essential condition for the reliability of AI systems, because the correctness of algorithmic output depends on it.
With reference to the protection of personal data, a specific provision governs access to AI technologies by minors. For children under 14 years of age, the consent of the person exercising parental responsibility is required; a minor who has reached 14 years of age, by contrast, may autonomously give consent to the processing of personal data connected with the use of artificial intelligence systems.
Article 3 sets out general principles that must apply throughout the entire life cycle of an AI system and model: transparency, proportionality, security, protection of personal data, confidentiality, accuracy, non-discrimination, gender equality and sustainability, and respect for human autonomy and decision-making power.
The law also regulates the adoption of AI in healthcare, justice, and employment.
Particular attention is devoted to persons with disabilities, who must be guaranteed full access to AI systems without prejudice or exclusion; such systems are also regarded as relevant in facilitating access to sport.
Cybersecurity cuts across all principles, since the protection of rights and freedoms cannot be ensured without an adequate level of technological security and effective control of digital vulnerabilities.
In the judicial sphere, it is provided that functional jurisdiction for “actions concerning the operation of an artificial intelligence system” lies with the Tribunal (first-instance court), and, in certain specific cases, jurisdiction will be vested in the specialised business sections.
In the field of copyright, the new legislation specifies that authorship of works protected by copyright belongs exclusively to a human being, even where the individual makes use of AI tools, provided that the creation of the work nevertheless derives from the author’s intellectual effort. This will inevitably lead to an assessment of the creative contribution of individual instructions (prompts) to the final result (output), without overlooking the influence that may be attributed to other stakeholders when assessing creativity – particularly the authors of the input data and the designers of the learning (training) base.
In criminal matters, penalties have been increased for offences committed by means of AI systems; punishability has been extended to conduct related to the reproduction or extraction of text or data from works or other materials available online; and a new offence has been introduced concerning the unlawful dissemination of content generated or manipulated through artificial intelligence systems.
In any event, from a practical standpoint, the concrete operation of the new law will only become clear once it has been applied in the first cases.
Cecilia Trevisi is a lawyer and partner at Sofiae - Chartered Accountants & Lawyers. She is a member of the Milan Bar Association, specialising in copyright and related rights, industrial property rights, and commercial and corporate matters. She is a regular speaker at conferences and the author of articles on copyright and industrial property law. Cecilia is also a member of AIPPI (International Association for the Protection of Intellectual Property).
