Transparency refers to the principle that the workings of artificial intelligence (AI) systems should be understandable to humans.
As AI systems are deployed more widely, it is important to understand how they make decisions, process data, and arrive at outcomes. Because AI is used in consequential areas such as employment, loan applications, and medical diagnostics, AI systems can strongly influence how these processes unfold, so we need to be able to trust them.
The EU’s AI Act requires AI systems to be designed and developed so that you are informed when you are interacting with an AI system.
Example: If you apply for a job and the employer uses an AI system in the selection process, you must be made aware of this at the latest at the time of your first interaction with or exposure to the AI system.
Example: AI systems that generate synthetic audio, image, video or text content (including deepfakes) must mark the content as artificially generated or manipulated.
While the obligations differ by the type and risk level of the AI system, providers of AI systems generally have to:
- draw up technical documentation and keep it up to date
- put in place a policy to comply with the laws on copyright and related rights
- draw up and make publicly available a detailed summary of the content used to train the AI
- have a risk management system in place
Transparency matters because it allows you to understand how you have been affected by an AI system and to investigate possible violations of your rights. Read about how to complain in this Guide.