
FERMA issues policy note on AI
The Federation of European Risk Management Associations (FERMA) has issued a Policy Note on the European Union (EU) Artificial Intelligence Act, which provides guidance on the practical implications of the risk-based approach underpinning the legislation and considers the potential insurance impact.
The EU AI Act, published in July 2024, will apply across all 27 EU Member States, with companies expected to comply starting in February 2025. It aims to create a high level of protection for health, safety and fundamental rights against the potentially harmful effects of AI systems. The risk-based approach at its core classifies AI systems on a scale from low or minimal risk to unacceptable risk, with most regulatory requirements applying to high-risk systems.
Under the legislation, high-risk systems must be registered in an EU database and must comply with specific obligations relating to training data and governance, transparency, and risk management systems.
“The AI Act is arguably one of the most significant regulations introduced by the EU in recent years, given the potential impact of AI across every aspect of our lives,” says Philippe Cotelle, FERMA board member and chair of its Digital Committee. “It not only places a clear onus on risk managers to raise their game on AI, but it also addresses another piece of the puzzle, which is how this all impacts upon topics such as liability and innovation.”
The Policy Note highlights three essential pillars of an approach aimed at making the most out of the new requirements, which can act as a basis for risk managers to consider in their organisations:
- Development of an AI strategy and its transposition into a suitable governance framework, demonstrated by a policy document and the implementation of end-to-end processes.
- Implementation of the appropriate technology and investment in the continuous training of employees and partners, as well as the provision of documentation and guidance for customers.
- Design of governance and technology in a way that anticipates audit requirements, with the pursuit of formal certification recommended although not explicitly required by law.
In this context, FERMA advises risk managers to follow an internationally recognised ethical standard, to clearly define the scope of the policy along with roles and responsibilities, and to consider the environment in which their organisation’s AI system operates.
The Policy Note calls on companies to invest in safe technology implementation, as well as training. FERMA encourages risk managers to consider creating an internal set of benchmarks to measure AI system performance, and to ensure users are trained to mitigate the risk of misuse, unethical outcomes, potential biases, inaccuracy, and data and security breaches. All uses of the system, it adds, must align with the AI policy.
“FERMA research has shown that most risk managers are focused on addressing AI-related risks,” says Typhaine Beaupérin, chief executive of FERMA, “with key responsibilities including monitoring regulatory developments and developing internal policies to govern the use of AI in business-related activities. Having clear and targeted guidance on how the evolving legislative environment directly impacts businesses is critical to supporting practitioners in addressing this rapidly evolving risk.”