The EU Parliament approved the Artificial Intelligence (AI) Act today. Member states agreed on the regulation in December 2023. Today, members of the European Parliament endorsed the act, with 523 voting in favor, 46 voting against, and 49 abstaining.
It’s no secret that AI is a double-edged sword. For every positive use case, there are several ways people can use the technology for nefarious purposes. Regulation is often effective in creating safeguards for the adoption of new technologies. However, delineating the boundaries of AI’s applications and capabilities is difficult. The technology’s vast potential makes it hard to eliminate negative uses while accommodating positive ones.
Because of this, the European Union’s new Artificial Intelligence Act will have both positive and negative impacts on banks and fintechs. Organizations that learn to adapt and innovate within the boundaries will see the most success when it comes to leveraging AI.
That said, here are four major implications the new law will have on banks:
Prohibited AI applications
The new law prohibits the use of AI for emotion recognition in the workplace and schools, social scoring, and predictive policing based solely on profiling. This will impact how banks and fintechs use AI for customer interactions, underwriting, and fraud detection.
Compliance and oversight
The ruling specifically calls out banking as an “essential private and public service” and categorizes it as a high-risk use of AI. Therefore, banks using AI systems must assess and reduce risks, maintain use logs, be transparent and accurate, and ensure human oversight. The law states that citizens have two major rights when it comes to the use of AI in their banking platforms: first, the ability to submit complaints, and second, the right to receive explanations about decisions made using AI. This will require banks and fintechs to enhance their risk management and update their compliance processes to accommodate AI-driven services.
Transparency
Banks using AI systems and models for general purposes must meet transparency requirements. This includes complying with EU copyright law and publishing detailed summaries of training content. The transparency reporting will not be one-size-fits-all. According to the European Parliament’s explanation, “The more powerful general-purpose AI models that could pose systemic risks will face additional requirements, including performing model evaluations, assessing and mitigating systemic risks, and reporting on incidents.”
Innovation support
The law stipulates that regulatory sandboxes and real-world testing will be available at the national level to help companies develop and train AI before it goes live. This could benefit both fintechs and banks as they test and launch new AI use cases.
Overall, the EU AI Act doesn’t require anything outside of banks’ current capabilities. Financial institutions already have processes, documentation procedures, and controls in place to comply with existing regulations. The act will, however, require banks and fintechs to either establish or reassess their AI strategies, ensure compliance with the new regulations, and adapt to a more transparent and accountable AI ecosystem.
Photo by Tara Winstead