In the fight against financial economic crime, the traditional rule-based approach to Know Your Customer (KYC) processes has proven resource-intensive and an inefficient use of both human capital and supporting IT. The sector is moving towards a more risk-based approach, one that enables smarter allocation of human and technological resources. The use of intelligent, data-driven innovations will further assist the transition towards a more risk-based model. With a structural focus on these innovations, the industry will become more efficient in detecting and ultimately fighting financial crime. We believe that a central role for AI Governance is one of the success factors enabling these innovations.
AI as a catalyst for risk-based working
Artificial Intelligence (AI) offers a range of opportunities to enhance the effectiveness and efficiency of risk-based working, both inside and outside of KYC. The rule-based approach rests on the assumption that the customer population is homogeneous, applying fixed rules to assign risk. These fixed rules are limited in effectiveness, producing too many false positives while missing true positives.
By leveraging advanced analytics and machine learning, AI can help analyse vast amounts of data across a wide range of variables. This allows AI to uncover complex hidden patterns and detect anomalies, which is particularly advantageous in a risk-based approach within Customer Due Diligence (CDD) and Transaction Monitoring (TM), as well as in optimising a bank's Systematic Integrity Risk Analysis (SIRA).
In the context of CDD, AI can facilitate a more precise segmentation of the customer population. Looking forward, it holds the potential for dynamic monitoring and revision of client risk profiles, which lessens the reliance on periodic manual evaluations. Additionally, AI's strength in recognising patterns and anomalies makes it well-suited for TM. It can increase the true positive rate by identifying potential criminal activities that static rules would overlook, and it can lower the false positive rate due to its enhanced accuracy. However, letting AI run unchecked on these processes invites bias, unexplainable results, and potential reputational damage.
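To make the contrast concrete, here is a minimal illustrative sketch of the two approaches to transaction monitoring. The thresholds, feature choice, and customer history are hypothetical; a real model would use far richer data, but the principle is the same: a fixed rule judges every customer against one cut-off, while a risk-based score judges each transaction against that customer's own behaviour.

```python
from statistics import mean, stdev

# Rule-based: one fixed threshold for the whole population (hypothetical rule).
def rule_based_flag(amount: float) -> bool:
    return amount > 10_000  # same cut-off regardless of who the customer is

# Risk-based sketch: score a transaction against the customer's own history,
# so "unusual" is relative to the individual, not the population average.
def anomaly_score(history: list[float], amount: float) -> float:
    mu, sigma = mean(history), stdev(history)
    return abs(amount - mu) / sigma if sigma else 0.0

history = [120.0, 95.0, 150.0, 110.0, 130.0]   # typical retail customer
print(rule_based_flag(9_500))                  # False: slips under the fixed rule
print(anomaly_score(history, 9_500) > 3.0)     # True: extreme for this customer
```

The same transaction that sails under a population-wide rule is a glaring outlier for this particular customer, which is exactly the kind of true positive a rule-based setup overlooks.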
Navigating regulatory landscapes
Considering the potential benefits of AI, it makes sense that organisations want to utilise these possibilities. Nevertheless, the financial sector needs to consider that using AI comes with regulatory challenges. Since the financial crisis, there has been a zero-tolerance approach in the compliance domain, resulting in risk-averse advice and interpretations of regulatory guidance. This results in rule-based working to mitigate the risk of error. The regulatory environment is evolving to address the complexities introduced by AI. For instance, the EU AI Act categorises AI applications based on risk levels and imposes stringent requirements on high-risk systems, including those used in financial services. To align with these regulatory requirements, compliance will necessitate:
- Audit trails & monitoring to maintain detailed records of AI system design, data sources, and decision-making processes.
- Risk assessments to identify and mitigate potential risks associated with AI applications.
- Human oversight to ensure that human judgment remains integral to decision-making, particularly in high-stakes scenarios.
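In practice, the three requirements above converge on one habit: log every automated decision with enough context to reconstruct it later, and route high-risk cases to a human. A minimal sketch (the field names, model version, and threshold are hypothetical, not a prescribed schema):

```python
import json
from datetime import datetime, timezone

# Hypothetical audit record: captures who decided (which model version),
# on what data, and whether a human was pulled into the loop.
def audit_record(model_version: str, inputs: dict, score: float,
                 threshold: float) -> dict:
    needs_review = score >= threshold  # human oversight for high-risk outcomes
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,   # traceability of the system design
        "inputs": inputs,                 # data the decision was based on
        "score": score,
        "decision": "escalate_to_analyst" if needs_review else "auto_clear",
    }

record = audit_record("tm-model-1.3", {"amount": 9500, "country": "NL"}, 0.91, 0.8)
print(json.dumps(record, indent=2))  # one immutable line in the audit trail
```

Stored append-only, records like this give supervisors the detailed trail the regulation asks for, and give analysts the evidence they need when a flagged customer asks why.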
With AI, there is a huge promise of efficiency gains in data-driven organisations like banks. Strong AI Governance within the organisation helps to leverage AI for effective KYC processes and facilitates the integration of AI within the three lines of defence model of financial institutions.
The importance of AI governance within FEC prevention
In areas like CDD and TM, where decisions have an immediate impact on customers and sensitive data is being processed, oversight is critical. Risks around data privacy, biased outcomes, and lack of transparency require specific mitigation. Clear governance around the use of AI greatly helps here, while also strengthening adoption of and trust in AI for both the company and its customers.
Let’s look specifically at TM. The models that detect unusual behaviour need to be auditable. If a customer is flagged for suspicious activity, financial institutions must be able to explain why, both to avoid discriminatory or biased decision-making and to spare their customers unnecessary obstacles. To support their findings, evidence is necessary. This illustrates the importance of having the right governance in place: it ensures that AI is implemented correctly and is therefore not only efficient but also reliable and effective. A lack of governance introduces new risks that can hinder adoption, damage trust and increase regulatory exposure.
AI Governance provides the framework and structure needed to manage these risks. It ensures that AI systems are managed fairly, reliably, and ethically. Establishing policies, guidelines for accountability and model traceability, and implementing monitoring and auditing controls are aspects of how AI Governance allows companies to use AI confidently and responsibly. This includes setting clear criteria for data usage, tracking model performance over time, and ensuring human oversight is built into the process. Without this in place, moving from pilot to production can be slow and uncertain. Compliance and risk teams often block deployment due to concerns about explainability, accountability, and legal compliance. AI Governance puts the structure in place to address these concerns upfront, so organisations can scale AI with confidence, balancing innovation with accountability and ensuring that AI acts as an enabler rather than a liability.
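One of the governance controls mentioned above, tracking model performance over time, can be surprisingly lightweight: compare live alert outcomes against an agreed baseline and trigger a model review when they drift. The precision floor below is an illustrative value, not a regulatory one; each institution would set its own in its governance policy.

```python
# Illustrative monitoring control: flag the model for review when the share
# of alerts confirmed by analysts (alert precision) drifts below an agreed
# floor, i.e. when false positives start to dominate the workload.
def needs_model_review(confirmed_alerts: int, total_alerts: int,
                       precision_floor: float = 0.10) -> bool:
    if total_alerts == 0:
        return False                      # nothing to measure yet
    precision = confirmed_alerts / total_alerts
    return precision < precision_floor    # too many false positives: escalate

print(needs_model_review(confirmed_alerts=4, total_alerts=200))   # True
print(needs_model_review(confirmed_alerts=30, total_alerts=200))  # False
```

Run periodically over each reporting window, a check like this turns "monitor the model" from a policy statement into a concrete, auditable control.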
Valcon’s role in implementing AI responsibly while transitioning from a rule-based to a risk-based way of working
As a trusted advisor, we work together with financial institutions to navigate their transition from rule-based to risk-based KYC models. Our expertise encompasses:
- AI integration: Assisting in the deployment and validation of AI technologies that enhance efficiency and accuracy in compliance processes.
- Governance frameworks: Developing and implementing AI Governance structures that meet regulatory requirements and ethical standards, and that connect to the risk management and compliance functions in the second line of defence within financial organisations.
- Organisational transformation: Guiding the organisation through change and improving the all-round adoption of new ways of working with AI in the first line of defence within the operation.
- Sector-specific knowledge: Leveraging deep industry insights to tailor solutions that address the unique challenges of financial crime prevention.
Next to our deep sector knowledge, we believe that by combining the promise of AI with our uniquely positioned data capabilities, technology skills and organisational transformation expertise, Valcon can assist the financial sector in moving to a more risk-based model for their KYC processes, whilst maintaining trust and integrity. We help you move faster, safer, and with more confidence.
Want to know more?
If you would like to speak to Valcon about how AI and governance enable smarter FinCrime prevention, please get in touch with: [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected] or [email protected]