Artificial Intelligence and Regulatory Compliance


Compliance with government regulations is important both for the success of financial institutions and for the health of the economy. However, regulators have always been limited by the huge amount of data they must sift through to verify that financial institutions are compliant. Financial institutions, in turn, bear the burden of vetting their transactions to avoid running afoul of regulatory bodies. Using AI to carry out these tasks is fast gaining recognition in the financial industry as an efficient way of ensuring compliance.

Role of AI in Regulatory Compliance

A survey of 424 senior financial executives conducted by Euromoney revealed that financial institutions see AI as a useful tool for financial regulation because it is relatively unbiased and offers insight across a range of use cases and scenarios.

CFTC Chair Christopher Giancarlo said in 2018:

"The ability to digitize rule-sets and consume, process, and analyze data in real-time could very well be the capability that allows us to explore application of so-called ‘agile regulation.’

As machines assume more economic tasks and functions, we would expect that these machines can be programmed with rules that ensure compliance with laws and regulations."

— CFTC Chair Christopher Giancarlo, Quantitative Regulation: Effective Market Regulation in a Digital Era, November 7, 2018

Some of the applications of AI in ensuring regulatory compliance include:

Detecting fraudulent activity: Regulatory bodies can use AI to flag suspicious activity carried out in financial institutions. Identity fraud, credit card fraud, and similar schemes can easily be detected using AI, and regulators can then assess how institutions respond to these alerts. A notable example is the $800 billion Paycheck Protection Program launched by the US Congress in 2020 to cushion the impact of the COVID-19 pandemic on small businesses. After lengthy investigations, about 15% of the loan applications received were said to have contained evidence of fraud, mostly identity fraud. Banks are required to maintain identity-verification checkpoints on transactions to prevent such fraud. Regulatory bodies can use AI to monitor the volume of fraudulent transactions and assess each financial institution's level of compliance.
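The automated flagging described above can be sketched minimally. The rule below is a hypothetical illustration, not a production fraud model: it flags a transaction whose amount deviates sharply from a customer's spending history using a simple z-score.

```python
from statistics import mean, stdev

def flag_suspicious(history, amount, threshold=3.0):
    """Flag a transaction whose amount deviates sharply from the
    customer's historical spending (a simple z-score rule).
    `history` is a list of the customer's past transaction amounts."""
    if len(history) < 2:
        return False  # not enough data to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > threshold

# A $9,800 charge on an account that usually spends ~$50 is flagged;
# a $58 charge is not.
history = [42.0, 55.0, 48.0, 60.0, 51.0]
print(flag_suspicious(history, 9800.0))  # True
print(flag_suspicious(history, 58.0))    # False
```

Real systems layer many such signals (device fingerprints, geolocation, velocity checks) and feed them to learned models, but the same flag-then-review workflow applies.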

Monitoring compliance with Anti-Money Laundering (AML) regulations: By law, banks are required to maintain anti-money laundering systems to detect and report suspicious movements of money linked to drug trafficking, human trafficking, and other crimes. However, compliance with this policy is currently monitored manually, which is quite ineffective. AI-based compliance systems would be more effective at flagging suspicious activity and curbing these heinous crimes.
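One pattern AML systems commonly look for is "structuring": breaking a large sum into repeated deposits just under the currency-transaction-reporting threshold to avoid triggering a report. The sketch below is a minimal rule-based illustration; the $10,000 threshold matches the US reporting rule, while the 10% margin and minimum count are illustrative assumptions.

```python
def detect_structuring(deposits, threshold=10_000, margin=0.10, min_count=3):
    """Flag a series of deposits that repeatedly fall just below the
    reporting threshold (a classic structuring pattern).
    `deposits` is a list of deposit amounts for one customer."""
    near_limit = [d for d in deposits
                  if threshold * (1 - margin) <= d < threshold]
    return len(near_limit) >= min_count

# Three deposits just under $10,000 trigger the flag; isolated
# near-threshold deposits do not.
print(detect_structuring([9500, 9800, 9700, 200]))  # True
print(detect_structuring([1200, 9800, 300]))        # False
```

Production AML monitoring combines hundreds of such typologies with machine-learned anomaly detection, but the rule shows why automated screening scales better than manual review.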

Monitoring fair lending and equal credit compliance: This is best illustrated with an example from a global leader in the AI race, the USA.

The Fair Housing Act, the Equal Credit Opportunity Act, and the Housing and Community Development Act were all passed to prevent financial discrimination, especially against minority groups. However, in the early stages of their adoption, significant disparities and discrimination against these protected groups persisted because of the subjective processes involved. As a result, many Americans were still unable to benefit from these acts until the use of AI for application and verification processes began to gain popularity. Algorithms such as credit scores were also developed to provide a more objective way of assessing eligibility for credit access.
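A credit score replaces subjective judgment with a fixed rule applied identically to every applicant. The toy logistic model below illustrates the idea; the input features, weights, and score mapping are invented for illustration and do not reflect any real scoring model.

```python
import math

def credit_score(income, debt_ratio, on_time_rate):
    """Toy logistic scoring rule: combines a few objective inputs
    into a repayment-probability estimate, then maps it onto a
    familiar 300-850 scale. Weights are purely illustrative."""
    z = 0.00002 * income - 3.0 * debt_ratio + 4.0 * on_time_rate - 2.0
    p_repay = 1 / (1 + math.exp(-z))    # estimated repayment probability
    return round(300 + 550 * p_repay)   # map [0, 1] onto 300-850

# A higher income, lower debt load, and better payment history
# produce a higher score, by the same formula for every applicant.
print(credit_score(income=60_000, debt_ratio=0.25, on_time_rate=0.98))
print(credit_score(income=25_000, debt_ratio=0.60, on_time_rate=0.70))
```

The objectivity comes from the fixed formula, not the formula's quality; as the next section notes, a fixed rule can still encode bias if its inputs or training data do.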

Challenges Associated with the Adoption of AI for Regulatory Compliance

Bias: AI is amoral, and without ethical and legal constraints, the technology could amplify bias in the bid to increase efficiency.

"AI allows massive amounts of data to be analyzed very quickly. As a result, it could yield credit scoring policies that can handle a broader range of credit inputs, lowering the cost of assessing credit risks for certain individuals, and increasing the number of individuals for whom firms can measure credit risk…

When using machine learning to assign credit scores and make credit decisions, it is generally more difficult to provide consumers, auditors, and supervisors with an explanation of a credit score and resulting credit decision if challenged.

Additionally, some argue that the use of new alternative data sources, such as online behavior or non-traditional financial information, could introduce bias into the credit decision."

— Financial Stability Board, Artificial Intelligence and Machine Learning in Financial Services, November 2017

Data Privacy: Creating boundaries for what data AI can process to make decisions may be difficult. Measures also need to be put in place to eliminate security risks and risks of government misuse of data.

In conclusion, many financial regulators still lag in using AI to monitor compliance, although some agencies, such as the US Securities and Exchange Commission and the Financial Industry Regulatory Authority, are starting to adopt innovation programs. More work needs to be done to improve acceptance and adoption across other agencies.
