Aruna Joshi, Ex SVP, Model Risk Management, Bank of the West
Artificial Intelligence (AI) and Machine Learning (ML), once esoteric topics, are entering the mainstream with full force. The benefits of AI/ML are unparalleled in making processes more economical and efficient. From cancer detection to autonomous vehicles, AI and ML are becoming embedded in the daily lives of ordinary citizens, increasing the challenges of managing the risks associated with them. However, just like any other innovation, potentially undesirable aspects cannot be completely eliminated. The examples of a recruitment tool that unintentionally discriminated against female applicants and a chatbot that became racist come to mind.
Because banking is a heavily regulated industry, many banks are being cautious to avoid any semblance of discriminatory action that could result in a loss of reputation, not to mention heavy fines. Thus, most machine learning models are being used in low-risk applications such as digital marketing. However, to take full advantage of AI/ML models, banks could enhance their model risk management frameworks. Model risk for a financial institution is defined as the possibility of incurring a financial loss, making incorrect business decisions, misstating external financial disclosures, or damaging the institution's reputation.
To date, the regulatory agencies have issued no guidance specific to AI/ML. SR 11-7, the Federal Reserve Board's guidance on model risk management, is principle-based and hence still applies to all models.
Banks do face some challenges related to AI/ML, but good governance and a sound framework can mitigate the risks.
According to a blog published by Kaspersky Lab, ML models face nine challenges: bad intentions, developer bias, parameters that do not always include ethics, ethical relativity, human behavior changing due to machine learning, false correlation, feedback loops, bad reference data, and trickery. Of these, developer bias, false correlation, and bad reference data are most applicable to models used in banks.
Developer bias can arise when the developer's main intention is to maximize profitability, resulting in algorithms that give preferential treatment to certain demographics in everything from product offerings to pricing. This may lead to a violation of Regulation B. The risk can be mitigated by involving the bank's Compliance group to ensure adherence to regulations.
False correlations can occur in machine learning algorithms such as random forests or gradient boosting because they depend on data with hundreds of variables. Without human input, the model may select variables that are unintuitive and unrelated to the problem at hand. This risk can be mitigated by involving all stakeholders during the development process. Additionally, benchmark models developed using traditional techniques can be run in parallel to compare model performance over time.
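A minimal sketch of the benchmark-in-parallel idea in Python: track the accuracy of the ML model and the traditional benchmark month over month, and flag any month where the ML model falls behind the benchmark by more than a tolerance, a warning sign that the correlations it learned may be spurious. The accuracy figures and the tolerance below are purely illustrative.

```python
def performance_gap_alert(ml_accuracy, benchmark_accuracy, tolerance=0.02):
    """Flag months (1-indexed) where the ML model underperforms the
    benchmark by more than `tolerance`."""
    alerts = []
    for month, (ml_acc, bm_acc) in enumerate(
            zip(ml_accuracy, benchmark_accuracy), start=1):
        if bm_acc - ml_acc > tolerance:
            alerts.append(month)
    return alerts

# Illustrative monthly accuracy series for both models
ml = [0.91, 0.90, 0.88, 0.84]   # ML model degrades over time
bm = [0.87, 0.87, 0.86, 0.87]   # benchmark stays stable
print(performance_gap_alert(ml, bm))  # -> [4]
```

In practice the comparison metric would come from the bank's ongoing monitoring program, but the principle is the same: a stable benchmark gives the degradation of the champion model somewhere to show up.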
Since AI and ML essentially learn from data, data quality is paramount. Most people are familiar with the concept of GIGO (garbage in, garbage out): the algorithm is only as good as the data. Even if the data used to develop the original algorithm is sound, the model will be applied to data it has not been exposed to. Hence, monitoring for data changes should not be overlooked either.
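One common way to monitor such data changes is the population stability index (PSI), which compares the distribution of a score or input variable in production against the distribution seen at development. A minimal sketch in Python; the bin percentages are illustrative, and the thresholds quoted in the docstring are the usual rule of thumb rather than a regulatory standard.

```python
import math

def population_stability_index(expected_pct, actual_pct):
    """PSI over pre-binned distributions (fractions summing to 1).
    Rule of thumb: < 0.10 stable, 0.10-0.25 moderate shift,
    > 0.25 major shift warranting investigation."""
    psi = 0.0
    for e, a in zip(expected_pct, actual_pct):
        e = max(e, 1e-6)  # guard against empty bins
        a = max(a, 1e-6)
        psi += (a - e) * math.log(a / e)
    return psi

# Illustrative score distribution at development vs. in production
dev  = [0.10, 0.20, 0.40, 0.20, 0.10]
prod = [0.05, 0.15, 0.35, 0.25, 0.20]
print(round(population_stability_index(dev, prod), 3))  # -> 0.136
```

Here the PSI of roughly 0.14 would indicate a moderate shift: the production population has drifted from the development sample enough to merit a closer look, even before model performance visibly degrades.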
In conclusion, banks do face some challenges related to AI/ML, but good governance and a sound framework can mitigate the risks.