
Title: Artificial Intelligence in Finance: Putting the Human in the Loop
Authors: Zetzsche, Dirk Andreas; Arner, Douglas W.; Buckley, Ross P.; Tang, Brian
Keywords: Fintech
Regtech
Artificial intelligence
Human in the loop
Financial regulation
Issue Date: 2020
Citation: Zetzsche, Dirk Andreas and Arner, Douglas W. and Buckley, Ross P. and Tang, Brian, Artificial Intelligence in Finance: Putting the Human in the Loop (February 1, 2020). Available at SSRN: https://ssrn.com/abstract=3531711
Abstract: Finance has become one of the most globalized and digitized sectors of the economy. It is also one of the most regulated sectors, especially since the 2008 Global Financial Crisis. Globalization, digitization and money are propelling AI in finance forward at an ever-increasing pace. This paper develops a regulatory roadmap for understanding and addressing the increasing role of AI in finance, focusing on human responsibility: the idea of “putting the human in the loop”, in particular to address “black box” issues. Part I maps the various use-cases of AI in finance, highlighting why AI has developed so rapidly in finance and is set to continue to do so. Part II then highlights the range of potential issues which may arise as a result of the growth of AI in finance. Part III considers the regulatory challenges of AI in the context of financial services and the tools available to address them, and Part IV highlights the necessity of human involvement. We find that the use of AI in finance comes with three regulatory challenges: (1) AI increases information asymmetries regarding the capabilities and effects of algorithms between users, developers, regulators and consumers; (2) AI enhances data dependencies, as different days’ data sources may alter operations, effects and impact; and (3) AI enhances interdependency, in that systems can interact with unexpected consequences, enhancing or diminishing effectiveness, impact and explainability. These issues are often summarized as the “black box” problem: no one understands how some AI operates or why it has done what it has done, rendering accountability impossible. Even if regulatory authorities possessed unlimited resources and expertise – which they clearly do not – regulating the impact of AI by traditional means is challenging. To address this challenge, we argue for strengthening the internal governance of regulated financial market participants through external regulation. Part IV thus suggests that the most effective path forward involves regulatory approaches which bring the human into the loop, enhancing internal governance through external regulation. In the context of finance, the post-Crisis focus on personal and managerial responsibility systems provides a unique and important external framework to enhance internal responsibility in the context of AI, by putting a human in the loop through regulatory responsibility, augmented in some cases with AI review panels. This approach – AI-tailored manager responsibility frameworks, augmented in some cases by independent AI review committees, as enhancements to the traditional three lines of defence – is in our view likely to be the most effective means for addressing AI-related issues not only in finance – particularly “black box” problems – but potentially in any regulated industry.
Description: CFTE Academic Paper Series: Centre for Finance, Technology and Entrepreneurship, no. 1.
Persistent Identifier: http://hdl.handle.net/10722/281749
SSRN: https://ssrn.com/abstract=3531711

 

DC Field | Value | Language
dc.contributor.author | Zetzsche, DA | -
dc.contributor.author | Arner, DW | -
dc.contributor.author | Buckley, RP | -
dc.contributor.author | Tang, B | -
dc.date.accessioned | 2020-03-24T09:07:02Z | -
dc.date.available | 2020-03-24T09:07:02Z | -
dc.date.issued | 2020 | -
dc.identifier.citation | Zetzsche, Dirk Andreas and Arner, Douglas W. and Buckley, Ross P. and Tang, Brian, Artificial Intelligence in Finance: Putting the Human in the Loop (February 1, 2020). Available at SSRN: https://ssrn.com/abstract=3531711 | -
dc.identifier.uri | http://hdl.handle.net/10722/281749 | -
dc.description | CFTE Academic Paper Series: Centre for Finance, Technology and Entrepreneurship, no. 1. | -
dc.description.abstract | Finance has become one of the most globalized and digitized sectors of the economy. It is also one of the most regulated sectors, especially since the 2008 Global Financial Crisis. Globalization, digitization and money are propelling AI in finance forward at an ever-increasing pace. This paper develops a regulatory roadmap for understanding and addressing the increasing role of AI in finance, focusing on human responsibility: the idea of “putting the human in the loop”, in particular to address “black box” issues. Part I maps the various use-cases of AI in finance, highlighting why AI has developed so rapidly in finance and is set to continue to do so. Part II then highlights the range of potential issues which may arise as a result of the growth of AI in finance. Part III considers the regulatory challenges of AI in the context of financial services and the tools available to address them, and Part IV highlights the necessity of human involvement. We find that the use of AI in finance comes with three regulatory challenges: (1) AI increases information asymmetries regarding the capabilities and effects of algorithms between users, developers, regulators and consumers; (2) AI enhances data dependencies, as different days’ data sources may alter operations, effects and impact; and (3) AI enhances interdependency, in that systems can interact with unexpected consequences, enhancing or diminishing effectiveness, impact and explainability. These issues are often summarized as the “black box” problem: no one understands how some AI operates or why it has done what it has done, rendering accountability impossible. Even if regulatory authorities possessed unlimited resources and expertise – which they clearly do not – regulating the impact of AI by traditional means is challenging. To address this challenge, we argue for strengthening the internal governance of regulated financial market participants through external regulation. Part IV thus suggests that the most effective path forward involves regulatory approaches which bring the human into the loop, enhancing internal governance through external regulation. In the context of finance, the post-Crisis focus on personal and managerial responsibility systems provides a unique and important external framework to enhance internal responsibility in the context of AI, by putting a human in the loop through regulatory responsibility, augmented in some cases with AI review panels. This approach – AI-tailored manager responsibility frameworks, augmented in some cases by independent AI review committees, as enhancements to the traditional three lines of defence – is in our view likely to be the most effective means for addressing AI-related issues not only in finance – particularly “black box” problems – but potentially in any regulated industry. | -
dc.language | eng | -
dc.subject | Fintech | -
dc.subject | Regtech | -
dc.subject | Artificial intelligence | -
dc.subject | Human in the loop | -
dc.subject | Financial regulation | -
dc.title | Artificial Intelligence in Finance: Putting the Human in the Loop | -
dc.type | Others | -
dc.identifier.email | Arner, DW: douglas.arner@hku.hk | -
dc.identifier.email | Tang, B: bwtang@hku.hk | -
dc.identifier.authority | Arner, DW=rp01237 | -
dc.description.nature | published_or_final_version | -
dc.identifier.ssrn | 3531711 | -
dc.identifier.hkulrp | 2020/006 | -
