September 17, 2019 by Jason Contant
Financial institutions looking to adopt and develop best practices on the responsible use of artificial intelligence (AI) should focus on three areas: explainability, bias and diversity, experts say.
A new TD Bank Group survey of 1,200 Canadians found that 72% are comfortable with companies using AI if it means they’ll receive better, more personalized service, but 68% admit they don’t understand the technology well enough to know the risks. In addition to surveying Canadians about their attitudes toward AI, TD engaged a cross-section of experts – from financial services, technology, fintech, academia, and public and not-for-profit organizations – in a roundtable discussion to better understand the risks associated with AI in financial services.
The findings were presented in the report Responsible AI in Financial Services, released at an Economic Club of Canada event Sept. 12.
The report identified three areas of focus as financial institutions look to the future evolution of AI: explainability, bias, and diversity and inclusion.
The roundtable analyzed future-state scenarios in which AI resulted in unintended consequences for customers. It found that communication barriers between executives and engineers, or between companies and customers, often trace back to ‘explainability’ in AI – how an AI system arrived at a conclusion.
Recommendations included that when addressing explainability, companies should implement processes and standards that evolve alongside their models and continuously test for inconsistencies. Experts also said that technologists, government and business leaders need to come to a “clear and agreed upon understanding of the technical capabilities and limitations of AI models so that realistic expectations can be set around explainability, transparency and accountability.”
On bias, experts noted that the term can mean different things in different contexts and to different people. Generally, the concern stems from human bias, which can lead to unfair treatment or discrimination; statistical bias, by contrast, can be useful in an AI model. “The roundtable participants noted that when one characteristic – such as gender, age or ethnicity – is removed from data to eliminate biased outcomes, machine learning models will often create proxies for that same characteristic.”
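The proxy effect the participants describe can be illustrated with a small, hypothetical sketch. The data, the `occupation_code` feature, and the approval rule below are all invented for illustration: gender is removed from the inputs, yet because a remaining feature correlates with it, a model trained only on that feature still produces sharply different outcomes by gender.

```python
import random

random.seed(0)

# Hypothetical synthetic population: "gender" is excluded from the model's
# inputs, but "occupation_code" skews strongly by gender, making it a proxy.
def make_record():
    gender = random.choice(["F", "M"])
    if gender == "F":
        occupation_code = 1 if random.random() < 0.8 else 0
    else:
        occupation_code = 0 if random.random() < 0.8 else 1
    return gender, occupation_code

records = [make_record() for _ in range(10_000)]

# A naive stand-in for a trained model: approve when occupation_code == 0.
# Gender was never an input, yet outcomes differ by gender via the proxy.
approvals = {"F": 0, "M": 0}
totals = {"F": 0, "M": 0}
for gender, occ in records:
    totals[gender] += 1
    if occ == 0:
        approvals[gender] += 1

for g in ("F", "M"):
    print(f"approval rate for {g}: {approvals[g] / totals[g]:.2f}")
```

In this toy setup the approval rates land near 20% for one group and 80% for the other, even though the sensitive characteristic was removed – which is why the experts caution that simply deleting a field does not eliminate biased outcomes.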
Lastly, roundtable participants reflected on diversity and inclusion as they relate to the adoption and implementation of AI, identifying several areas as critical for organizations to consider.
When asked which factors matter most in how companies use AI, respondents cited control over how their data is used (70%), transparency about the use of the technology (55%), and AI-driven decisions being easy to explain and understand (28%).