Canadian Underwriter

The downside risk of machine learning


April 8, 2019   by Greg Meckbach



Machine learning and artificial intelligence can create liability risk if they make decisions that would be inappropriate or even illegal if a real person made them, a risk expert warns.

“If you train an algorithm with data that has underlying sexist or racist [biases], you may end up making a racist or sexist machine learning algorithm,” said Alex LaPlante, managing director of research at the Toronto-based Global Risk Institute, during a panel discussion at the recent Property and Casualty Insurers’ Risk Management Conference. “Sometimes a machine learning algorithm is the right way to go and maybe sometimes it’s not. I think we need to step back before we all jump on the bandwagon.”
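
To make LaPlante’s point concrete, here is a minimal sketch in Python (all data synthetic, and the feature names “income” and “group” purely hypothetical) of the failure mode she describes: a model fit to historically biased approval decisions simply learns to reproduce that bias.

```python
# Minimal sketch: a classifier trained on biased history absorbs the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

income = rng.normal(50, 15, n)   # legitimate feature (in $000s)
group = rng.integers(0, 2, n)    # hypothetical protected-group label

# Historical approvals were biased: at the same income, group-1
# applicants were approved less often.
logit = 0.08 * (income - 50) - 1.5 * group
approved = rng.random(n) < 1 / (1 + np.exp(-logit))

# Train on the biased history, protected attribute included.
X = np.column_stack([income, group])
model = LogisticRegression().fit(X, approved)

# Identical income, different group: the learned approval probabilities
# differ, i.e. the model has inherited the historical discrimination.
print(model.predict_proba([[50.0, 0]])[0, 1])  # group 0
print(model.predict_proba([[50.0, 1]])[0, 1])  # group 1, noticeably lower
```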

Organizations using machine learning need to take precautions to make sure they are not making discriminatory decisions that would put them offside the law, LaPlante suggested during an interview after the conference with Canadian Underwriter.

“There are real implications in terms of legal liability if you create an algorithm that has discriminatory behaviour,” she said. “Granted, that is sometimes difficult to prove. You as a consumer have to prove that you were discriminated against by the algorithm.”

This could be an issue if a financial institution used machine learning that based decisions – such as credit risk assessments – on postal codes.

“That could kind of lead back to discrimination of a population if a minority group tends to reside in that area and you are not extending credit or extending products,” LaPlante told Canadian Underwriter.
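
The sketch below illustrates that proxy effect (again with synthetic data and hypothetical variable names): even when the protected attribute is dropped from training, a model keyed to postal area can still produce different approval rates for the two groups, because where people live correlates with who they are.

```python
# Minimal sketch: postal area acting as a proxy for a protected attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

group = rng.integers(0, 2, n)   # hypothetical protected attribute
# Residence correlates with group: most group-1 applicants live in area 1.
area = np.where(rng.random(n) < 0.8, group, 1 - group)
income = rng.normal(50, 15, n)

# Historical decisions disadvantaged area 1 and, indirectly, group 1.
logit = 0.08 * (income - 50) - 1.2 * area
approved = rng.random(n) < 1 / (1 + np.exp(-logit))

# Train WITHOUT the protected attribute: only income and postal area.
X = np.column_stack([income, area])
pred = LogisticRegression().fit(X, approved).predict(X)

# A disparate-impact style check: approval rates still differ by group.
print("group 0 approval rate:", pred[group == 0].mean())
print("group 1 approval rate:", pred[group == 1].mean())
```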

AI is used for functions like pattern recognition, self-directed learning, problem-solving and decision-making, LaPlante wrote in Ethics & Artificial Intelligence in Finance, a white paper released April 1 by GRI.

“The idea underlying machine learning is you give a computer program access to volumes of data and let it learn about things. Let it learn about the relationships between variables,” University of Toronto Rotman School of Management professor John Hull said during the P&C Insurers’ risk management conference.
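
Hull’s description is, in effect, the standard supervised-learning loop. As a minimal sketch (numbers and relationship entirely made up), the program below is handed data and left to infer the relationship between the variables on its own:

```python
# Minimal sketch: give the program data, let it learn the relationship.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, (1_000, 1))
y = 3.0 * x[:, 0] + 5.0 + rng.normal(0, 1, 1_000)  # hidden true relationship

model = LinearRegression().fit(x, y)     # "let it learn" from the data
print(model.coef_[0], model.intercept_)  # recovers roughly 3.0 and 5.0
```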

Machine learning has been around for more than 50 years but recent advances in computer processing power and the affordability of larger amounts of data storage have made machine learning more viable, Hull added.

“You probably interact with machine learning a lot more than you are aware of. Pretty much anything that’s Google-based is machine learning,” LaPlante said during the conference, co-hosted by GRI and the Insurance Bureau of Canada and held April 4 at the Toronto Region Board of Trade.

Alex LaPlante, managing director of research, Global Risk Institute

AI poses real ethical concerns for financial institutions, as well as legal, regulatory, model and reputational risks, LaPlante wrote in her white paper.

One way of managing risk is to hire people who understand how the algorithms work.

“Make sure you are hiring the right people, with the underlying knowledge to understand what the algorithms do, when they are most appropriate, when they are not appropriate, when you may just want to go ahead with a traditional statistical model over a machine learning algorithm,” LaPlante said in an interview.

“I think a lot of consumers are unaware of how much their data and how much their interaction with these systems is then used to essentially manipulate them later,” LaPlante said of social media and the Internet of Things during the conference.

“When you go home and you can’t find a show to watch on Netflix, that’s very annoying but it does not have real implications on your life. But now we are getting into the time where machine learning is being used for real decision making that has a vast impact on people’s lives.”



1 Comment for The downside risk of machine learning
  1. Bob says:

    Avoiding this problem is as simple as telling the algorithm to ignore sex/ethnicity; the question is at what point do we do this? This is fine until you observe a clear difference between two groups and the lower-risk group wants the higher-risk group held accountable. The reverse case is also possible, where a high-risk group argues that it should not be labeled as such (case in point: the current issue in Ontario with using postal codes for auto insurance rates). Life insurance, for example, differentiates between men and women due to the marked variance in mortality rates, and this seems to make sense to all of us. It definitely depends on the situation.
