Our fourth episode, “Artificial Intelligence and Bias”, features guest speakers Mutale Nkonde, Todd Marlin, and Avi Gesser.  I asked our guests for one “actionable, monetizable takeaway” for our listeners.  Avi said:

“I think I’ll cheat a little bit and say that the actionable takeaway is that this area is going to get a lot more scrutiny over time, as people realize that decisions are being made for them by machines about whether they get hired, whether they get a loan, whether they get insurance, all sorts of things.  There’s going to be more scrutiny.  People are going to want to understand how a decision was made, and whether they have an appeal right.  And regulators are going to care a lot more about the governance.  So to anticipate those kinds of challenges and prepare for them, companies and governments are going to have to have very good internal processes in place.

“That will include accountability: who’s responsible for the AI program?  Approvals: what approvals were made, and what appeal rights were given to people who were negatively impacted by the AI decision?  What’s the documentation on that?  Explainability: do you understand exactly how the AI worked?  What were the inputs, and what caused a particular result to go one way or the other?  Oversight of all of that.  An inventory of all your AI models, especially the high-risk models.  Is there ongoing monitoring?  These AI systems are self-learning, so they can drift.  So even if they work perfectly in March, they may not be working the way you think they are by October.  What are the policies and procedures?  What’s the training for the people who are building and operating the models?  Do they understand these risks?  What are they doing to make sure that these risks aren’t materializing?  What’s the transparency?  Do people know that they are being, or could be, negatively affected by a model?  Do they understand that there may not be a human in the loop in these decisions?

“To the extent you’re using AI that’s prepared or operated by third parties, what’s the diligence that was done on that?  I don’t think you’re going to be able to say, ‘Oh, well, that’s our vendor, sorry, we don’t really have much insight into what they’re doing.’  I think if it’s your customers who are being affected, or you’re the one operating the AI, the fact that you’ve got the AI from a vendor is going to put a burden on you to make sure that they’re doing what you would do if you were operating the AI yourself.”
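
The drift point is easy to under-appreciate, so here is a purely illustrative sketch (not anything discussed on the episode) of the kind of ongoing monitoring Avi describes.  It compares a model’s current score distribution against a validation-time baseline using a population stability index, one common drift metric; the data, the names, and the 0.2 threshold are all hypothetical.

```python
# Illustrative sketch only: a minimal drift check for a deployed model,
# following the "works in March, may drift by October" point above.
# All names (baseline_scores, current_scores, PSI_THRESHOLD) and the
# threshold value are hypothetical placeholders.
import numpy as np

PSI_THRESHOLD = 0.2  # a common rule-of-thumb cutoff for significant drift

def population_stability_index(baseline, current, bins=10):
    """Compare two score distributions; a larger PSI means more drift."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    b_counts, _ = np.histogram(baseline, bins=edges)
    c_counts, _ = np.histogram(current, bins=edges)
    # Convert counts to proportions, clipping to avoid log(0).
    b_frac = np.clip(b_counts / b_counts.sum(), 1e-6, None)
    c_frac = np.clip(c_counts / c_counts.sum(), 1e-6, None)
    return float(np.sum((c_frac - b_frac) * np.log(c_frac / b_frac)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    baseline_scores = rng.normal(0.50, 0.10, 10_000)  # e.g., March scores
    current_scores = rng.normal(0.58, 0.12, 10_000)   # e.g., October scores
    psi = population_stability_index(baseline_scores, current_scores)
    if psi > PSI_THRESHOLD:
        print(f"PSI={psi:.3f}: drift detected; escalate for review")
    else:
        print(f"PSI={psi:.3f}: within tolerance")
```

In practice, a check like this would run on a schedule against production scoring logs, with any drift above the threshold escalated to whoever is accountable for the model under the governance program.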

Interested in hearing more?  Check out Episode 4 here (available September 23, 2021).