Our fourth episode, “Artificial Intelligence and Bias”, features guest speakers Mutale Nkonde, Todd Marlin, and Avi Gesser.  I asked our guests for one “actionable, monetizable takeaway” for our listeners.  Todd said:


“I’ll comment on the one monetizable takeaway and add one comment on where I think it’s going.  To me, the thesis here is that the risk is just as important to analyze as the value.  There’s a lot of focus on rushing to create value, whether it’s the automation of decisions, a calculation, or a critical process to get some sort of advantage.  And that’s very important.  But at the same time, organizations need to take a deep look at the risk side of the equation to see who might be negatively affected.  Taking a step back to how the information was created, it comes down to documentation.  Do you know where your AI is in use, and what for?  How it was designed?  How it was tested?  How it’s documented?  Are you routinely revisiting it?  And what frameworks are you using to evaluate this?  Is it one of the 150-odd voluntary frameworks?  And finally, I’d say that, at the end of the day, this is not just engineers; you need to have a cross-functional team here, which is about combining technical know-how with legal, cybersecurity, and privacy expertise to properly manage these risks.  Where do I think this is going?  I think this will be something that will need to appear in most companies’ financial statements as a risk, and Google and Microsoft have already led the way.  They were the first to publicly disclose the risk of AI mis-performance as a note to their financial statements.  And I believe that is going to be a bellwether for other organizations to begin doing the same, because the risks are so significant.”


Interested in hearing more?  Check out Episode 4 here (available September 23, 2021).