Why Consumers Deserve A Right To Machine Learning Explainability

By Kareem Saleh
August 12, 2019


Kareem Saleh is executive vice president at ZestFinance, a software company that helps banks and lenders build, run and monitor fully explainable machine learning underwriting models. Previously, he spent several years in the Obama administration, where he served as Chief of Staff to the President's Special Envoy for Climate Change at the State Department and as Senior Advisor to the CEO of the Overseas Private Investment Corporation. Before entering the U.S. Government, he was deputy general counsel with Softcard (acquired by Google), a mobile wallet startup founded by AT&T, T-Mobile and Verizon. He's a graduate of the University of Chicago and Georgetown University Law Center.

As artificial intelligence and machine learning reshape a growing number of industries, regulators overseeing everything from medicine to mortgage lending are wrestling with a fundamental question: How do you explain to people the answers that come out of machine learning models?

In many areas, the debates have moved from theoretical to urgent almost overnight. Last year the U.S. Food and Drug Administration approved the first two artificial intelligence-based medical devices, one for detecting diabetic retinopathy, an eye disease, and another that can spot potential strokes in patients. Understanding why an ML model came up with a specific result could be a matter of life and death for these patients.

The problem is a serious one in lending, where a decision about a loan could keep someone from buying a house and moving up the economic ladder. We know that ML can help lenders approve more people who are well-equipped to repay their loans but who, because of a period of joblessness or a lack of credit history, don't fit into the traditional credit risk box. Banks, however, are required to be able to explain why they approved (or rejected) a loan. If they can't get that level of explainability from their ML models, they may shy away from using the technology.

Regulatory agencies around the globe seem to be quietly converging on one core principle: Advances in machine learning must be matched by advances in machine explaining.

One of the earliest moves in this direction came in Europe's General Data Protection Regulation, which raised the idea of a citizen's "right to explanation" when an algorithm makes a decision that affects them. European lawmakers ultimately stopped short of enshrining such explanations as an explicit legal right but made clear the idea remains a fundamental goal.

A similar philosophy can be found on this side of the Atlantic at a growing array of regulatory agencies. In a report released this spring, the Office of the Comptroller of the Currency put it this way: "New technology and systems for evaluating and determining creditworthiness, such as machine learning, may add complexity while limiting transparency. Bank management should be able to explain and defend underwriting and modeling decisions."

In the old days of basic credit scores, simple algorithms followed recipes most anyone could quickly understand--toss in a handful of important variables and mix them together with some middle-school math. No longer. In today's world, where AI systems can evaluate thousands of variables using cutting-edge mathematics, that complexity can make decisions both more accurate and more inscrutable.
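To make the contrast concrete, here is a purely illustrative toy scorecard in that older style. Everything about it is hypothetical (the variable names, weights and base score are invented for demonstration), but it captures how little math a traditional score involves.

```python
# A toy scorecard in the spirit of traditional credit scoring.
# All variables, weights, and offsets are hypothetical, invented for illustration.

def toy_credit_score(utilization, payment_history_pct, years_of_credit, recent_inquiries):
    """A weighted sum of a handful of variables, easy to read and easy to explain."""
    score = 600                                 # hypothetical base score
    score += 120 * (payment_history_pct / 100)  # on-time payment rate helps
    score -= 80 * utilization                   # high revolving utilization hurts
    score += 4 * min(years_of_credit, 25)       # longer credit history helps, capped
    score -= 15 * recent_inquiries              # recent hard inquiries hurt
    return score

print(toy_credit_score(utilization=0.35, payment_history_pct=98,
                       years_of_credit=7, recent_inquiries=1))
```

Every point of the result traces back to a single line of arithmetic, which is exactly the kind of transparency a thousand-variable model gives up.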

To explain why it's so hard to let consumers peek into "black box" models, Zest CEO Douglas Merrill put it this way in recent testimony to Congress: "Many explainability techniques are inconsistent, inaccurate, computationally expensive, or fail to spot discriminatory outcomes."

The best techniques, by contrast, offer explanations that provide consumers with relevant information they can both understand and act upon. We're trying to build explainability into our software tools to make ML credit models transparent, providing users and regulators alike with a clear picture of how they work.

The key is to avoid a pitfall common in some ML systems: they simplify the underlying model to make it explainable--and in the process obscure the true picture of what is really driving the model's decisions. Our method, derived from game theory and multivariate calculus, avoids that problem, calculating each variable's relative importance to the final score by analyzing its interaction with other variables.
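To give a sense of what a game-theoretic attribution looks like in practice, here is a minimal, self-contained sketch of one widely used approach: Shapley values estimated by sampling random feature orderings. This is a generic textbook technique, not necessarily the exact method described above, and the toy model, feature names and inputs are all hypothetical.

```python
# Illustrative sketch: Shapley-value feature attribution via permutation sampling.
# A generic technique shown for explanation purposes; the model and data are placeholders.
import random

def shapley_attributions(predict, applicant, background, n_samples=2000, seed=0):
    """Estimate each feature's contribution to predict(applicant) relative to a
    'background' (average) applicant by revealing features in many random orders."""
    rng = random.Random(seed)
    n = len(applicant)
    contrib = [0.0] * n
    for _ in range(n_samples):
        order = list(range(n))
        rng.shuffle(order)
        current = list(background)       # start from the background applicant
        prev = predict(current)
        for i in order:
            current[i] = applicant[i]    # reveal feature i
            now = predict(current)
            contrib[i] += now - prev     # marginal contribution in this ordering
            prev = now
    return [c / n_samples for c in contrib]

# A tiny stand-in for a complex underwriting model (hypothetical formula).
def toy_model(features):
    income, utilization, inquiries = features
    return (0.5 + 0.3 * income - 0.4 * utilization - 0.1 * inquiries
            + 0.2 * income * (1.0 - utilization))

applicant = [0.9, 0.2, 0.1]      # hypothetical normalized inputs
background = [0.5, 0.5, 0.5]
print(shapley_attributions(toy_model, applicant, background))
```

Because each feature's marginal contribution is measured with different combinations of the other features already revealed, the resulting attributions reflect interactions between variables rather than scoring each input in isolation.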

We think it's that kind of deep transparency that regulators will increasingly demand.

(Views expressed in this article do not necessarily reflect policy of the Mortgage Bankers Association, nor do they connote an MBA endorsement of a specific company, product or service. MBA Insights welcomes your submissions. Inquiries can be sent to Mike Sorohan, editor, at msorohan@mba.org; or Michael Tucker, editorial manager, at mtucker@mba.org.)
