
FairPlay lands $4.5M seed round to help lenders combat racial bias

Mortgage lenders who rely on artificial intelligence and algorithms risk running afoul of fair lending laws if they discriminate against minority groups, women or other protected borrowers — even if that bias is unintentional.

FairPlay, which today announced raising $4.5 million in seed funding, says its “Fairness-as-a-Service” solutions are already being used by lenders including Figure Technologies to help ensure fair lending compliance.

The New York-based startup offers two application programming interfaces (APIs), Fairness Analysis and Second Look, that are designed to help lenders discover bias in algorithms and take a second look at rejected applicants using “cutting edge AI fairness techniques.”

The bottom line is that “more applicants of color, women and other historically disadvantaged people are approved,” the company says.


“FairPlay turns fairness into a business advantage, allowing our users to de-bias digital decisions in real-time and prove to their customers, regulators and the public that they’re taking strong steps to be fair,” FairPlay founder and CEO Kareem Saleh said in a statement.

The seed round, led by Third Prime Capital with participation from FinVC, TTV, Financial Venture Studio, Amara and Nevcaut Ventures, will allow FairPlay to build out its team of machine learning engineers and data scientists.

A recent review of 2.4 million purchase loan applications by The Markup, a nonprofit newsroom, concluded that mortgage lenders were more likely to turn down homebuyers of color than white applicants with similar debt-to-income and loan-to-value ratios, and that algorithms were likely to blame.

But The Markup’s analysis, and other studies relying on Home Mortgage Disclosure Act (HMDA) data to show bias, have been challenged by the lending industry and others who say they don’t fully account for borrower risk. The American Enterprise Institute has disputed The Markup’s analysis, claiming that if credit scores are considered, racial disparities “disappear.”

To publicize its services, FairPlay today unveiled its “Mortgage Fairness Monitor” — a mapping tool that uses HMDA purchase loan data to show “how fair the mortgage market is to Black Americans, women and Hispanic Americans.”

Screen shot of a static version of FairPlay’s National Fairness Map provided to Inman. 

The “National Fairness Maps” FairPlay has created for different groups rely on adverse impact ratios (AIRs) to show how often members of a protected class are approved for purchase mortgages compared with members of a control group. A disclaimer on a static fairness map provided to Inman notes that “Adverse impact ratio is a measure of demographic parity. It does not control for risk.”
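To make the metric concrete, here is a minimal sketch, in Python, of how an adverse impact ratio can be computed from approval counts. It illustrates only the definition above; it is not FairPlay’s code, and the numbers are hypothetical.

```python
# Minimal sketch: the adverse impact ratio (AIR) is the approval rate
# of a protected class divided by the approval rate of a control group.
# The counts below are hypothetical, not HMDA or FairPlay figures.

def adverse_impact_ratio(protected_approved: int, protected_total: int,
                         control_approved: int, control_total: int) -> float:
    """Ratio of the protected class's approval rate to the control group's."""
    protected_rate = protected_approved / protected_total
    control_rate = control_approved / control_total
    return protected_rate / control_rate

# Hypothetical example: 700 of 1,000 protected-class applicants approved
# (70 percent) versus 850 of 1,000 control-group applicants (85 percent).
air = adverse_impact_ratio(700, 1_000, 850, 1_000)
print(f"AIR = {air:.2f}")  # AIR = 0.82
```

A ratio of 1.0 means both groups are approved at the same rate; under the “four-fifths” rule of thumb long used in U.S. employment law, a ratio below 0.8 is commonly treated as a sign of adverse impact.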

FairPlay used adverse impact ratios computed from HMDA purchase loan data to generate mortgage fairness scores for homebuyers in every county and major city in the U.S., and to rank the 50 largest purchase mortgage lenders.

In an email, Saleh said the map and the APIs “are distinct products.” While the map relies on HMDA data, “the APIs rely on lender-provided inputs, outputs and outcomes.”

FairPlay’s Fairness Analysis API, he said, “has the ability to apply any definition of fairness, including those that control for risk.”

The Second Look API “is all about controlling for risk,” he said. “It seeks to identify declined borrowers who would have performed as well as the riskiest borrowers that a lender currently approves. Because Second Look applies AI fairness techniques that are sensitive to protected status, the declined borrowers we are able to approve come from historically disadvantaged communities.”
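Saleh’s description suggests selection logic along the following lines. The sketch below is one hedged reading of that quote, not FairPlay’s implementation; the applicant records, risk scores and function names are all hypothetical, and it assumes the lender’s model already produces a per-applicant risk estimate.

```python
# Hypothetical sketch of the "second look" idea: flag declined
# applicants whose modeled default risk is no worse than that of the
# riskiest applicant the lender already approves. Higher risk_score
# means riskier; all values below are made up.

from dataclasses import dataclass

@dataclass
class Applicant:
    applicant_id: str
    risk_score: float  # hypothetical model estimate of default risk
    approved: bool

def second_look(applicants: list[Applicant]) -> list[Applicant]:
    """Return declined applicants within the lender's own revealed risk tolerance."""
    # The riskiest borrower the lender currently approves defines an
    # implicit risk cutoff.
    cutoff = max(a.risk_score for a in applicants if a.approved)
    return [a for a in applicants if not a.approved and a.risk_score <= cutoff]

pool = [
    Applicant("A1", 0.12, approved=True),
    Applicant("A2", 0.31, approved=True),   # riskiest approval sets the cutoff
    Applicant("A3", 0.22, approved=False),  # within tolerance: gets a second look
    Applicant("A4", 0.48, approved=False),  # outside tolerance
]
print([a.applicant_id for a in second_look(pool)])  # ['A3']
```

In practice a lender’s cutoff would come from its credit policy rather than from its single riskiest approval, but the principle Saleh describes is the same: declined applicants who model no worse than borrowers the lender already accepts get a second look.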

“We initially built the Mortgage Fairness Monitor as an internal tool to understand the state of mortgage fairness nationwide, and when we saw what the maps revealed we realized that it was in the public interest to make this information available to everyone,” Saleh said.

Asked how useful the county- and city-level data on the interactive map would be if disparities also exist at the neighborhood level, Saleh said FairPlay’s interactive map will compute adverse impact ratios “all the way down to the census tract level.”

He said FairPlay’s lender APIs compute AIR “with even more granularity — all the way down to the block group level, which includes subdivisions of census tracts.”

Saleh said FairPlay’s APIs “are live and are currently being tested by several lenders,” including Figure.

In a statement, Figure Chief Compliance Officer Rory Birmingham called FairPlay “an incredible resource” that’s “intuitive, easy to use and hugely beneficial for lenders like us who aim to be best-in-class on fairness and inclusion.”

Keith Hamlin, managing partner at FairPlay’s leading backer, Third Prime, said in a statement that “algorithmic bias presents a complex and persistent challenge that is rife with reputational and regulatory risk.” FairPlay’s solution, he said, “de-biases algorithmic underwriting and decision making, enabling its clients to navigate increasing regulatory scrutiny, rapidly iterate new and existing products, and actually deliver on the promise of fairness, inclusion and upward mobility for modern families and communities.”

Lenders who rely on artificial intelligence and algorithms are watching closely to see if federal regulators follow through on a proposal to rescind broad protections for lenders who unintentionally discriminate against minority borrowers.

In June, the Department of Housing and Urban Development announced its intention to reinstate rules put in place by the Obama administration in 2013 for addressing “disparate impact” — discriminatory practices that are unintentional, but nevertheless unjustified.

The Trump administration last year published a new disparate impact rule that would have made it harder for regulators, affordable housing groups, and private citizens to win discrimination claims under the Fair Housing Act.

Fair housing and civil rights groups won a preliminary court injunction to stop implementation of the Trump administration’s disparate impact rule, which hasn’t taken effect.

In one of 291 comments HUD received on reinstating the 2013 disparate impact rule, the Center for Responsible Lending said the rule gives lenders a “strong incentive to root out discriminatory practices while identifying more fair and accurate means of assessing creditworthiness, pricing mortgage products, and underwriting homeowners’ insurance.”

Today, the group said, lenders “often seek to identify the least discriminatory underwriting criteria that also are reliable indicators of risk. As more information is available on prospective borrowers, lenders assess the utility of new variables, and can then identify the handful of factors that, collectively, are sufficiently predictive of risk. The lender then tests that collection of variables for discriminatory impact. Because different factors often correlate, a lender can substitute a different criterion for one that through testing reveals a discriminatory impact and repeat the testing process. Through this process, lenders can isolate and eliminate those variables that cause unnecessary discriminatory impact, without compromising identification of credit risk.”
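The loop the group describes (test a set of variables for discriminatory impact, swap in correlated substitutes for the offenders, and retest) can be sketched as follows. Everything here is a hypothetical stand-in: the substitution table, the impact numbers and the fit_and_measure helper all take the place of a lender’s own modeling and fair lending testing machinery.

```python
# Hypothetical sketch of the iterative de-biasing loop described above.
# Each variable's "impact" is a made-up adverse impact ratio; in a real
# pipeline it would come from refitting the model and testing outcomes.

SUBSTITUTES = {
    # made-up correlated alternatives for variables that test poorly
    "zip_code": "debt_to_income",
    "years_at_address": "payment_history",
}

def fit_and_measure(variables: list[str]) -> dict[str, float]:
    """Stand-in for model fitting plus impact testing: returns a fake AIR per variable."""
    fake_air = {"zip_code": 0.71, "credit_utilization": 0.93, "debt_to_income": 0.88,
                "years_at_address": 0.76, "payment_history": 0.90}
    return {v: fake_air.get(v, 0.90) for v in variables}

def debias(variables: list[str], threshold: float = 0.8, max_rounds: int = 5) -> list[str]:
    """Swap out or drop variables whose measured impact falls below the threshold, then retest."""
    for _ in range(max_rounds):
        impact = fit_and_measure(variables)
        offenders = [v for v in variables if impact[v] < threshold]
        if not offenders:
            return variables  # every remaining variable passes the impact test
        for v in offenders:
            substitute = SUBSTITUTES.get(v)
            if substitute is None:
                variables.remove(v)  # no correlated substitute: drop the variable
            else:
                variables[variables.index(v)] = substitute
    return variables

print(debias(["zip_code", "credit_utilization", "years_at_address"]))
# ['debt_to_income', 'credit_utilization', 'payment_history']
```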

