Policy Makers Examine the Disparate Impact Risk of Artificial Intelligence and Underwriting

By Lexology

Policy makers are showing increasing concern about the fair lending implications of artificial intelligence (AI) in credit underwriting. These concerns echo those raised about credit scoring and automated underwriting systems when those tools were first deployed decades ago. Congress and regulatory agencies are likely to press financial services providers to better understand the AI they use and to ensure that it does not result in unfair treatment of minorities and other protected groups through seemingly neutral criteria that have a disproportionate impact on protected classes of consumers.

Recent Congressional Hearing Highlights Risk From the “Black Box”

On June 26, the newly chartered Artificial Intelligence Task Force of the House Financial Services Committee held its first hearing, entitled “Perspectives on Artificial Intelligence: Where We Are and the Next Frontier in Financial Services.” Members of Congress and witnesses acknowledged AI’s potential to increase access to financial services, but they also aired concerns that AI could result in discrimination against minorities and other underserved groups.

Task Force Chair Rep. Bill Foster (D-IL) questioned how lawmakers could be sure that AI is not biased, given its "black box" nature. Ranking Member Rep. French Hill (R-AR) said that Congress needs to "ask the right questions" about AI so that it benefits consumers. According to witnesses, disparate impact discrimination could result from the interaction of facially neutral attributes; one witness noted that using AI to underwrite used car loans could have a disparate impact on African Americans because of the interaction between a car's higher mileage and the consumer's state of residence. Although witnesses did not expressly call for new legislation, some called for more oversight of AI development and use.

Draft Legislation and Additional Hearings

Members of Congress have introduced legislation to address AI’s potential for discrimination, as well as the privacy risks associated with AI. Last April, Senators Cory Booker (D-NJ) and Ron Wyden (D-OR) and Rep. Yvette Clarke (D-NY) introduced the Algorithmic Accountability Act, which would direct the FTC to issue rules addressing concerns about discrimination and privacy.

While legislation is unlikely to be enacted in the near term, Congressional interest in AI’s potential to increase access to credit and to adversely impact minorities is likely to continue. For example, the House Financial Services Committee’s Task Force on Financial Technology has already scheduled a hearing for July 25, entitled “Examining the Use of Alternative Data in Underwriting and Credit Scoring to Expand Access to Credit.”

Federal Agencies Are Likely to Increase Scrutiny of AI in Financial Services

As Congress begins its focus on AI, the Office of the Comptroller of the Currency (OCC) has already cautioned banks about fair lending risks from AI. In its 2019 Semiannual Risk Perspective, the OCC stated that bank management must understand and monitor underwriting and pricing models for potential disparate impact and other fair lending issues, and must be able to explain and defend the models' decisions. Separately, on June 12, Senators Elizabeth Warren (D-MA) and Doug Jones (D-AL) asked the OCC, the Federal Deposit Insurance Corporation, the Consumer Financial Protection Bureau and the Federal Reserve Board to answer several questions about AI's impact on consumers, including what steps each agency is taking to address potential discrimination and whether the agencies have analyzed the impact of "FinTech algorithms" on credit availability and pricing for minorities.

Financial service providers and their vendors should prepare for heightened attention from policy makers at both the federal and state levels. Model validation and fair lending testing may be increasingly critical for those who employ AI in their underwriting models.
