U.S. Regulators Warn They Already Have the Power to Go After A.I. Bias — and They're Ready to Use It

Al Drago | Bloomberg | Getty Images

Lina Khan, chair of the Federal Trade Commission (FTC), speaks during the Spring Enforcers Summit at the Department of Justice in Washington, DC, on Monday, March 27, 2023.

  • Four federal U.S. agencies issued a warning on Tuesday that they already have the authority to tackle harms caused by artificial intelligence bias, and they plan to use it.
  • In a joint announcement from the Consumer Financial Protection Bureau, Department of Justice, Equal Employment Opportunity Commission and Federal Trade Commission, regulators laid out some of the ways existing laws would allow them to take action against companies for their use of AI.
  • Still, regulators acknowledged there's room for Congress to act.

Four federal U.S. agencies issued a warning on Tuesday that they already have the authority to tackle harms caused by artificial intelligence bias, and they plan to use it.

The warning comes as Congress is grappling with how it should take action to protect Americans from potential risks stemming from AI. The urgency behind that push has increased as the technology has rapidly advanced with tools that are readily accessible to consumers, like OpenAI's chatbot ChatGPT. Earlier this month, Senate Majority Leader Chuck Schumer, D-N.Y., announced his work on a broad framework for AI legislation, indicating it's an important priority in Congress.

But even as lawmakers attempt to write targeted rules for the new technology, regulators asserted that they already have the tools to pursue companies abusing or misusing AI in a variety of ways.

In a joint announcement from the Consumer Financial Protection Bureau, Department of Justice, Equal Employment Opportunity Commission and Federal Trade Commission, regulators laid out some of the ways existing laws would allow them to take action against companies for their use of AI.

For example, the CFPB is looking into so-called digital redlining, or housing discrimination that results from bias in lending or home valuation algorithms, according to Director Rohit Chopra. The agency also plans to propose rules to ensure AI valuation models for residential real estate have safeguards against discrimination.

"There is not an exemption in our nation's civil rights laws for new technologies and artificial intelligence that engages in unlawful discrimination," Chopra told reporters during a virtual press conference Tuesday.

"Each agency here today has legal authorities to readily combat AI-driven harm," FTC Chair Lina Khan said. "Firms should be on notice that systems that bolster fraud or perpetuate unlawful bias can violate the FTC Act. There is no AI exemption to the laws on the books."

Khan added the FTC stands ready to hold companies accountable for their claims of what their AI technology can do, adding that enforcing against deceptive marketing has long been part of the agency's expertise.

The FTC is also prepared to take action against companies that unlawfully seek to block new entrants to AI markets, Khan said.

"A handful of powerful firms today control the necessary raw materials, not only the vast stores of data but also the cloud services and computing power, that startups and other businesses rely on to develop and deploy AI products," Khan said. "And this control could create the opportunity for firms to engage in unfair methods of competition."

Kristen Clarke, assistant attorney general for the DOJ Civil Rights Division, pointed to the agency's prior settlement with Meta over allegations that the company had used algorithms that unlawfully discriminated on the basis of sex and race in displaying housing ads.

"The Civil Rights Division is committed to using federal civil rights laws to hold companies accountable when they use artificial intelligence in ways that prove discriminatory," Clarke said.

EEOC Chair Charlotte Burrows pointed to the use of AI for hiring and recruitment, noting that it can result in biased decisions if trained on biased datasets. That might look like screening out all candidates who don't look like those in the select group the AI was trained to identify, for example.

Still, regulators acknowledged there's room for Congress to act.

"I do believe that there is there it's important for Congress to be looking at this," Burrows said. "I don't want in any way the fact that I think we have pretty robust tools for some of the problems that we're seeing to in any way undermine those important conversations and the thought that we need to do more as well."

"Artificial intelligence poses some of the greatest modern day threats when it comes to discrimination today and these issues warrant closer study and examination by policymakers and others," said Clarke, adding that in the meantime agencies have "an arsenal of bedrock civil rights laws" to "hold bad actors accountable."

"While we continue with enforcement on the agency side, we've welcomed work that others might do to figure out how we can ensure that we are keeping up with the escalating threats that we see today," Clarke said.
