The FTC takes steps to regulate AI abuses — but will it be effective?

Neil Raden
5 min read · Mar 20, 2022


SUMMARY: A new FTC mandate is intended to crack down on AI abuses. How will it work? Are crucial areas of AI abuse overlooked? Here’s my first-cut analysis.

On April 19, 2021, in a blog post titled “Aiming for truth, fairness, and equity in your company’s use of AI,” Elisa Jillson of the FTC put U.S. organizations on notice, in atypically blunt fashion, that the FTC would use its various powers to investigate AI abuses and presumably take action: “Hold yourself accountable — or be ready for the FTC to do it for you.”

Let’s first examine the breadth of the FTC’s mandate for enforcement:

  • Section 5 of the FTC Act. The FTC Act prohibits unfair or deceptive practices. That would include the sale or use of — for example — racially biased algorithms.
  • Fair Credit Reporting Act. The FCRA comes into play in certain circumstances where an algorithm denies people employment, housing, credit, insurance, or other benefits.
  • Equal Credit Opportunity Act. The ECOA makes it illegal for a company to use a biased algorithm that results in credit discrimination based on race, color, religion, national origin, sex, marital status, age, or because a person receives public assistance.

But there are limitations to the FTC’s oversight. It cannot regulate insurance companies: the McCarran-Ferguson Act of 1945 is a federal law that exempts the business of insurance from most federal regulation, including, to a limited extent, federal antitrust law. (Insurers are not immune to enforcement under the three statutes above, however.) Nor can the FTC regulate banks or federal credit unions.

What can the FTC do? Its principal mission is to enforce civil U.S. antitrust law and promote consumer protection. But in our topsy-turvy government, just days after the blog appeared, the Supreme Court ruled in AMG Capital Management v. FTC that the agency cannot extract monetary relief for consumers when companies are found to use deceptive practices. That takes much of the teeth out of the recent broadside about AI, though the FTC can still seek “a permanent injunction” pending administrative proceedings.

The document lists the usual suspects of AI abuse, but in my view it neglects the most insidious and dangerous ones. Highlighted in the article is:

Discrimination by race or other legally protected classes. For example, COVID-19 prediction models can help health systems combat the virus by efficiently allocating ICU beds, ventilators, and other resources. But as a recent study in the Journal of the American Medical Informatics Association suggests, if those models use data that reflect existing racial bias in healthcare delivery, AI that was meant to benefit all patients may worsen healthcare disparities for people of color.

Another point raised was that “algorithms developed for benign purposes like healthcare resource allocation and advertising resulted in racial bias.” This is a monumental understatement. Training data that fails to represent the population under study, or that encodes bias against protected classes, is such a well-understood failure mode that it almost doesn’t merit comment. Yet a simple representation check, like the sketch below, would catch many of these problems before a model is ever trained.
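To make that concrete, here is a minimal sketch of such a check. The column name, group labels, and reference shares are hypothetical placeholders, not drawn from any real dataset:

```python
# Minimal sketch: compare each group's share of the training data
# against its share of a reference population. All names and numbers
# below are hypothetical placeholders.
import pandas as pd

def representation_gaps(df: pd.DataFrame, column: str,
                        population_shares: dict) -> pd.DataFrame:
    """Each group's share in the data vs. the reference population."""
    data_shares = df[column].value_counts(normalize=True)
    rows = []
    for group, pop_share in population_shares.items():
        data_share = float(data_shares.get(group, 0.0))
        rows.append({"group": group,
                     "data_share": round(data_share, 3),
                     "population_share": pop_share,
                     "gap": round(data_share - pop_share, 3)})
    return pd.DataFrame(rows)

# Hypothetical usage:
# gaps = representation_gaps(training_df, "race",
#                            {"white": 0.60, "black": 0.13, "hispanic": 0.19})
# print(gaps[gaps["gap"].abs() > 0.05])  # flag badly skewed groups
```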

In one notorious case, the algorithm used hospital expenditure per patient as a proxy for medical need and allocated more intensive care to those who had cost more, sending the working poor home with insufficient care. In another case, a university professor developed an AI-driven program to assign benefits under state Medicaid and disability programs, and it was sold to dozens of states. For reasons never explained, the program’s logic was biased against people with diabetes and cerebral palsy. One woman who sued had her 24-hour in-home care reduced to just 10 hours per week. When the developer was asked about his lousy program and whether he should correct it, he replied, “Yeah, I also should probably dust under my bed,” while continuing to collect royalties.
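A toy illustration, with entirely invented numbers, shows how an innocent-looking proxy does the damage: train on dollars spent and you end up ranking patients by access to care rather than by how sick they are:

```python
# Toy illustration with invented numbers: two patients with identical
# medical need, but unequal historical spending because one has had
# worse access to care.
patients = [
    {"name": "A", "chronic_conditions": 4, "past_spending": 9000},
    {"name": "B", "chronic_conditions": 4, "past_spending": 3000},  # less access, not less sick
]

# The "risk score" a cost-proxy model effectively computes:
for p in patients:
    p["risk_score"] = p["past_spending"] / 1000.0

# Care-management slots go to the top of this ranking.
for p in sorted(patients, key=lambda p: p["risk_score"], reverse=True):
    print(f"{p['name']}: {p['chronic_conditions']} conditions, "
          f"score {p['risk_score']:.1f}")
# Patient B, equally sick, lands at the bottom of the list.
```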

“Under the FTC Act, your statements to business customers and consumers alike must be truthful, non-deceptive, and backed up by evidence.” The problem is that many purveyors of AI don’t know whether their models are fair, because most have never done rigorous fairness testing, and explainability is still an immature science. A related problem is models so complex that even their own engineers have no idea whether they are behaving well. And then there are the careless, callous developers who say, “It’s only math,” or “That’s the way we do it.”
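Rigorous fairness testing is genuinely hard, but a first-pass check is cheap. Here is a minimal sketch, assuming binary decisions and a group label, of the classic disparate-impact ratio (the “four-fifths rule” long used in employment law); the group names and sample decisions are hypothetical:

```python
# Minimal sketch: disparate-impact ratio between two groups.
# Input is a list of (group, favorable_outcome) pairs; the group
# names and sample decisions below are hypothetical.
def disparate_impact(outcomes, protected, reference):
    """Ratio of favorable-outcome rates: protected vs. reference group."""
    def rate(group):
        results = [fav for g, fav in outcomes if g == group]
        return sum(results) / len(results) if results else 0.0
    return rate(protected) / rate(reference)

decisions = [("a", True), ("a", False), ("b", True), ("b", True)]
ratio = disparate_impact(decisions, protected="a", reference="b")
print(f"disparate-impact ratio: {ratio:.2f}")
# 1.0 is parity; anything below roughly 0.8 is the traditional red flag.
# This run prints 0.50, which should trigger an investigation.
```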

To wrap it up, there was this admonishment: “To put it in the simplest terms, under the FTC Act, a practice is unfair if it causes more harm than good.”

There are massive problems with AI that were not even covered in this document:

Data brokers: There are thousands of unregulated and often unscrupulous data brokers who gather personal information and sell it to loan officers, mortgage lenders, insurance companies, consumer-products firms, and services that screen job candidates for employers. Much of this information is untraceable, and it includes ridiculous items like a “smoker propensity model.” There are also legitimate data brokers whose records are treated as so authoritative that if you find you’ve been assigned a chargeable auto accident that never occurred, prepare to spend days trying to correct it.

Surveillance: When you have a conversation with someone about running shoes and your iPhone pops up an ad for a nice pair of Nikes, your privacy has been invaded. During the COVID pandemic, several firms analyzed camera images of you to estimate your temperature. Digital phenotyping, a kind of digital redlining, builds a model of you that determines what is presented to you, whether ads or news.

Disinformation: Why the FTC would not address this severe problem escapes me. AI is used to keep you engaged for the sake of ad revenue, serving content meant to persuade you rather than inform you. This may be the greatest threat AI poses to civil society.

Subsequential: Even when an AI model is clean, meaning it doesn’t violate any of the ethical rules, it can still be used to set up a subsequent event that is precisely the opposite. Consider using AI to get a message out to a selected group of people, inviting them to a demonstration to exercise their constitutional right to oppose an election. There is nothing wrong with that on its face, except when it is a predetermined smokescreen to incite the crowd to riot once they arrive.

Conclusion

It is encouraging to see the FTC take steps to hold companies accountable for AI abuses. But considering the significant areas of AI abuse the FTC overlooked, and the recent reduction in its enforcement powers, we shouldn’t take false comfort. There is a long way to go here.


Neil Raden

Consultant, Mathematician, Author; focused on Analytics, Applied AI Ethics, and Data Architecture