Statistical bias in context — AI didn’t invent quantitative methods of bias

Neil Raden
Sep 2, 2021

SUMMARY: An effective approach to AI Ethics must reckon with bias, algorithmic discrimination, and privacy. These terms have a historical context that should be understood — if we want to deploy AI ethically. This time around, we delve into bias.

In the expanding universe of AI Ethics discourse, the three most prominent concepts discussed are bias, discrimination, and privacy. Bias is the pre-condition that leads to discrimination, so we can say there are just two.

This article is about bias and its deliberate, systematic use in quantitative systems that pre-date AI. Part one examines the history and politics of quantitative methods deliberately used to foster bias. Part two takes up the long history of the "right to privacy" and how its absence continues to encourage discrimination.

The manipulation of numbers and functions today is a continuation of a practice at least 120 years old. Statisticians of the highest order deliberately used their toolkit to misinform and obfuscate, especially about minorities. As Mark Twain put it in 1907, attributing the line to British Prime Minister Benjamin Disraeli: "There are lies, damned lies and statistics."

There is nothing new in AI Ethics. So why do we pretend there is? Bias in AI is just the latest symptom of a practice going back almost 150 years, during which statisticians used cooked-up data to justify everything from slavery and Jim Crow to immigration restrictions, intelligence testing, and the oppression of women. And it's still happening.

Here's a disconcerting historical note on statistical bias: in 1890, the Prudential Insurance Company cut the death benefits (but not the premiums) for "Negroes" by one-third, backed by rigorous statistical modeling. In May 1896, Frederick L. Hoffman, its chief statistician/actuary, published a 330-page article in the prestigious Publications of the American Economic Association intended to prove, with statistical reliability, that the American Negro was uninsurable and could never rise above his station; it was his genetic destiny. All of this he claimed to prove with meticulous statistical methods. More than proving the uninsurability of the American Negro, Hoffman's publication demonstrated the social and economic power of statistical methods: with ease, they could be mobilized to buttress the interests and practices of private industry and white supremacy. See Frederick Hoffman's Race Traits: The Origins of Statistical Bias Towards Black People.

Today, attempting to solve the problem by imbuing organizations with ethical principles is not the answer (Enron, arguably the most unethical company in corporate history, had a 75-page ethics manual). It is not AI itself that is flawed. It is us. And what does the burgeoning AI Ethics community do? We wring our hands. In a previous AI ethics article, I wrote:

Ethics can’t be taught in a class or a seminar, or even in a book. You learn ethics as you mature. Or you don’t. How much is there to know when it comes to developing systems in the social context? What is there to think about? Don’t unfairly discriminate, don’t engage in disinformation, don’t invade people’s privacy, don’t conduct unfair computational classification and surveillance. Just don’t do this stuff. But it’s gotten obvious that thinking about ethics is not the solution.

A particularly egregious use of deliberately false statistics is how insurance companies use creditworthiness in accepting or rating personal lines insurance (life, auto, and homeowners) everywhere except Hawaii, California, Massachusetts, and Michigan. As I've argued before, it is a bold, disgraceful, and deliberately racist technique.

Insurance companies use FICO scores, pointing to various studies conducted by consulting firms that indicate a clear relationship between a poor credit score and higher risk. There is a term for this in the new field of AI Ethics: fairwashing, promoting a false explanation that a machine learning model respects some ethical values when it doesn't. African Americans pay higher premiums for this coverage, particularly auto, which is mandatory and essentially a regressive tax. The excuses given are, incredibly, "a lower FICO score shows that they can't manage money," or "they may lack the means to maintain their vehicle, giving rise to fraudulent theft claims." There is no statistical model behind this fairwashing. The working poor have low FICO scores because they don't have enough money, not because they're crooks. Where middle-class people can weather a crisis or two, a minor incident can knock the working poor off the rails.
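To make the mechanism concrete, here is a minimal, hypothetical sketch (invented numbers, not any insurer's actual model). Even when race is never an input, a "race-blind" surcharge rule built on a correlated proxy such as a credit score reproduces the disparate outcome, despite identical underlying risk.

```python
# Hypothetical sketch: a "race-blind" surcharge rule applied to a proxy
# variable (credit score) that is correlated with group membership still
# produces a disparate outcome, even though true risk is identical.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

group = rng.integers(0, 2, n)                       # 0 = group A, 1 = group B
# Assumption: group B's credit scores run ~40 points lower for historical
# and economic reasons; true claim risk is distributed the same in both groups.
credit_score = rng.normal(700 - 40 * group, 50, n)
true_risk = rng.normal(1.0, 0.2, n)

surcharged = credit_score < 660                     # the "race-blind" pricing rule

print("Surcharge rate, group A:", round(surcharged[group == 0].mean(), 3))
print("Surcharge rate, group B:", round(surcharged[group == 1].mean(), 3))
print("Mean true risk, group A:", round(true_risk[group == 0].mean(), 3))
print("Mean true risk, group B:", round(true_risk[group == 1].mean(), 3))
```

Under these assumed numbers, roughly half of group B is surcharged versus about a fifth of group A, even though both groups carry the same true risk. That is the proxy effect in miniature.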

And this is from an NPR article on residential mortgages: redlining was the historical practice of banks marking off neighborhoods where they would not offer residential mortgages, codified in the Underwriting Manual of the Federal Housing Administration, which said that "incompatible racial groups should not be permitted to live in the same communities."

That meant loans to African-Americans could not be insured. Redlining has been outlawed for decades, but its effects can still be seen in segregated neighborhoods, where African-Americans have never enjoyed the wealth appreciation that comes with owning a residence in a "good" neighborhood.

The suburban home was seen as a realm where privacy could flourish. White flight to the suburbs was promoted by government institutions, especially the FHA, which devised lending practices based on complex statistical models to sustain segregation through the funding of affordable suburban housing. Mortgage applicants were evaluated on race, job instability, and failed marriage, "on the excuse of foreclosure or financial risk, but effectively as a means of racial segregation." An often-overlooked requirement was that families with working wives were disqualified.

It is illegal to evaluate credit, education, or employment applications based on race, gender, age, or religion. Yet racism in our healthcare system is truly systemic. Long-standing myths that African-Americans have thicker skin and tolerate pain more easily than their white counterparts are patently false. For more, see Millions of black people affected by racial bias in health.

In emergency rooms, epidemiologists were surprised to find that people who self-identified as black were generally assigned lower risk scores than equally sick white people. As a result, black people were less likely to be referred to programs that provide more personalized care. See: There is no such thing as race in health-care algorithms (PDF link).

The algorithm assigned risk scores to patients based on total healthcare costs accrued in one year. The question is: were the designers that dumb, or was it deliberate? Higher healthcare costs are associated with greater health needs, but surely there had to be other variables. African-Americans in the data set had overall healthcare costs similar to those of the average white person.

But if the designers had looked more closely at the data, it would have been obvious that the average black person was also substantially sicker than the average white person, with a greater prevalence of diabetes, anemia, kidney failure, and high blood pressure. Nevertheless, the data showed that the care provided to black people cost an average of $1,800 less per year than the care given to a white person with the same number of chronic health problems.

That gap was due, clearly, to systemic racism by healthcare providers. Because the algorithm assigned people to high-risk categories based on costs alone, black people had to be sicker than white people before being referred for additional help. Only 17.7% of patients that the algorithm assigned to receive extra care were black; the researchers calculated that the proportion would be 46.5% if the algorithm were unbiased.
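A small simulation makes the mechanism visible. This is only a sketch with invented numbers, not the model or data from the study: when the referral score is annual cost rather than health need, a group whose care is systematically cheaper for the same sickness ends up under-referred, and its members have to be sicker to cross the threshold.

```python
# Illustrative sketch with assumed numbers, not the actual model or data from
# the study: using annual cost as a proxy for health need under-refers a group
# whose care is systematically cheaper for the same level of sickness.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

group = rng.integers(0, 2, n)              # 0 = white, 1 = black (simplified)
chronic_conditions = rng.poisson(2.0, n)   # same sickness distribution in both groups

# Assumption: for the same number of chronic conditions, group 1 accrues
# roughly $1,800 less in annual spending (unequal access, not lower need).
annual_cost = chronic_conditions * 7_000 - 1_800 * group + rng.normal(0, 500, n)

# Cost-based "risk score": refer the top 20% of spenders for extra care.
referred = annual_cost >= np.quantile(annual_cost, 0.80)

print("Share of referrals from group 1:", round(group[referred].mean(), 3))
print("Avg conditions, referred group 0:", round(chronic_conditions[referred & (group == 0)].mean(), 2))
print("Avg conditions, referred group 1:", round(chronic_conditions[referred & (group == 1)].mean(), 2))
```

In this toy setup, group 1 ends up well under half of the referrals, and the group 1 patients who do get referred are, on average, sicker than the referred group 0 patients, which is exactly the pattern the researchers reported.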

My take

In a previous article, I wrote about the FTC taking a stand on abuses of AI, one that applies to every organization, from the largest to the smallest. In uncharacteristically blunt language, it stated, "Hold yourself accountable — or be ready for the FTC to do it for you."

It proposed no new regulations but cited three existing laws that it will apply:

  • Section 5 of the FTC Act. The FTC Act prohibits unfair or deceptive practices. That would include the sale or use of — for example — racially biased algorithms.
  • Fair Credit Reporting Act. The FCRA comes into play in certain circumstances where an algorithm denies people employment, housing, credit, insurance, or other benefits.
  • Equal Credit Opportunity Act. The ECOA makes it illegal for a company to use a biased algorithm that results in credit discrimination based on race, color, religion, national origin, sex, marital status, age, or because a person receives public assistance.

The use of the term "algorithm" alone is inadequate, because the data and the model assumptions are just as likely, if not more so, to produce harmful output.

In another case, the New York State Department of Financial Services (DFS) sent a Circular Letter with far-reaching impact, addressed to "All insurers authorized to write life insurance in New York State." In recent years, insurers across lines including life, accident, health, disability, and long-term care have begun to use unregulated data in their pricing, reserving, underwriting, and even claims adjudication at an individual level. From the letter:

An insurer should not use an external data source, algorithm or predictive model for underwriting or rating purposes unless the insurer can establish that the data source does not use and is not based in any way on race, color, creed, national origin, status as a victim of domestic violence, past lawful travel, or sexual orientation in any manner, or any other protected class.

That section applied doubly to privacy and bias. First, keep in mind that insurance companies are regulated by the states, not the federal government. Each of the 56 jurisdictions may or may not follow this guidance, but in practice, most states follow the lead of the NYS DFS. The letter goes further:

Data, algorithms, and models that purport to predict health status based on a single or limited number of unconventional criteria also raise significant concerns about the validity of such models.

Some of those unconventional criteria include "smoker propensity models," whether your uncle has a gun, driving records, domestic violence, grocery store purchases, and so on. Keep in mind, the NYS DFS also supervises most types of insurance and reinsurance operations, including brokers and agents, as well as personal lending, such as mortgages.

With the historical tendencies of statistical bias established, we can now turn our attention to the connection between AI Ethics and the right to privacy. That will be the focus of part two.

This article was previously published on Diginomica.com.


Neil Raden

Consultant, Mathematician, Author; focused on Analytics, Applied AI Ethics, and Data Architecture