
The Real AI Threat Isn’t Coming – It’s Already Here: 3 Cases of Algorithms Destroying Lives

Forget killer robots and superintelligent machines plotting humanity’s downfall. While the web buzzes with anxiety about AI taking our jobs or achieving consciousness, real artificial intelligence systems have already been making life-altering decisions about healthcare, housing, and basic human rights – often with devastating consequences.

In this article, I’m going to examine three real-life cases where AI systems destroyed thousands of lives. The worst part is that, based on available evidence, the harm was (allegedly) done with the knowledge and intent of the humans deploying these systems. In two of the three cases, that evidence is currently being presented in multiple lawsuits, while the third example forced the resignation of an entire government.

So if you’ve been having any kind of anxiety about a hypothetical future where AI runs amok, you can calm down now: that future has already arrived. Let’s talk about it.

Case #1: UnitedHealth, Humana, and the nH Predict system

If you watched the news or logged onto the internet at all in December, you almost certainly heard about the assassination of UnitedHealthcare’s CEO, Brian Thompson. In the days following his murder, scrutiny of UnitedHealth’s shady business practices intensified on a very public level.

One of the topics of interest that was repeatedly mentioned was the company’s health insurance claim denial rate. According to a study by ValuePenguin, it clocks in at an industry-leading 32%, which is twice the industry average of 16%. 1

This statistic – along with the thousands of people posting personal anecdotes of UnitedHealth horror stories online – at least partially explains why there was overwhelming public support for the suspected assassin and little sympathy for the deceased CEO. 2

[Image: UnitedHealth Facebook post about its CEO, flooded with thousands of laughing emojis.]

While there are a lot of overlapping stories here, the one we’re going to focus on is the algorithm behind that gold-medal denial rate. That algorithm is called nH Predict.

Inside the algorithmic assembly line 📊

In mid-November 2023, STAT published an explosive investigation that exposed nH Predict as an “algorithmic assembly line” for processing elderly patients. 3 The system, designed by NaviHealth (a company acquired by UnitedHealth for $2.5 billion in 2020), was literally inspired by Toyota’s car manufacturing principles – because apparently, what works for mass-producing vehicles works great for managing patient care. Or so the logic goes.

The algorithm worked by analyzing each patient’s case against various data points, including:

  • Diagnosis and age.
  • Physical function scores.
  • Living situation details.
  • A database of six million previous patients.

Managers set strict targets that required patient stays to be within 1% of the algorithm’s predictions. Any medical professionals who questioned these decisions faced serious consequences.
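
To make the assembly-line mechanics more concrete, here’s a minimal Python sketch of how a length-of-stay predictor combined with a 1% compliance check might fit together. To be clear, this is my own illustrative assumption from top to bottom – STAT’s reporting describes the inputs and the 1% target, not NaviHealth’s actual code.

```python
# Illustrative sketch only -- not NaviHealth's actual implementation.
from dataclasses import dataclass

@dataclass
class Patient:
    diagnosis: str
    age: int
    physical_function_score: float  # e.g., a 0-100 mobility/self-care score
    lives_alone: bool

def predict_rehab_days(patient: Patient, history: list[tuple[Patient, int]]) -> float:
    """Estimate covered rehab days from 'similar' past patients.

    A real system would train a model over millions of records and weigh
    every field; this toy version just averages the stays of past cases
    with the same diagnosis and a similar age.
    """
    similar = [
        days
        for past, days in history
        if past.diagnosis == patient.diagnosis and abs(past.age - patient.age) <= 5
    ]
    return sum(similar) / len(similar) if similar else 0.0

def within_target(actual_days: int, predicted_days: float, tolerance: float = 0.01) -> bool:
    """Models the reported compliance rule: actual stays had to land
    within 1% of the algorithm's prediction."""
    return abs(actual_days - predicted_days) <= tolerance * predicted_days
```

Pair a predictor like this with hard compliance targets and job consequences for dissenters, and the institutional pressure only points in one direction.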

The result?

According to a recent 54-page U.S. Senate report, between 2019 and 2022, UnitedHealth’s post-acute services denial rate increased from 8.7% to 22.7%. At the same time, UnitedHealth’s skilled nursing home denial rate increased ninefold. 4

This profits-over-people business model destroyed countless lives.

Real-world impact ☄️

The stories uncovered by STAT paint a haunting picture of algorithmic cruelty. Consider an elderly woman found by her grandson in the laundry room after a stroke, her right side paralyzed. The algorithm allotted her just 20 days of rehab – less than half the average time for severely impaired stroke patients. Or the 78-year-old legally blind man who, despite failing heart and kidneys, plus a fall in the nursing home, was granted only 16 days of care.

Perhaps most absurdly, one elderly patient nearing discharge after knee surgery was expected to learn how to “butt bump” up and down stairs because the algorithm said his time was up. When case managers tried advocating for these patients, they risked their jobs.

“I felt terrible,” recalled Amber Lynch, an occupational therapist and former NaviHealth case manager. “The nursing home director looked at me like I had two heads when I suggested teaching a 78-year-old to butt bump stairs. She said, ‘He’s not safe to climb stairs yet. We’re not going to do that.'” Lynch was later fired for failing to meet the algorithm’s targets.

In addition to more formal reports about the consequences of UnitedHealth’s algorithm, you can open up any YouTube video about the Brian Thompson assassination and get an endless stream of firsthand commentary:

[Image: YouTube comment describing a harrowing situation in which the commenter’s mother had cancer in her mouth but was denied insurance coverage.]

The same holds true for any news site that covered the story. It doesn’t matter where you look – the comment threads all read the same. I don’t think I’ve ever seen U.S. citizens this unified about anything – ever.

The lawsuits unfold ⚖️

In November 2023, a class action lawsuit was filed in Minnesota against UnitedHealth Group, UnitedHealthcare, and NaviHealth. The case’s lead plaintiffs were the estates of Gene B. Lokken and Dale Henry Tetzloff – two elderly patients who suffered directly from these algorithmic denials.

Humana, another major insurer, also adopted nH Predict, implementing the same rigid system of automated denials and employee penalties. Their decisions led to a separate class action lawsuit, filed against them in December 2023.

The allegations in both cases are stark and laid out in the opening paragraphs of each lawsuit:

These insurance giants allegedly deployed an AI system they knew had a 90% error rate, calculating that vulnerable patients wouldn’t have the resources to fight back, since only about 0.2% of patients appeal these denials. Most either deplete their savings or go without prescribed care.
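
To see why those two percentages make such a profitable combination, here’s some back-of-the-envelope arithmetic. The volumes below are assumed round numbers, not actual case counts; the complaints derive the 90% figure from the share of appealed denials that get reversed.

```python
# Assumed round numbers for illustration -- not actual case counts.
denials = 100_000                    # hypothetical denied claims
appealed = denials * 0.002           # ~0.2% of patients appeal  -> 200 appeals
overturned = appealed * 0.90         # ~90% of appeals reversed  -> 180 reversals
unchallenged = denials - appealed    # 99,800 denials that simply stand
print(f"{appealed:.0f} appealed, {overturned:.0f} overturned, {unchallenged:.0f} unchallenged")
```

In other words, even if nearly every contested denial turns out to be wrong, the overwhelming majority of denials are never contested at all – and every one of those stands.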

As of January 2025, with scrutiny intensifying after Thompson’s death and both lawsuits ongoing, neither company has clearly stated whether they’ve stopped using nH Predict. 5

The last time Humana made a public statement regarding their use of AI tools was around the time the lawsuit was filed. They acknowledged using them, but made sure to point out that there’s always a human in the loop for decision-making. 6

Of course, what they neglected to mention is how much (or rather, how little) power those humans have to override what the algorithm tells them to do.

Case #2: RealPage YieldStar software

[Image: RealPage’s AI Revenue Management page.]

When I was growing up in the ’90s, and even as a young adult in the early 2000s, I remember landlords competing with each other to attract tenants. It seems like those days are long gone. Well, at least in the U.S. they are.

And it’s all thanks to the increasing use of AI algorithms to set apartment rent prices.

One algorithm in particular has been dominating the U.S. real estate market. It’s called YieldStar, and it works by (allegedly) enabling property managers to coordinate rental rates through shared data. RealPage denies this, but critics – including the outgoing Biden administration – argue that the software has provided a mechanism for algorithmic price coordination that has significantly impacted housing affordability.

How the YieldStar algorithm works 📊

RealPage’s YieldStar software collects detailed, nonpublic data from its network of property managers about their actual rental transactions, including effective rents (after discounts), occupancy rates, and lease terms.

In 2023, the system aggregated data from over 13 million units nationwide – roughly one-third of all U.S. apartment units. 7 That number might even be higher at this point.

Each night, the algorithm analyzes the pooled competitor data to generate new price recommendations for every available apartment.

The software determines these recommendations through several key mechanisms (sketched in code after the list):

  • It calculates price elasticity – how changes in rent affect tenant demand – using the aggregated competitor data.
  • It establishes “minimum” and “maximum” rent boundaries based on what nearby competitors are charging.
  • It can suggest keeping units vacant to maintain higher prices, even if that means lower occupancy.
  • It discourages individual price negotiations and rental concessions.
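
To make those mechanisms concrete, here’s a minimal, purely illustrative Python sketch. Every function name, field, and multiplier below is my own assumption – the lawsuits and ProPublica’s reporting describe the data flows, not RealPage’s actual code.

```python
# Illustrative sketch only -- not RealPage's actual implementation.
from statistics import median

def price_elasticity(pct_rent_changes: list[float], pct_demand_changes: list[float]) -> float:
    """Toy elasticity estimate: average % change in demand per % change in rent."""
    ratios = [d / r for r, d in zip(pct_rent_changes, pct_demand_changes) if r != 0]
    return sum(ratios) / len(ratios) if ratios else 0.0

def recommend_rent(
    current_rent: float,
    competitor_effective_rents: list[float],  # nonpublic lease data pooled across landlords
    occupancy_rate: float,                    # this landlord's occupancy, 0.0-1.0
    target_occupancy: float = 0.95,           # mirrors the 94-96% band from the reporting
) -> float:
    """Nightly price recommendation anchored to shared competitor data."""
    market_median = median(competitor_effective_rents)

    # "Minimum" and "maximum" boundaries based on nearby competitors.
    floor, ceiling = 0.9 * market_median, 1.2 * market_median

    # At or above target occupancy, push the price up rather than filling
    # the remaining units -- vacancy is tolerated to protect rates.
    if occupancy_rate >= target_occupancy:
        proposal = max(current_rent * 1.05, market_median)
    else:
        proposal = market_median

    return min(max(proposal, floor), ceiling)
```

The key design point: because every landlord’s floor, ceiling, and proposal are derived from the same pooled data, nominally independent competitors end up moving their prices together without ever talking to each other.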

RealPage argues this process simply helps landlords react more quickly to market conditions. However, the system’s design enables property managers to coordinate pricing indirectly through the shared data – and push prices well beyond what they would charge without the algorithm’s recommendations.

A RealPage executive even admitted this in 2021:

“As a property manager, very few of us would be willing to actually raise rents double digits within a single month by doing it manually.” 8

Real-world impact ☄️

A ProPublica investigation published in 2022 found that in Seattle’s Belltown neighborhood, where 70% of apartments were controlled by RealPage clients, buildings using the software showed significantly higher rent increases than non-algorithm buildings.

At one RealPage-managed property, rent for a one-bedroom apartment jumped 33% in a single year. Meanwhile, a nearby building not using the software raised rent only 3.9% during the same period. 9

On a national scale, RealPage claims its software helps landlords “outperform the market by 3 to 7 percent.” 10 The company has been able to achieve these numbers partly by shifting industry practices away from maintaining high occupancy through competitive pricing. One large property company learned it could make more profit by operating at 94-96 percent occupancy with higher rents, rather than keeping buildings nearly full at lower rates. 11

This systematic push for higher rents has helped drive homelessness in the U.S. to record-breaking levels: the number of people forced out onto the streets rose by more than 18% in 2024. 12

The situation is so dire that before leaving the White House, the Biden administration even published a post about the RealPage issue and its role in the ongoing rental crisis in the U.S.

Biden and Co. were a bit late to the party though, as the U.S. Department of Justice had already been on the case for at least two years…

The lawsuits unfold ⚖️

Concerns about RealPage’s practices started attracting regulatory scrutiny from the U.S. Department of Justice (DOJ) in late 2022. 13 More than 30 lawsuits alleging antitrust violations were also filed against RealPage around that time (and into early 2023). 14

While RealPage announced in December 2024 that the DOJ had closed its criminal investigation, 15 the DOJ has not publicly confirmed this statement. Regardless, the company still faces significant civil litigation, including:

  • A federal class-action suit in Tennessee.
  • Multiple state-level lawsuits, including actions by the Attorneys General of Arizona and the District of Columbia.
  • The DOJ’s civil antitrust lawsuit, which alleges RealPage commands at least an 80% share of the revenue management software market. 16

In response to the ongoing lawsuits, RealPage has mounted an aggressive damage control campaign.

This includes launching a dedicated website to present their side of the story. On the site, the company argues that their software actually benefits renters by providing more options and flexibility in lease terms. They also maintain that acceptance rates of their pricing recommendations are much lower than alleged – less than 50% for new leases, according to their data.

The DOJ isn’t buying their public relations blitz though.

Biden’s Deputy Attorney General Lisa Monaco dismissed it with the following statement:

“By feeding sensitive data into a sophisticated algorithm powered by artificial intelligence, RealPage has found a modern way to violate a century-old law through systematic coordination of rental housing prices – undermining competition and fairness for consumers in the process. Training a machine to break the law is still breaking the law.” 17

Of course, with the tech-oligarch-friendly Trump administration now coming into power, RealPage might actually catch a legal break, which will no doubt only worsen the situation for millions of renters across the U.S.

Case #3: The Dutch tax authority algorithm

In the final case, I want to hop over to the Netherlands for a trip down memory lane. While this particular example is older than the other two, it goes to show that algorithms have been wreaking havoc on human lives for quite some time now – well before the modern AI craze spearheaded by the launch of ChatGPT.

The algorithm backstory 📊

In 2013, the Dutch tax authority implemented a self-learning algorithm to create risk profiles for detecting fraud in childcare benefit applications. The system was developed in response to a €4 million benefits scam that put Dutch officials under intense pressure to prevent future fraud. 18

The algorithm operated through several key mechanisms:

  • It automatically assigned risk scores based on predefined criteria.
  • It incorporated a discriminatory “nationality flag” that targeted people with foreign citizenship.
  • It applied the Pareto principle, arbitrarily assuming 80% of investigated cases were fraudulent.
  • It integrated with a secret blacklist system that tracked both proven and unsubstantiated fraud signals.

The system also created a discriminatory feedback loop: because minorities (who often held foreign citizenship) were flagged as suspicious more often, the self-learning algorithm increasingly and unjustifiably associated minority status with fraud risk.
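
To see why a loop like this amplifies bias instead of correcting it, here’s a toy simulation. None of this is the tax authority’s actual code – the group names, weights, and probabilities are all illustrative assumptions built from the mechanisms described above.

```python
# Toy simulation of a self-reinforcing risk-scoring loop -- illustrative only.
import random

random.seed(42)

# Initial risk weights; the "nationality flag" starts the flagged group higher.
risk_weight = {"native": 1.0, "flagged_foreign": 1.2}

for year in range(5):
    investigations = {group: 0 for group in risk_weight}
    fraud_labels = {group: 0 for group in risk_weight}

    for _ in range(1000):  # 1,000 benefit applications per simulated year
        group = random.choice(list(risk_weight))
        # A higher risk weight means a higher chance of being investigated.
        if random.random() < 0.1 * risk_weight[group]:
            investigations[group] += 1
            # Pareto assumption: ~80% of investigated cases presumed fraudulent,
            # regardless of actual behavior.
            if random.random() < 0.8:
                fraud_labels[group] += 1

    # "Self-learning" step: groups that produced more fraud labels get their
    # risk weight bumped, which biases next year's selection even further.
    for group in risk_weight:
        risk_weight[group] *= 1 + fraud_labels[group] / 500

    print(f"year {year}:", {g: round(w, 3) for g, w in risk_weight.items()})
```

Run it and the gap between the two groups widens every year, even though both groups behave identically – the only difference is the initial flag.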

Real-world impact ☄️

The algorithm’s decision-making ended up devastating thousands of lives across the Netherlands. The case of Chermaine Leysner exemplifies the system’s brutality.

A student and mother of three young children, she was falsely accused of fraud and received a sudden demand to repay over €100,000 in benefits. 19 The ordeal lasted nine years and led to depression, separation from her children’s father, and severe financial hardship.

Overall, more than 26,000 families were wrongly accused of fraud, 20 with an overrepresentation of errors among minority communities. A report by the Netherlands’ statistics office found that 71% of the parents accused of fraud were first- or second-generation immigrants, 44% of whom came from the country’s lowest-income households. 21

When all was said and done, thousands of children were placed in foster care and multiple victims committed suicide. 22

It took about seven years from the algorithm’s inception for it to become a full-blown public scandal in 2020. That year, investigations revealed the shocking scale of bias inherent in the flawed system. The Dutch Data Protection Authority ended up fining the tax administration €2.75 million for unlawful discrimination. 23 Several months later, in January 2021, the entire Dutch government resigned in acknowledgment of this dark page in their history. 24

The government has since:

  • Promised €30,000 compensation per affected family.
  • Created a new algorithm oversight authority.
  • Admitted to institutional racism.
  • Implemented new safeguards for AI systems.

However, a July 2024 report from the Dutch privacy watchdog AP (Autoriteit Persoonsgegevens) found that despite these measures, public authorities continued to employ discriminatory algorithms throughout 2023. 25 This leaves us wondering how much of the Dutch government’s response was performative and how effective these supposed safeguards really are.

What are we to make of all this? 🤔

As someone who writes and reports on AI topics frequently, I often come across blog posts, X threads, and social media posts that talk about the future threat of AI.

[Image: Google search results about AI taking over jobs.]

These range from the ever-popular “AI is going to take our jobs” to more sci-fi takes about AI reaching a level of intelligence where it will destroy all of humanity.

While these concerns are valid, I don’t think enough people are paying attention to the present and the fact that AI systems are already wreaking havoc at scale. I only covered three here, but there are lots more. Last December’s assassination of the UnitedHealthcare CEO has somewhat shifted the spotlight, but I fear it’s only for a fleeting moment.

It shouldn’t be.

This might come across as a bit idealistic, but I believe we need to approach this collectively by making better decisions within whatever organizations we are part of. We need to re-think our relationship to technology and how we use it in our work.

If you are a decision maker in your company and you are considering adopting a new AI tool because it will increase profits, you should also ask yourself questions like:

  • How will the use of this tool impact other humans?
  • Would I want to be on the other side of whatever this new tool is going to do?

Everyone is going to answer those questions differently. Some might not care to ask. But I hope that, after reading this, at least some of you will think about it.

What do you think? Are we too focused on the “future threat” of AI while ignoring or minimizing the severity of the damage these systems are causing in the present moment? Let me know in the comments.


References
  1. https://www.valuepenguin.com/health-insurance-claim-denials-and-appeals#denial-rates
  2. https://gizmodo.com/bitter-americans-react-to-unitedhealthcare-ceos-murder-my-empathy-is-out-of-network-2000534520
  3. https://www.statnews.com/2023/11/14/unitedhealth-algorithm-medicare-advantage-investigation/
  4. https://www.healthcaredive.com/news/medicare-advantage-AI-denials-cvs-humana-unitedhealthcare-senate-report/730383/
  5. https://www.computerworld.com/article/3619010/after-shooting-unitedhealthcare-comes-under-scrutiny-for-ai-use-in-treatment-approval.html
  6. https://www.lpm.org/news/2023-12-14/lawsuit-claims-humana-uses-ai-to-deny-necessary-health-care-services-to-medicare-advantage-patients
  7. https://www.propublica.org/article/yieldstar-rent-increase-realpage-warren-sanders
  8. https://www.propublica.org/article/yieldstar-rent-increase-realpage-rent
  9. https://www.propublica.org/article/yieldstar-rent-increase-realpage-rent
  10. https://www.propublica.org/article/yieldstar-rent-increase-realpage-rent
  11. https://www.propublica.org/article/yieldstar-rent-increase-realpage-rent
  12. https://www.bbc.com/news/articles/cx2vwdw7zn2o
  13. https://www.propublica.org/article/yieldstar-realpage-rent-doj-investigation-antitrust
  14. https://www.multifamilydive.com/news/algorithmic-software-antitrust-price-fixing-rents/707024/
  15. https://www.bisnow.com/national/news/multifamily/realpage-says-doj-ends-criminal-investigation-into-rental-housing-pricing-127115
  16. https://www.justice.gov/opa/pr/justice-department-sues-realpage-algorithmic-pricing-scheme-harms-millions-american-renters
  17. https://www.justice.gov/opa/pr/justice-department-sues-realpage-algorithmic-pricing-scheme-harms-millions-american-renters
  18. https://www.theparliamentmagazine.eu/news/article/ai-bias-will-europes-race-for-efficiency-reinforce-discrimination
  19. https://www.politico.eu/newsletter/ai-decoded/a-dutch-algorithm-scandal-serves-a-warning-to-europe-the-ai-act-wont-save-us-2/
  20. https://www.vice.com/en/article/how-a-discriminatory-algorithm-wrongly-accused-thousands-of-families-of-fraud/
  21. https://www.cbs.nl/nl-nl/longread/aanvullende-statistische-diensten/2022/jeugdbescherming-en-de-toeslagenaffaire/7-conclusies
  22. https://www.politico.eu/newsletter/ai-decoded/a-dutch-algorithm-scandal-serves-a-warning-to-europe-the-ai-act-wont-save-us-2/
  23. https://iapp.org/news/b/dutch-dpa-fines-tax-authority-2-75m-euros
  24. https://www.lighthousereports.com/investigation/the-algorithm-addiction/
  25. https://www.theparliamentmagazine.eu/news/article/ai-bias-will-europes-race-for-efficiency-reinforce-discrimination
Martin Dubovic