The Age of the Algorithm

Roman Mars:
This is 99% Invisible. I’m Roman Mars.

Roman Mars:
On April 9th, 2017, United Airlines Flight 3411 was about to fly from Chicago to Louisville when flight attendants discovered the plane was overbooked. They tried to get volunteers to give up their seats with promises of travel vouchers and hotel accommodations, but not enough people were willing to get off.

Audio Clip:
(passengers speaking)

Roman Mars:
United ended up calling some airport security officers. They boarded the plane and forcibly removed a passenger named Dr. David Dao. The officers ripped Dao out of his seat and carried him down the aisle of the airplane, his nose bleeding, while horrified onlookers shot video with their phones.

Audio Clip:
(passengers shouting)

Roman Mars:
You probably remember this incident and the outrage it generated.

News Clip:
“The international uproar continued over the forced removal of a passenger from a United Airlines flight. Today, the airline’s CEO, Oscar Munoz, issued an apology saying, ‘No one should ever be mistreated this way. I want you to know that we take full responsibility and we will…'”

Roman Mars:
But why Doctor Dao? How did he end up being the unlucky passenger that United decided to remove? Immediately following the incident, some people thought racial discrimination may have played a part. It’s possible that this played a role in how he was treated, but the answer to how he was chosen was actually an algorithm, a computer program. It crunched through a bunch of data, looking at stuff like how much each passenger had paid for their ticket, what time they checked in, how often they flew on United, and whether they were part of a rewards program. The algorithm likely determined that Dr. Dao was one of the least valuable customers on the flight at the time.
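
To make that concrete, here is a minimal sketch, in Python, of how a customer-value ranking like the one described might work. The field names, weights, and numbers are illustrative assumptions, not United’s actual system.

```python
# Hypothetical sketch of a "least valuable customer" ranking of the kind
# described above. The fields, weights, and numbers are illustrative
# assumptions only; they are not United's actual criteria or code.

def customer_value_score(passenger):
    """Higher score means a more valuable customer (assumed weighting)."""
    score = 0.0
    score += passenger["fare_paid"]                      # how much they paid for the ticket
    score += passenger["flights_this_year"] * 25         # how often they fly this airline
    score += 500 if passenger["rewards_member"] else 0   # loyalty program membership
    score -= passenger["minutes_late_checking_in"] * 0.5 # later check-in lowers the score
    return score

passengers = [
    {"name": "A", "fare_paid": 89,  "flights_this_year": 0,  "rewards_member": False, "minutes_late_checking_in": 300},
    {"name": "B", "fare_paid": 450, "flights_this_year": 22, "rewards_member": True,  "minutes_late_checking_in": 10},
]

# Passengers selected for involuntary removal would come from the bottom of the ranking.
ranked = sorted(passengers, key=customer_value_score)
print([p["name"] for p in ranked])  # least valuable customer first
```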

Roman Mars:
Algorithms shape our world in profound and mostly invisible ways. They predict if we’ll be valuable customers or whether we’re likely to repay a loan. They filter what we see on social media, sort through resumes, and evaluate job performance. They inform prison sentences and monitor our health. Most of these algorithms have been created with good intentions. The goal is to replace subjective judgments with objective measurements, but it doesn’t always work out like that. This subject is huge. I think algorithm design may be the big design problem of the 21st century and that’s why I wanted to interview Cathy O’Neil.

Roman Mars:
Okay. Well, thank you so much. Can you give me one of them sort of NPR-style introductions and just say your name and what you do?

Cathy O’Neil:
Sure. I’m Cathy O’Neil. I’m a mathematician, data scientist, activist, and author. I wrote the book “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy”.

Roman Mars:
O’Neil studied number theory and then left academia to build predictive algorithms for a hedge fund, but she got really disillusioned by the use of mathematical models in the financial industry.

Cathy O’Neil:
I wanted to have more impact in the world, but I didn’t really know that that impact could be really terrible. I was very naive.

Roman Mars:
After that, O’Neil worked as a data scientist at a couple of startups. Through these experiences, she started to get worried about the influence of poorly designed algorithms. We’ll start with the most obvious question: what is an algorithm? At its most basic, an algorithm is a step-by-step guide to solving a problem. It’s a set of instructions, like a recipe.

Cathy O’Neil:
The example I like to give is like cooking dinner for my family.

Roman Mars:
In this case, the problem is how to make a successful dinner. O’Neil starts with a set of ingredients. As she’s creating the meal, she’s constantly making choices about what ingredients are healthy enough to include in her dinner algorithm.

Cathy O’Neil:
I curate that data because those ramen noodles packages that my kids like so much, I don’t think of those as ingredients, right? I exclude them. I’m curating, and I’m therefore imposing my agenda on this algorithm.

Roman Mars:
In addition to curating the ingredients, O’Neil – as the cook – also defines what a successful outcome looks like.

Cathy O’Neil:
I’m also defining success, right? I’m in charge of success. I define success to be if my kids eat vegetables at that meal.

Roman Mars:
You know, a different cook might define success differently.

Cathy O’Neil:
My eight-year-old would define success to be like whether he got to eat Nutella. That’s another way where we, the builders, impose our agenda on the algorithm.
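
As a toy illustration of this point, the sketch below encodes the dinner example: the builder both curates the inputs and defines success, and a different definition of success flips the verdict on the same meal. The pantry items and rules are invented for the example.

```python
# A toy version of the dinner "algorithm" O'Neil describes. The pantry items
# and rules are invented; the point is that the builder both curates the
# inputs and defines what counts as success.

pantry = ["spinach", "chicken", "ramen_packets", "nutella", "carrots"]

# Curation step: the builder decides what counts as an ingredient at all.
ingredients = [item for item in pantry if item not in ("ramen_packets", "nutella")]

def parent_success(meal):
    """O'Neil's definition: the kids ate vegetables."""
    return any(item in ("spinach", "carrots") for item in meal)

def kid_success(meal):
    """The eight-year-old's definition: Nutella was on the menu."""
    return "nutella" in meal

meal = ingredients[:3]
print(parent_success(meal), kid_success(meal))  # same meal, opposite verdicts
```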

Roman Mars:
O’Neil’s main point here is that algorithms aren’t really objective, even when they’re carried out by computers. This is relevant because the companies that build them like to market them as objective, claiming they remove human error and fallibility from complex decision making. But every algorithm reflects the priorities and judgments of its human designer. Of course, that doesn’t necessarily make algorithms bad.

Cathy O’Neil:
Right. I mean, it’s very important to me that I don’t get the reputation of hating all algorithms. I actually like algorithms and I think algorithms could really help.

Roman Mars:
But O’Neil does single out a particular kind of algorithm for scrutiny. These are the ones we should worry about.

Cathy O’Neil:
They’re characterized by three properties. Number one, they’re very widespread and important, so they make important decisions about a lot of people. Number two, they’re secret, so people don’t understand how they’re being scored. Number three, they’re destructive. One bad mistake in the design, if you will, of these algorithms will actually not only make it unfair for individuals but sort of categorically unfair for an enormous population as it gets scaled up.

Roman Mars:
O’Neil has a shorthand for these algorithms, the widespread, mysterious, and destructive ones. She calls them “weapons of math destruction”. To show how one of these destructive algorithms works, O’Neil points to the criminal justice system. For hundreds of years, key decisions in the legal process, like the amount of bail, length of sentence, and likelihood of parole, have been in the hands of fallible human beings guided by their instincts and sometimes their personal biases.

Cathy O’Neil:
The judges are sort of famously racist, some of them more than others.

Roman Mars:
That racism can produce very different outcomes for defendants. For example, the ACLU has found that sentences imposed on black men in the federal system are nearly 20% longer than those for white men convicted of similar crimes. Studies have shown that prosecutors are more likely to seek the death penalty for African-Americans than for whites convicted of the same charges. You might think that computerized models fed by data would contribute to more evenhanded treatment. The criminal justice system thinks so too. It has increasingly tried to minimize human bias by turning to risk assessment algorithms.

Cathy O’Neil:
Like crime risk. Like what is the chance of someone coming back to prison after leaving it?

Roman Mars:
Many of these risk algorithms look at a person’s record of arrests and convictions. The problem is that data is already skewed by some social realities. Take for example the fact that white people and black people use marijuana at roughly equal rates, and yet…

Cathy O’Neil:
There’s five times as many blacks getting arrested for smoking pot as whites. Five times as many.

Roman Mars:
This may be because black neighborhoods tend to be more heavily policed than white neighborhoods, which means black people get arrested for certain crimes more often than white people. Risk algorithms detect these patterns and apply them to the future. If the past is shaped in part by racism, the future will be too.

Cathy O’Neil:
The larger point is we have terrible data here, but the statisticians involved, the data scientists are like blindly going forward and pretending that our data is good, and then we’re using it to actually make important decisions.

Roman Mars:
Risk assessment algorithms also look at the defendant’s answers to a questionnaire that’s supposed to tease out certain risk factors.

Cathy O’Neil:
They have questions like, did you grow up in a high crime neighborhood? Are you on welfare? Do you have a mental health problem? Do you have addiction problems? Did your father go to prison? They’re basically proxies for race and class, but it’s embedded in the scoring system and the judge is given the score and it’s called objective.

Roman Mars:
What does a judge take away from it? How is it used?

Cathy O’Neil:
If you have a high risk score, it’s used to send you to prison for longer at sentencing. It’s also used in bail hearings and parole hearings. If you have a high recidivism risk score, you don’t get parole.

Roman Mars:
Presumably, you could take all that biased input data and say this high chance for recidivism means that we should rehabilitate more. I mean, you could take all that same stuff and choose to do a completely different thing with the results of the algorithm.

Cathy O’Neil:
That’s exactly my point. Exactly my point. You know, we could say, “Oh, I wonder why people who have this characteristic have so much worse recidivism?” Well, let’s try to help them find a job. Maybe that’ll help. We could use those algorithms, those risk scores, to try to account for our society.

Roman Mars:
Instead, O’Neil says, in many cases, we’re effectively penalizing people for societal and structural issues that they have little control over. We’re doing it at a massive scale using these new technological tools.

Cathy O’Neil:
We’re shifting the blame, if you will, from the society, which is the one that should own these problems, to the individual and punishing them for it.
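
To make the scoring step concrete, the sketch below shows how yes-or-no questionnaire answers like the ones O’Neil lists can be folded into a single “objective” risk number. The questions, weights, and threshold are invented assumptions; this is not a reconstruction of any real assessment tool.

```python
# Purely illustrative sketch of a questionnaire-based risk score. The
# questions, weights, and threshold are invented for this example; they do
# not reproduce any real assessment instrument.

QUESTION_WEIGHTS = {
    "grew_up_in_high_crime_neighborhood": 2.0,
    "receives_welfare": 1.5,
    "mental_health_problem": 1.0,
    "addiction_problem": 1.5,
    "parent_was_incarcerated": 2.0,
}

def recidivism_risk_score(answers):
    """Sum the weights of every 'yes' answer. Note that each input is a proxy
    for circumstance rather than conduct, which is exactly O'Neil's objection."""
    return sum(weight for question, weight in QUESTION_WEIGHTS.items() if answers.get(question))

defendant = {"grew_up_in_high_crime_neighborhood": True, "receives_welfare": True}
score = recidivism_risk_score(defendant)
label = "HIGH RISK" if score >= 3.0 else "LOW RISK"
print(score, label)  # the court sees a single "objective" number and label
```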

Roman Mars:
It should be said that, in some cases, algorithms are helping to change elements of the criminal justice system for the better. For example, New Jersey recently did away with their cash bail system, which disadvantaged low-income defendants. They now rely on predictive algorithms instead. Data shows that the state’s pretrial county jail populations are down by about 20%. But still, algorithms like that one remain unaudited and unregulated. It’s a problem when algorithms are basically black boxes. In many cases, they’re designed by private companies who sell them to other companies, and the exact details of how they work are kept secret.

Roman Mars:
Not only is the public in the dark, even the companies using these things might not understand exactly how the data is being processed. This is true of many of the problematic algorithms that O’Neil has looked at, whether they’re used for sorting loan applications or assessing teacher performance.

Cathy O’Neil:
There’s some kind of weird thing that happens to people when mathematical scores are trotted out. They just start closing their eyes and believing it because it’s math. They feel like, “Oh, I’m not an expert of math, so I can’t push back.” That’s something you just see time and time again. You’re like, “Why didn’t you question this? This doesn’t make sense. Oh, well, it’s math and I don’t understand it.”

Roman Mars:
Right now it seems like, because of algorithms and math, there’s just a new place to put blame so that you do not have to think about your decisions as an actual company, because these things are just so powerful and so mesmerizing to us. Especially right now, they can be used in all kinds of nefarious ways.

Cathy O’Neil:
They’re almost magical.

Roman Mars:
Yeah, that’s scary.

Cathy O’Neil:
It’s scary. I would go one step further than that. I feel like just by observation that these algorithms, they don’t show up randomly. They show up when there’s a really difficult conversation that people want to avoid. They’re like, “Oh, we don’t know what makes a good teacher and different people have different opinions about that. Let’s just bypass this conversation by having an algorithm score teachers, or we don’t know what prison is really for. Let’s have a way of deciding how long to sentence somebody.” We introduced these silver bullet mathematical algorithms because we don’t want to have a conversation.

Roman Mars:
In O’Neil’s book, she writes about this young guy named Kyle Behm who takes some time off college to get treated for bipolar disorder. Once he’s better and ready to go back to school, he applies for a part-time job at Kroger, which is a big grocery store chain. He has a friend who works there who offers to vouch for him. Kyle was such a good student that he figured the application would be just a formality, but he didn’t get called back for an interview. His application was red-lighted by the personality tests he’d taken when he applied for the job. The test was part of an employee selection algorithm developed by a private workforce company called Kronos.

Cathy O’Neil:
70% of job applicants in this country take personality tests before they get an interview. This is a very common practice. Kyle had that screening and he found out because his friend worked at Kroger’s that he had failed the test. Most people never find that out. They just don’t hear back. The other thing that was unusual about Kyle is that his dad is a lawyer. His dad was like, “What? What were the questions like on this test?” He said, “Well, some of them were a lot like the questions I got at the hospital, the mental health assessment.”

Roman Mars:
The test Kyle got at the hospital was called the five-factor model test, and it grades people on extroversion, agreeableness, conscientiousness, neuroticism, and openness to new ideas. It’s used in mental health evaluations. The potential employee’s answers to the test are then plugged into an algorithm that decides whether the person should be hired.

Cathy O’Neil:
His father was like, “Whoa, that’s illegal under the Americans with Disabilities Act.” His father and he sort of figured out together that something very fishy had been going on, and his father has actually filed a class-action lawsuit against Kroger’s for that.

Roman Mars:
The suit is still pending, but arguments are likely to focus on whether the personality test can be considered a medical exam. If it is, it’d be illegal under the ADA. O’Neil gets that different jobs require people with different personality types, but she says a hiring algorithm is a blunt and unregulated tool that ends up disqualifying big categories of people, which makes it a classic weapon of math destruction.

Cathy O’Neil:
In certain jobs, you wouldn’t want neurotic people or introverted people. If you’re on a call center where a lot of really irate customers call you up, that might be a problem. In which case, it is actually legal if you get an exception for your company. The problem is that these personality tests are not carefully designed for each business; rather, what happens is that these companies just sell the same personality test to all the businesses that will buy them.
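
Here is a hedged sketch of how answers to a five-factor test might feed the kind of red-light/green-light screen that caught Kyle. The thresholds and pass/fail logic are assumptions made up for illustration; they are not Kronos’s actual model.

```python
# A hedged sketch of a red-light/green-light personality screen like the one
# that caught Kyle. The trait names come from the episode; the thresholds and
# pass/fail logic are invented for illustration, not Kronos's actual model.

def screen_applicant(traits, max_neuroticism=0.6, min_conscientiousness=0.4):
    """Return 'green' (proceed to an interview) or 'red' (screened out)."""
    if traits["neuroticism"] > max_neuroticism:
        return "red"
    if traits["conscientiousness"] < min_conscientiousness:
        return "red"
    return "green"

applicant = {"extroversion": 0.5, "agreeableness": 0.7, "conscientiousness": 0.8,
             "neuroticism": 0.7, "openness": 0.6}
print(screen_applicant(applicant))  # 'red', and the applicant is never told why
```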

Roman Mars:
A lot of the algorithms that O’Neil explores in her book are largely hidden. They don’t get a lot of attention. We as consumers and job applicants and employees may not even be aware that they’re humming along in the background of our lives, sorting us into piles and categories, but there is one kind of algorithm that’s gotten a lot of attention in the news lately.

News Clip:
“Is this a good or bad thing that social media has been able to infiltrate politics?”
“Social media is a technology. As we know, technologies have their good sides and the dark sides and not so good sides. It all depends on the users.”

Roman Mars:
Towards the end of our conversation, O’Neil and I started talking about the recent election and the complex ways that social media algorithms shape the news that we receive. Facebook shows us stories and ads based on what they think we want, and of course, what they think we want is based on algorithms. These algorithms look at what we clicked on before and then feed us more content we like. The result is that we’ve ended up in these information silos, increasingly polarized and oblivious to what people of different political persuasions might be seeing.
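
A toy version of that loop might look like the sketch below, which ranks stories by how often the user has clicked on each topic before. It is an assumption-laden illustration, not Facebook’s actual feed code.

```python
# A toy sketch of engagement-driven ranking and the feedback loop it creates.
# This is an assumption-laden illustration, not Facebook's actual feed code.

from collections import Counter

def rank_stories(stories, click_history):
    """Score each story by how often the user has clicked that topic before,
    then surface the highest-scoring stories first."""
    topic_affinity = Counter(click_history)  # topic -> number of past clicks
    return sorted(stories, key=lambda s: topic_affinity[s["topic"]], reverse=True)

click_history = ["politics_left", "politics_left", "sports"]
stories = [
    {"headline": "Story A", "topic": "politics_left"},
    {"headline": "Story B", "topic": "politics_right"},
    {"headline": "Story C", "topic": "sports"},
]

feed = rank_stories(stories, click_history)
# Whatever the user clicks next is appended to click_history, so the topics
# they already favor keep rising: the information-silo effect.
print([s["headline"] for s in feed])
```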

Cathy O’Neil:
I do think this is a major problem. It’s sort of the sky’s the limit. We have built the internet and the internet is a propaganda machine. It’s a propaganda delivery device, if you will, and that’s not… I don’t see how that’s going to stop.

Roman Mars:
Yeah, especially if every moment is being optimized by an algorithm that’s meant to manipulate your emotions.

Cathy O’Neil:
Right. That’s exactly going back to Facebook’s optimizer algorithm. That’s not optimizing for truth, right? It’s optimizing for profit. They claim to be neutral, but of course, nothing’s neutral.

Roman Mars:
Right.

Cathy O’Neil:
We have seen the results. We’ve seen what it’s actually optimized for, and it’s not pretty.

Roman Mars:
This kind of data-driven political micro-targeting means conspiracies and misinformation can gain surprising traction online. Stories claiming that Pope Francis endorsed Donald Trump and that Hillary Clinton sold weapons to ISIS gained millions of views on Facebook. Neither of those stories was true. Fixing the problem of these destructive algorithms is not going to be easy, especially when they’re insinuating themselves into more and more parts of our lives, but O’Neil thinks that measurement and transparency are one place to start, like with that Facebook algorithm and the political ads that it serves to its users.

Roman Mars:
If you were to talk to Facebook about how to inject some ethics into their optimization, what would you do? Would you sort of make a case for the bottom line of truth being like a longer tail way to make more money, or would you just say this is about ethics and you should be thinking about ethics?

Cathy O’Neil:
To be honest, if I really had their attention, I would ask them to voluntarily find a space on the web to just put every political ad, and actually every ad, just have a way for journalists and people interested in the concept of the informed citizenry to go through all the ads that they have on Facebook at a given time.

Roman Mars:
Because even if that article about Hillary Clinton and ISIS was shared thousands of times, lots of people never saw it at all.

Cathy O’Neil:
Just show us what you’re showing other people because I think one of the most pernicious issues is the fact that we don’t know what other people are seeing. I’m not waiting for Facebook to like actually go against their interests and like change their profit goal, but I do think this kind of transparency can be demanded and given.

Roman Mars:
O’Neil also says it’s important to measure the broad effects of these algorithms and to understand who they most impact.

Cathy O’Neil:
Everyone should start measuring it. What I mean by that is relatively simple. This might not be complete, but it’s a pretty good first step, which is: measure for whom this fails.

Roman Mars:
Meaning which populations are most negatively impacted by the results of these algorithms.

Cathy O’Neil:
What is the harm that befalls those people for whom it fails? How are the failures distributed across the population? If you see a hiring algorithm fail much more often for women than for men, that’s a problem, especially if the failure is they don’t get hired when they should get hired. I really do think that a study, a close examination of the distribution of failures, and the harms of those failures would really really be a good start.
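
That kind of audit can be sketched in a few lines: compute the failure rate separately for each group and compare. The data, field names, and the definition of failure (a qualified candidate who was not hired) are assumptions for illustration.

```python
# A minimal sketch of the audit O'Neil proposes: measure how an algorithm's
# failures are distributed across groups. Here "failure" is assumed to mean a
# qualified candidate who was not hired; the data and fields are invented.

from collections import defaultdict

def failure_rates_by_group(decisions):
    """decisions: list of dicts with 'group', 'qualified', and 'hired' keys."""
    qualified = defaultdict(int)
    failures = defaultdict(int)
    for d in decisions:
        if d["qualified"]:
            qualified[d["group"]] += 1
            if not d["hired"]:
                failures[d["group"]] += 1
    return {group: failures[group] / qualified[group] for group in qualified}

decisions = [
    {"group": "women", "qualified": True, "hired": False},
    {"group": "women", "qualified": True, "hired": True},
    {"group": "men",   "qualified": True, "hired": True},
    {"group": "men",   "qualified": True, "hired": True},
]

print(failure_rates_by_group(decisions))  # e.g. {'women': 0.5, 'men': 0.0}, a red flag
```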

Roman Mars:
If you’re not mad enough about how algorithms influence your life, I’ve got a doozy for you after these messages.

(BREAK)

Automated Recording:
“We are currently experiencing higher call volumes than normal. Please stay on the line and an agent will be with you shortly.”

Cathy O’Neil:
Here’s one that I think is kind of fun because it’s annoying and secret, but you would never know it. So if you call up a customer service line, I’m not saying this will always happen, but it will sometimes happen that your phone number will be used to backtrack who you are. You will be sussed out as a high-value customer or a low-value customer. And if you’re a high-value customer, you’ll talk to a customer service representative much sooner than if you’re a low-value customer; you’ll be put on hold longer. That’s how businesses make decisions nowadays.
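
A minimal sketch of that kind of value-based routing might look like the following, with an invented customer lookup table and scores standing in for whatever a real call center uses.

```python
# Illustrative sketch of value-based call routing as described above: the
# caller's number is looked up, a value score is attached, and higher-value
# callers are answered first. The lookup table and scores are invented.

import heapq
import itertools

CUSTOMER_VALUE = {"+15551230001": 950, "+15551230002": 120}  # hypothetical CRM lookup

_counter = itertools.count()  # tie-breaker so equal scores keep arrival order
call_queue = []               # min-heap; scores are negated to pop high value first

def incoming_call(phone_number):
    value = CUSTOMER_VALUE.get(phone_number, 0)  # unknown callers treated as low value
    heapq.heappush(call_queue, (-value, next(_counter), phone_number))

def next_caller_to_answer():
    return heapq.heappop(call_queue)[2]

incoming_call("+15551230002")   # the low-value caller dials in first...
incoming_call("+15551230001")   # ...but the high-value caller jumps the queue
print(next_caller_to_answer())  # +15551230001
```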

Automated Recording:
“You are caller number ninety-nine. Your call is important to us. Please stay on the line.”

  1. Armin

    Great show as always! I had to chuckle when the ad for ZipRecruiter came up, though: “Their powerful technology efficiently matches the right person to your job better than anyone else”. Maybe you could interview them on how their algorithm works, or even how much is automated and how much is still human interaction.

    1. eminka

      hahah yup! i wanted to make the same comment. extremely ironic.

    2. Heather

      I had the same response!!!!! That’s why I am commenting. Yeah, I wonder what algorithms they use!!! But agreed… really amazing episode.

    3. Marvin

      I came here looking for this comment. I was not disappointed.
      The irony is gold.

  2. Jordan

    So the podcast about the harms of inhuman algorithms was followed by the ad for “ZipRecruiter”, a program/tool to help employers weed out unsuitable job applicants…

    Nice…

    1. Tim

      My thoughts exactly, but I love irony so I’m giving them a pass.

  3. Brian b

    Is it just me or is the audio not playing? Tried on Chrome and on IE and it’s not working…

  4. John

    Ironic that you talk about hiring algorithms being biased then advertise for ZipRecruiter.

  5. Fedinand

    So are we going to ignore the irony (and perhaps hypocrisy) of an episode about algorithms, including a piece about how they unfairly and possibly illegally discriminate against entire groups of people for employment, that is sponsored by Zip Recruiter which undoubtedly uses them to do exactly what is criticized in the podcast?

    1. Marvin

      Would you have preferred that they decide not to air this episode in order to avoid the hypocrisy?
      At least the existence of this episode lets you know that advertisers are not overly-influencing what they put out there as content.

  6. DHW

    What if (say) people who live in a particular neighborhood really are more likely to commit crimes? That is, the algorithm is telling the truth, even if Dr. O’Neil doesn’t like that truth?

    1. ABC

      So if it applies to one neighborhood, then we should apply it universally, right? What if it doesn’t apply to the next neighborhood, or the next one, or the next one?

      Get it? Pharming data like that can end up violating two of her principles — widespread and socially destructive.

  7. Geoffrey

    Wow. Irony abounds. Two things jumped out at me. First, that the two social media fake news stories were pro-Trump and anti-Hillary. It is important to be self-aware enough to notice your own biases. I would love an app that showed me news contrary to what I assume, or from different regional points of view.

    Secondly, one of your own advertisers talked about using an algorithm to provide the best candidates. After what I heard in the story, that made me laugh.

    I love the show. It does open me up to see things I may not otherwise notice.

  8. Tyrone Malik

    Had the exact same reaction as lots in this feed… sponsored by ZipRecruiter, whose advanced technology (algorithm) sorts out the “best” candidates for a job. Imagine not a whole lot of Tyrones and Maliks get hired on ZipRecruiter.

  9. Linka

    I just finished listening to this episode via NPR One and came to this page specifically to comment on the irony of the ZipRecruiter ad in the midst of this episode. Looks like I wasn’t the only one ;-). Love your podcast otherwise but you can do better than this.

  10. Steve

    Loved the podcast. Very eye opening.
    Cathy is amazing. Her speech is mathematical in sharpness of point, efficiency and clarity of words. I need help with that.
    Like the others, the irony of the sponsor made me laugh out loud. It speaks to the pervasiveness of the algorithm.

  11. Julie

    Surely the problem is not the “maths” but the people who design the algorithms in the first place? It’s not like there are a whole bunch of robots making up algorithms as they go. These things are discussed, workshopped, designed, discussed, documented and continuously refined by *people*. Blaming the algorithm is absolving the people behind the algorithm of any responsibility.

  12. Evan bontrager

    While I still enjoy design stories of physical locations or objects, this show was excellent. Very relevant, with tangible real outcomes. As always, the storytelling was captivating. And the tie-in with the United story was an incredible reminder of how we are all targets and users of these tools.

    Another example of how amazing 99PI is.

  13. Jamar Berry

    I’m much more interested in the song that played just before the ZipRecruiter ad than I am in the irony of the ad. The piece was so moving, as many of the musical selections of this podcast are. Just wish I had a way to own it. Hint. Hint.

    Love the show

  14. Joe Smetter

    Clinical psychology PhD student here…I just wanted to comment that I think your assessment of Kyle’s employer’s use of the Five Factor Inventory for hiring purposes wasn’t quite accurate. The Five Factor Inventory is a personality measure, and it is not typically used in mental health evaluations. The Big Five personality factors measured by the inventory are broad dimensions along which all individuals differ in their personalities. While it is correct that one of those factors, neuroticism, is characterized by a tendency to be emotionally reactive and is correlated with certain types of psychopathology (e.g. anxiety disorders), it is not accurate to say that a person has a mental health problem if they are high on neuroticism. Therefore, the fact that Kyle has a mental disorder, or a specific diagnosis of bipolar disorder is not disclosed to his employer by the results of his scores on the personality test. A seasoned clinician would not be able to make that conclusion based on the results, due to the way that “normal” individuals can score on the measure. It is for this reason that I doubt he will win his lawsuit. Personality tests like the Big Five are quite common in personnel selection, which is not my field of study. I don’t disagree with the overall conclusion of your episode, and I especially agree with the notion that analytical tools such as algorithms are only as good as the people who are using them. I just wanted to point out that in Kyle’s case, the target of his frustration is more closely related to personnel selection (industrial/organizational psychology), than to discrimination based on mental health status (clinical psychology).

    1. Kira M.

      I’m a personality psychologist, and I came by to make the same point made by Joe. I don’t use it in applied, work settings, but I do use the Big Five frequently in academic work. The Big Five, or Five-Factor model, is a personality theory that people have 5 broad personality traits. The tests based on the Big Five measure people’s personality on those traits in general. It is a personality test. It is NOT a mental health screening. People with no mental health issues could be high or low on any of the traits. There are some relationships between high neuroticism and mental health issues, but you cannot infer anything about the individual when you are looking at averages.

      Please be careful when you discuss psychological topics. Not everything in psychology is clinical. How these tests are applied is open to interpretation, but please be more thorough in your research on this topic in the future.

    2. Brenton Wiernik

      Workplace psychologist here. The section on Kroger’s was extremely inaccurate. The Kronos personality inventory is explicitly NOT a tool for clinical diagnosis. Using medical screening tools is illegal according to the ADA, but normal-range personality instruments are not medical tools. People vary in their personality traits, and some characteristics tend to make people better employees (e.g., being hardworking, reliable, interpersonally skilled). These types of traits are often what employers try to assess through resume screening, interviews, etc. Standardized personality assessments allow employers to assess these traits in a way that is fairer and less prone to human-introduced biases. The personality instruments that are designed for use in employee hiring are carefully tested to be reliable, valid predictors of performance and to ensure that they don’t disadvantage members of gender, racial, or mental health groups. Far from introducing bias, the past fifty years of research has very strongly shown that using a standardized hiring process reduces discrimination and bias in the process. And to reiterate, normal-range personality assessments are NOT medical diagnostic tools. They may look similar, because mental disorders in many cases represent maladaptive extremes of normal personality traits. But a normal-range personality assessment can’t provide any information about a mental disorder diagnosis. As an analogy, compare a screening tool to diagnose dyslexia with a high school English exam. They both involve reading and may have items that on the surface look similar. But you obviously can’t substitute one for the other.

  15. Michael H Light

    Hey, do you think Zip Recruiter uses an algorithm? I wonder how many people aren’t getting jobs because of their filtering? Just a thought while I was listening to your podcast.

  16. cakeslip

    I noticed the biases the other listeners did as well, but one thing that I found fascinating is the capitalist realism of the presumption that we could never ask Facebook to do something counter to their bottom line to which I say, “Bullshit.” Other industries are regulated for the good of the populace to the detriment of their profit, and Facebook could be too. The widgets it produces are memes, and I don’t mean the shallow new fangled denotation of “meme” as a pithy captioned photo.

    This seems especially astonishing as we witness the death throes of capitalism in the U.S. So many of the problematic algorithms have as their aim the maximization of capital; it seems only natural to question that goal. Surely this is the primary hard conversation society is avoiding by building a wall of math.

    We have a continual conservative outcry that social programs should be run like businesses, and the trappings of its metrics often satisfy such advocates. But the metrics have not come alone. As the guest pointed out, they have come fraught with the same capital maximization values that fathered them in the business sector.

    Look at Trump’s ridiculous one-in-two-out executive order concerning administrative regulations. The repeal of the two must fiscally cancel out the cost of the one. This leads to rank absurdities like placing monetary values on intangibles like the enjoyment we get from the beauty of a stream, or the public sense of security that our food will not poison us. I pity the agency that has to fabricate these algorithms just so we can jam these values into the coffin to rot with capitalism’s stinking corpse.

  17. Vic

    I personally think that companies should be a lot more critical of the technology they purchase and its usages. Lots of devices come with hidden risks and problems – flaws in the intelligence, glitches, secret collections and storage of private data, etc. And I believe that we, collectively as a society, need to better ourselves with the way we use tech AND with what kinds of purchases we make.

  18. br

    irony aside, has anyone applied the test to ziprecruiter’s algorithm? i have posted my resume to ziprecruiter and i am not sure if that was the site that got my resume to the person that placed me at the subsequent job, because i was posting on lots of different job posting sites. and things like this really get my ire up.

  19. Daybreaq

    Here’s another mention of where an algorithm has been problematic:

    “The agency uses a machine called a millimeter wave scanner at nearly every airport in the U.S. The machines, manufactured by L3Harris Technologies, rely on an algorithm to analyze images of a passenger’s body and identify any threats concealed by the person’s clothes.”

    Read more here: https://www.miamiherald.com/news/local/community/gay-south-florida/article234220347.html?#storylink=cpy

    The TSA agents have only two choices to classify a person going through the scanner: male or female. If the person’s body does not conform to the machine’s algorithm of what the chosen classification should look like, an alarm goes off.
