• matter@lemmy.world
      8 months ago

      When buggy software is used by unreasonably powerful entities to practise (and defend) discrimination, that’s dystopian…

      • SCB@lemmy.world
        8 months ago

        Except it wasn’t actually launched, and they didn’t defend its discrimination but rather ended the project.

    • kase@lemmy.world
      8 months ago

      Ah ok. I don’t know much about it, but I’ve heard that AI can sometimes be biased against commonly discriminated-against groups because the data it’s trained on is. (Side note: is that true? Someone pls correct me if it’s not.) I jumped to the conclusion that this was the same thing. My bad

      • adrian783@lemmy.world
        8 months ago

        What it did was expose just how much inherent bias there is in hiring, even from name and gender alone.

      • SCB@lemmy.world
        8 months ago

        That is both true and pivotal to this story.

        It’s a major hurdle in some uses of AI.

      • TAG@lemmy.world
        8 months ago

        An AI is only as good as its training data. If the data is biased, then the AI will have the same bias. The fact that attending a women’s college was treated as a negative (rather than simply marked down as an education of unknown quality) is evidence against the idea, held by many in the STEM field (myself included), that the problem is a lack of qualified female candidates rather than an active bias against them.
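
        The mechanism described above can be sketched in a few lines. This is a hypothetical toy example, not Amazon’s actual system: the résumé features, the data, and the naive hire-rate scoring are all invented for illustration. If past reviewers rejected candidates whose résumés mention a women’s college, a model fit to those decisions reproduces that pattern even when qualifications are equal.

        ```python
        # Toy sketch: a model trained on biased hiring records learns the bias.
        # All data and feature names here are invented for illustration.
        from collections import defaultdict

        # Historical decisions: equally qualified candidates, but past reviewers
        # rejected those whose résumés mention "womens_college".
        history = [
            ({"cs_degree", "internship"}, 1),
            ({"cs_degree", "womens_college"}, 0),
            ({"cs_degree", "internship", "womens_college"}, 0),
            ({"cs_degree"}, 1),
            ({"internship"}, 1),
            ({"cs_degree", "internship"}, 1),
        ]

        def feature_scores(records):
            """Score each feature by the historical hire rate of résumés containing it."""
            hires, totals = defaultdict(int), defaultdict(int)
            for features, hired in records:
                for f in features:
                    totals[f] += 1
                    hires[f] += hired
            return {f: hires[f] / totals[f] for f in totals}

        scores = feature_scores(history)
        # The model "learns" that womens_college predicts rejection, not because
        # of qualification, but because the past decisions it trained on were biased.
        print(scores["womens_college"])  # 0.0
        print(scores["cs_degree"])       # 0.6
        ```

        A real system would use a learned classifier rather than raw hire rates, but the failure mode is the same: the model has no way to tell a genuine signal of quality from a historical prejudice encoded in its labels.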