• BrotherCod@kbin.social · 1 year ago

    While I agree with you 100% that programming can be affected by the programmers' biases, there's a much simpler problem that face recognition was having a hard time overcoming. At least when it was a major topic about a decade ago, sensors were having a lot of trouble with the low contrast of some black people's faces. Anyone who's had a black friend and was a shutterbug will know what kind of problems you can run into when trying to get a proper exposure without making a black person disappear from the photograph entirely. It was just an inherent limitation of the technology they were using.

    The last statistics I read were something like 20 to 30% positive matches, which we know damn well is too low for a workable technology. The success rate on Caucasian and lighter skin tones wasn't great either; there was still something like a 60% false-positive rate. The software may have gotten better over the past decade, but we all know that whether it did or not, they're still going to use it.
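
    A toy numeric sketch of that sensor limitation (my numbers, purely illustrative, assuming a linear 8-bit sensor response): facial detail spanning the same reflectance ratio survives as far fewer distinct sensor levels when the base tone is dark, so less usable contrast makes it past quantization and noise.

    ```python
    def distinct_levels(base_level, reflectance_ratio=2.0):
        """Distinct 8-bit sensor codes covering a 2:1 reflectance span below base_level."""
        low = round(base_level / reflectance_ratio)
        return base_level - low + 1

    print(distinct_levels(180))  # lighter base tone: 91 distinct levels of facial detail
    print(distinct_levels(30))   # darker base tone:  16 levels for the same relative detail
    ```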

    • Smoogs@lemmy.world · 1 year ago

      This isn’t image manipulation from the 1990s. You’re assuming it operates on isolated pixels that need massive contrast; modern recognition computes each pixel relative to its neighbors to extract the pattern.
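
      As a minimal sketch of what "calculated by neighbor" can mean in practice, here's a basic Local Binary Pattern pass (my choice of example, not something named in this thread): each pixel is encoded by comparing it to its 8 neighbors, so the feature captures local structure rather than absolute brightness.

      ```python
      import numpy as np

      def local_binary_pattern(img):
          """8-bit LBP code per interior pixel: compare each pixel to its 8 neighbors."""
          h, w = img.shape
          center = img[1:-1, 1:-1]
          codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
          # Neighbor offsets, clockwise from the top-left corner.
          offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                     (1, 1), (1, 0), (1, -1), (0, -1)]
          for bit, (dy, dx) in enumerate(offsets):
              neighbor = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
              codes |= (neighbor >= center).astype(np.uint8) << bit
          return codes

      # The codes are unchanged by a constant brightness shift -- only
      # neighbor-to-neighbor relationships matter, not absolute levels.
      img = np.random.default_rng(0).integers(0, 200, (8, 8), dtype=np.uint8)
      assert np.array_equal(local_binary_pattern(img), local_binary_pattern(img + 50))
      ```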

      This is just inconsideration driving laziness: they calibrate to a median level of the image that caters to more reflective skin, which reads light more easily, and then release it as ‘done’. The software is much more sophisticated than you’re giving it credit for, but it’s only being used to that potential in industries like film.
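
      A toy sketch of that failure mode (hypothetical numbers): if the pipeline meters exposure on the whole frame's mean instead of the face region, a darker face in front of a bright background gets pushed further into the shadows.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      # Hypothetical frame: bright background with a darker face region in the middle.
      frame = rng.normal(190, 10, size=(100, 100))
      frame[35:65, 35:65] = rng.normal(45, 8, size=(30, 30))

      def gain_for(region, target=118):
          """Exposure gain that would bring a region's mean to mid-gray."""
          return target / region.mean()

      print(f"meter on whole frame: gain {gain_for(frame):.2f} -> face driven darker")
      print(f"meter on face region: gain {gain_for(frame[35:65, 35:65]):.2f} -> face exposed properly")
      ```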