If you train something off the internet it’s bound to come out a bit racist. And I like to think that, thanks to me, it’s also slightly biased against people who put ranch dressing on pizza.
I hope you get banned for your hateful and biggoted comments. I also hate ranch on pizza, but I care about the underrepresented class of people who do. They are humans and deserve all the rights as you or I. I am appalled that this kind of blatant hatred still exists in 2023. You, sir (or ma’am, or whatever pronoun you prefer), are a loathsome person and I’m ashamed to be in the same species as you.
I hope you get banned for your hateful and bigoted comments. I also hate people who hate ranch on pizza, but I care about the underrepresented class of people who do. They are humans and deserve all the rights as you or I. I am appalled that this kind of blatant hatred still exists in 2023. You, sir (or ma’am, or whatever pronoun you prefer), are a loathsome person and I’m ashamed to be in the same species as you.
Yeah, that’s why we should discriminate against people like this, or at least feel better than them
Doin gods work 😌🙏🛐
thanks I’ll pick the racist robot doctor over not having healthcare
I completely disagree with you. Maybe it’s because I’m old, but I don’t want any damned racist robot doctor telling me what to do. I just want my good old human, racist doctor treating me, like God intended.
Ok but you could find studies that show doctors, or any staff, perpetuate racism. Seems like it would be less offensive coming from a computer.
But possibly more damaging, and less accountable.
It doesn’t appear to be limited to racism.
Humans inherit artificial intelligence biases
Artificial intelligence recommendations are sometimes erroneous and biased. In our research, we hypothesized that people who perform a (simulated) medical diagnostic task assisted by a biased AI system will reproduce the model’s bias in their own decisions, even when they move to a context without AI support. In three experiments, participants completed a medical-themed classification task with or without the help of a biased AI system. The biased recommendations by the AI influenced participants’ decisions. Moreover, when those participants, assisted by the AI, moved on to perform the task without assistance, they made the same errors as the AI had made during the previous phase. Thus, participants’ responses mimicked AI bias even when the AI was no longer making suggestions. These results provide evidence of human inheritance of AI bias.
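The two-phase protocol in that abstract can be illustrated with a toy simulation. Everything below is invented for illustration (the task, the 0.35/0.5 criteria, the update rule, the function names) and is not the paper's actual task or data: a simulated participant starts with the correct decision criterion, drifts toward a biased AI's criterion during an assisted phase, and then keeps making the AI's errors once the AI is gone.

```python
import random

def ai_recommend(x, bias=0.15):
    # Hypothetical biased AI: it uses a shifted criterion (0.35 instead
    # of the correct 0.5), so ambiguous cases are over-called "positive".
    return x > 0.5 - bias

def run_participant(use_ai, n_trials=500, step=0.01, seed=1):
    # Phase 1: the participant starts with the correct criterion (0.5)
    # and, when assisted, nudges it toward agreement with the AI.
    rng = random.Random(seed)
    threshold = 0.5
    for _ in range(n_trials):
        x = rng.random()
        own = x > threshold
        if use_ai:
            rec = ai_recommend(x)
            if rec and not own:
                threshold -= step  # AI was more liberal: lower criterion
            elif own and not rec:
                threshold += step  # AI was more conservative: raise it
    return threshold

def ambiguous_positive_rate(threshold, n=2000, seed=2):
    # Phase 2 (no AI): share of "positive" calls on ambiguous items
    # (x between 0.35 and 0.5), where the correct answer is "negative".
    rng = random.Random(seed)
    xs = [0.35 + 0.15 * rng.random() for _ in range(n)]
    return sum(x > threshold for x in xs) / n

t_assisted = run_participant(use_ai=True)   # criterion drifts toward 0.35
t_control = run_participant(use_ai=False)   # criterion stays at 0.5
```

The point of the toy model is the carry-over: the criterion shift acquired during the assisted phase persists after the AI is removed, so the participant keeps reproducing the AI's errors unassisted, which is the "inheritance" the abstract describes.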