AI-infused hiring programs have drawn scrutiny, most notably over whether they end up exhibiting biases based on the data they’re trained on.
Isn’t the whole point of AI decision making to provide plausible deniability for this sort of thing?
Yes, but if you train an AI on racist or sexist data, it will naturally reproduce those biases.