- cross-posted to:
- hackernews@lemmy.smeargle.fans
- technology@lemmy.zip
There is a discussion on Hacker News, but feel free to comment here as well.
What could go wrong, Facebook did so well with the data they had before!! Let’s throw some cocaine and mescaline in there for good measure
This is the best summary I could come up with:
Meta has reportedly broken up its Responsible AI (RAI) team as it puts more of its resources into generative artificial intelligence.
The Information’s report quotes Jon Carvill, who represents Meta, as saying that the company will “continue to prioritize and invest in safe and responsible AI development.” He added that although the company is splitting the team up, those members will “continue to support relevant cross-Meta efforts on responsible AI development and use.”
The team had already been restructured earlier this year; Business Insider wrote that the accompanying layoffs left RAI “a shell of a team.” That report went on to say the RAI team, which had existed since 2019, had little autonomy and that its initiatives had to go through lengthy stakeholder negotiations before they could be implemented.
RAI was created to identify problems with Meta’s AI training approaches, including whether the company’s models are trained on adequately diverse data, with an eye toward preventing issues like moderation failures on its platforms.
Automated systems on Meta’s social platforms have led to problems such as a Facebook translation error that caused a false arrest, WhatsApp AI sticker generation that produced biased images for certain prompts, and Instagram algorithms that helped people find child sexual abuse material.
Moves like Meta’s, and a similar one by Microsoft earlier this year, come as governments worldwide race to create regulatory guardrails for artificial intelligence development.
The original article contains 356 words, the summary contains 231 words. Saved 35%. I’m a bot and I’m open source!