Microsoft’s LinkedIn will update its User Agreement next month with a warning that it may show users generative AI content that’s inaccurate or misleading.

LinkedIn thus takes after its parent, which recently revised its Service Agreement to make clear that its Assistive AI should not be relied upon.

LinkedIn, however, has taken its denial of responsibility a step further: it will hold users responsible for sharing any policy-violating misinformation created by its own AI tools.

The relevant passage, which takes effect on November 20, 2024, reads:

Generative AI Features: By using the Services, you may interact with features we offer that automate content generation for you. The content that is generated might be inaccurate, incomplete, delayed, misleading or not suitable for your purposes. Please review and edit such content before sharing with others. Like all content you share on our Services, you are responsible for ensuring it complies with our Professional Community Policies, including not sharing misleading information.

In short, LinkedIn will offer features that generate content automatically, but that content may be inaccurate. Users are expected to review and correct any false information before sharing it, because LinkedIn won’t be held responsible for the consequences.

The platform’s Professional Community Policies direct users to “share information that is real and authentic” – a standard to which LinkedIn is not holding its own tools.

  • fluxion@lemmy.world

    If companies don’t trust their own AI on their own sites then they are pushing a shitty unvetted algorithm and hiding behind the word “AI” to avoid accountability for their own software bugs. If we want AI to be anything other than trash then companies need to be held accountable just like with any other software they produce.