Key Facts:

  • The AI system uses ten categories of social emotions to identify violations of social norms.

  • The system has been tested on two large datasets of short texts, validating its models.

  • This preliminary work, funded by DARPA, is seen as a significant step in improving cross-cultural language understanding and situational awareness.
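To make the first bullet concrete: the article does not name the ten categories or describe the model, so the emotion labels, keyword lexicon, and thresholding below are all hypothetical stand-ins. This is only a minimal sketch of the general idea — scoring text against social-emotion categories and flagging ones associated with norm violations — not the DARPA-funded system's actual method.

```python
# Hypothetical sketch: the ten labels and the tiny keyword lexicon are
# invented for illustration; the real system's categories and classifier
# are not described in the article.

SOCIAL_EMOTIONS = [
    "guilt", "shame", "pride", "embarrassment", "admiration",
    "contempt", "gratitude", "anger", "envy", "compassion",
]

# Toy lexicon: a few trigger words per (covered) emotion category.
LEXICON = {
    "guilt": {"sorry", "apologize", "regret"},
    "shame": {"ashamed", "humiliated", "disgraceful"},
    "contempt": {"pathetic", "worthless", "beneath"},
    "anger": {"outraged", "furious", "unacceptable"},
    "gratitude": {"thanks", "grateful", "appreciate"},
}

# Assumed mapping: emotions whose presence signals a possible violation.
VIOLATION_CUES = {"guilt", "shame", "contempt", "anger"}


def score_emotions(text: str) -> dict:
    """Count lexicon hits per emotion category in the text."""
    words = set(text.lower().split())
    return {emo: len(words & cues) for emo, cues in LEXICON.items()}


def flags_norm_violation(text: str) -> bool:
    """Flag the text if any violation-cue emotion scores above zero."""
    scores = score_emotions(text)
    return any(scores[e] > 0 for e in VIOLATION_CUES if e in scores)
```

For example, `flags_norm_violation("I am ashamed of what happened")` returns `True` (a "shame" cue fires), while `flags_norm_violation("thanks for your help")` returns `False` (only "gratitude" fires, which is not a violation cue). A real system would replace the keyword lexicon with a learned classifier over short texts, as the second bullet's dataset evaluation implies.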

  • FlickOfTheBean@lemmy.world · 1 year ago

    Unless this is just for identifying social-norm violations in written government-to-government communication, this seems vastly… infeasible, I guess. Norms change over time, so you're going to have to keep updating this model whenever it's finally noticed that a change has occurred. If anything, it might generate a completely new set of grammar/phrasing expectations through feedback from this likely-to-not-change-very-much ruleset… As in, if you thought politically correct phrasing was annoying now, just wait until the AI says you're not doing it well enough.

    Idk though, this isn’t my specialty area, anyone care to tell me how I’m wrong? What good can this really do?

    (I swear I did read the article, it just isn’t clicking over the sound of my loud pessimism)