I know there are other ways of accomplishing that, but this might be a convenient way of doing it. I’m wondering, though, whether Reddit is still reverting these changes.

  • Lvxferre@mander.xyz · 10 months ago

    They almost certainly do, if only because of the practicalities of adding a new comment version.

    If this is true, it shifts the problem from “not having it” to “not knowing which version should be used” (to train the LLM).

    They could feed it the unedited versions and call it a day, but a lot of the time people edit their content to correct it or add further info, especially for “meatier” content (like tutorials). So there’s still some value in the edits, and I believe that Google will be at least tempted to use them.

    If that’s correct, editing it with nonsense will lower the value of edited comments for LLM training. It should have an impact, just not as big as if they kept no version system.

    It would also help with administration/moderation tasks if staff could see whether people posted rule-breaking content and then tried to hide it behind edits.

    I know from experience (I’m a former Reddit janny) that moderators can’t see earlier versions of the content, only the last one. The admins might though.

    That said, one of the many Spez controversies did show that they are capable of making actual edits on the back end if they wished.

    The one from TD, right?

    • spez: “let them babble their violent rhetoric. Freeze peaches!”
    • also spez: “nooo, they’re casting me in a bad light. I’m going to edit it!”
    • londos@lemmy.world · 10 months ago

      Honestly, parsing through version history is actually something an LLM could handle. It might even make more sense of a thread with the history than without it: for example, when someone replies to a comment and the parent is then edited to say something different. No one would have to waste time filtering anything.

      • Lvxferre@mander.xyz · 10 months ago (edited)

        They could use an LLM to parse the version history of all those posts/comments, then use the result to train another LLM. It sounds like a bad (and expensive, processing-time-wise) idea, but it could be done.

        EDIT: thinking further on this, it’s actually fairly doable. It’s generally a bad idea to feed the output of one LLM into another, but in this case you’re simply using it to pick one among multiple versions of a post/comment written by a human being.

        It’s still worth scorching the earth, though, so other human users don’t bother with the platform.
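The “use an LLM to pick one among multiple versions” idea above could be sketched roughly as follows. This is a purely hypothetical illustration: `score_version` here is a trivial word-count stand-in for a real LLM quality-scoring call, and both function names are invented for this sketch.

```python
def score_version(text: str) -> float:
    """Placeholder for an LLM-based quality score.

    This stand-in simply favors longer revisions, on the rough
    assumption that substantive edits (corrections, added info)
    grow the text, while scorched-earth edits shrink or garble it.
    """
    return float(len(text.split()))

def pick_training_version(versions: list[str]) -> str:
    """Return the highest-scoring revision of a post/comment.

    `versions` is the edit history, oldest first. Ties go to the
    earliest revision (max() keeps the first maximum), so a
    nonsense overwrite never beats an equally-scored original.
    """
    return max(versions, key=score_version)

# Toy edit history: original, helpful edit, scorched-earth edit.
history = [
    "Use pip install foo",
    "Use pip install foo (works on 3.10+ too)",
    "asdf asdf asdf",
]
print(pick_training_version(history))  # prints the second, most informative revision
```

With a real LLM doing the scoring, the loop over the corpus would dominate cost, which is the “expensive, processing-time-wise” concern raised above.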