As technology advances and computers become increasingly capable, the line between human and bot activity on social media platforms like Lemmy is becoming blurred.

What are your thoughts on this matter? How do you think social media platforms, particularly Lemmy, should handle advanced bots in the future?

  • Docus@lemmy.world · 1 month ago

    It’s not just the internet. Students, for example, are handing in essays straight from ChatGPT. University scanners flag them and the students may fail, but there is no good evidence on either side: the university’s detection is unreliable (and unlikely to improve on false positives, or negatives for that matter), and it’s hard for a student to prove they did not use an LLM. Job seekers send in LLM-generated cover letters. Consultants probably give LLM-based reports to clients. We’re doomed.

      • wholookshere · edited · 1 month ago

        Spoken by someone who has never dealt with a learning disability

      • Docus@lemmy.world · 1 month ago

        I don’t disagree, but it’s probably not that easy. Universities in my country no longer have the resources to run many oral exams, and depending on the subject, exams don’t test the same skills as coursework.