• irotsoma
    20 hours ago

    It could totally be used effectively, but only if they do the work to train the LLM exclusively on very specific content. Since they believe the LLM shouldn’t require people to train it, though, and seem to assume more content is always better, that will never happen.

    But of course the other issue is that either way, the LLM will still be biased by the content provided to it as training data. If it’s trained on religious content, or some other set of content that a group believes is “wholesome” or “kid friendly,” it might still end up saying some pretty messed-up stuff. If religious content is used, for example, it might tell very young girls they are property owned by men (their fathers or husbands) and must give their bodies freely to them. Maybe not directly, but it would be implied in much of the advice it gives, since that is a deeply seated belief in most current monotheistic religions and is implied in many of their texts, even if it’s no longer openly practiced or legal in mainstream Western societies.