• Soyweiser@awful.systems

    The issue is (as I’ve said before) that we’ve essentially created a computer program that is just as fallible as humans.

    I’d say it is worse, as we have more physical presence. We can think it’s raining, look outside, and realize somebody is spraying water on the windows and we were wrong. The LLM can only react to input; after a correction it will apologize, and then there’s a high chance it will still talk about how it’s raining.

    We can also actually count and understand things, not just predict the most likely next word.

    But yes, from a security perspective I don’t get why people include LLMs in things, especially with the whole “data flows back into the LLM for training” thing a lot of the LLM providers are probably doing.