Here’s what ChatGPT/Google Bard have to say:
The answer is: not necessarily. Most of the bacteria on our skin are adapted to living in wet environments, so they will not suffocate. However, some bacteria may be washed away or killed by the chlorine in the pool.
Why are we relying on language models to answer questions? These things don’t really “know” anything, right?
They don’t, but they sound as convincing (and are probably as correct) as a random blog you’d find by googling your question.
No one knows anything, get over yourself, buddy. It gave a correct answer way more politely than I ever could, so who’s gonna complain?
They don’t, and people are way too blasé about how “oh it’s actually the same as googling because it’s just taking from sources online anyway,” when in reality it does nothing to “keep” the knowledge it gets from those sites and is just stringing together words that often go together. It’s like thinking your phone’s predictive text can answer your questions, if your phone also invented quotes and sources (this has already been an issue with journalists and lawyers using ChatGPT to “research”).
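To make the predictive-text analogy concrete, here’s a minimal sketch in Python (the tiny “corpus” and function name are made up for illustration): it counts which word tends to follow which, then chains the most common continuations. It stores no facts at all, yet the output can read like a sensible sentence.

```python
from collections import defaultdict, Counter

# Toy "predictive text": count which word tends to follow which, then
# chain the likeliest continuations. No facts are stored, only word
# co-occurrence in the (made-up) corpus below.
corpus = (
    "bacteria on our skin are adapted to wet environments "
    "bacteria in the pool may be washed away by the chlorine in the pool"
).split()

following = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    following[a][b] += 1

def autocomplete(word, length=8):
    out = [word]
    for _ in range(length):
        if word not in following:
            break
        word = following[word].most_common(1)[0][0]  # likeliest next word
        out.append(word)
    return " ".join(out)

print(autocomplete("bacteria"))
# -> "bacteria on our skin are adapted to wet environments"
```

Real models use neural networks over vastly larger corpora and much longer contexts, but the training objective is the same kind of “what word comes next” guess, not a lookup into stored knowledge.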
Thank god someone who understands. I hated how, towards the end, Reddit was so full of misinformation and people talking out of their ass with confidence. Hope Lemmy can steer away from those tendencies. It’s okay if we don’t have the answer sometimes.
Don’t they pull from online sources? So it’s basically googling with extra steps and an unpredictable middleman.
That would be right if they understood/knew what they were talking about. It’s more akin to really advanced autocorrect that sounds/reads like something the AI was trained on. So it sounds correct but really has no basis in truth other than “the model predicts a human would say X next”. Truth is rarely the goal of any of these machine learning language models, afaik.
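A tiny, contrived illustration of that point (the probabilities below are invented, not taken from any real model): if a wrong continuation happens to be more common in the training text than the right one, a likelihood-maximizing picker will confidently return the wrong answer.

```python
# Hypothetical next-word probabilities: the "model" only knows which
# continuation is most likely in its training text, not which is true.
next_word_probs = {
    ("the", "capital", "of", "australia", "is"): {
        "sydney": 0.55,    # written more often in casual text, but wrong
        "canberra": 0.30,  # correct, but less frequently written
        "melbourne": 0.15,
    },
}

def predict(context):
    probs = next_word_probs[tuple(context)]
    return max(probs, key=probs.get)  # pick the likeliest word, right or wrong

print(predict(["the", "capital", "of", "australia", "is"]))  # -> "sydney"
```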
Rabbits
People are downvoting because of the first line.
Yeah, I’m aware. There were like 10 comments with no replies, so I thought it’d be fun to see what the chatbot would say. I didn’t take its answer too seriously, but I knew people might be sensitive to the answer. It would have been unfair of me not to say what it was, though. By providing a “source”, people can at least decide whether or not to discard the information.