A new report states that the US House of Representatives has banned its staff members from using Microsoft's Copilot AI assistant due to the risk of leaks "to non-House approved cloud services."
I can’t imagine using any LLM for anything factual. It’s useful for generating boilerplate and that’s basically it. Any time I try to get it to find errors in what I’ve written (either communication or code) it’s basically worthless.
My little brother was using GPT for homework and asked it the probability of an extra Sunday in a leap year (52 weeks + 2 days), and it said 3/8. One of the possible outcomes it listed was fkng Sunday, Sunday. I asked how two Sundays can come consecutively and it made up a whole bunch of bs. The answer is so simple: 2/7. The sources it listed also had the correct answer.
All it does is create answers that sound like they might be correct. It has no working cognition. People who ask questions like that expect reasoning about probability and days in a year. All it does is mash the two together; it can’t think about it.
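The correct 2/7 reasoning can be sketched in a few lines of Python (assuming, as the comment does, that a leap year is 52 weeks plus 2 extra consecutive days, each starting weekday equally likely):

```python
# A leap year has 366 days = 52 weeks + 2 extra consecutive days.
# A 53rd Sunday occurs iff one of those two extra days is a Sunday.
DAYS = ["Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"]

# The 7 equally likely pairs of consecutive extra days.
pairs = [(DAYS[i], DAYS[(i + 1) % 7]) for i in range(7)]

# Only (Sun, Mon) and (Sat, Sun) contain a Sunday.
favourable = [p for p in pairs if "Sun" in p]
print(f"{len(favourable)}/{len(pairs)}")  # 2/7
```

Note that "Sunday, Sunday" cannot appear: the pairs are consecutive calendar days, so no pair repeats a weekday.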
Really? It spotted a missing push_back like 600 lines deep for me a few days ago. I’ve also had good success at getting it to spot missing semicolons that C++ compilers can’t because C++ is a stupid language.

You can thank all open source developers for that by supporting them.
Huh?
All LLMs are trained on open source code without any acknowledgment of or compliance with the licenses. So those developers’ hard work is responsible for you being able to take advantage of it now. You can say thank you by supporting them.
Ah yes, I am aware. Gotta love open source :)
Were you under the impression that I said anything to the contrary?
No, just taking any opportunity to spread the word and support open source.
It’s probably just the novelty wearing off. People expected very little from it initially, then it got hyped up, which raised expectations. Once those raised expectations collide with the memory of it once exceeding them, you start seeing all the flaws.
I find it useful for quickly reformatting small tables and similar snippets for my reports. It’s often far simpler and quicker to just drop that in there and say what to do than to write a short Python script.
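For comparison, here is a rough sketch of the kind of short Python script the comment is weighing against (the column names and data are made up for illustration):

```python
# Hypothetical example: align a small table of rows into padded,
# plain-text columns for pasting into a report.
rows = [
    ["name", "count"],
    ["alpha", "12"],
    ["beta", "3"],
]

# Width of each column = length of its longest cell.
widths = [max(len(row[i]) for row in rows) for i in range(len(rows[0]))]

# Pad every cell to its column width and join with two spaces.
for row in rows:
    print("  ".join(cell.ljust(w) for cell, w in zip(row, widths)))
```

Even for something this small, describing the desired layout in one sentence is often faster than writing and debugging the script.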