Oh poor baby, do you need the dishwasher to wash your dishes? Do you need the washing machine to wash your clothes? You can’t do it?
how about fucking your wife?
What I wrote was fucking my wife.
Had it write a simple shader yesterday because I have no idea how those work. It told me how to use the mix and step functions to optimize for GPUs, then promptly added some errors I had to find myself. Actually not that bad, because after fixing it I do understand the code. Very educational.
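For anyone wondering what that optimization looks like: `mix` and `step` let you replace per-pixel `if` branches with plain arithmetic, which GPUs evaluate much more uniformly. A minimal Python sketch of the GLSL semantics (the function names match GLSL built-ins; the threshold example is my own illustration, not from the shader in question):

```python
def mix(a, b, t):
    # GLSL mix(): linear interpolation between a and b by factor t.
    return a * (1.0 - t) + b * t

def step(edge, x):
    # GLSL step(): 0.0 if x is below edge, else 1.0.
    return 0.0 if x < edge else 1.0

# Branchy version: pick color_b at or above a threshold, color_a below it.
def shade_branchy(value, color_a, color_b, threshold):
    return color_b if value >= threshold else color_a

# Branchless version: same result, expressed as arithmetic that a GPU
# can run identically across every pixel with no divergent branches.
def shade_branchless(value, color_a, color_b, threshold):
    return mix(color_a, color_b, step(threshold, value))
```

Composing `step` into the `t` argument of `mix` is the standard trick: the step output of 0.0 or 1.0 selects one endpoint of the interpolation.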
This is my experience using it for electrical engineering and programming. It will give me 80% of the answer, and the remaining 20% is hidden errors. Turns out the best way to learn math from GPT is to ask it a question you know the answer to (but not the process). Then, reverse engineer the process and determine what mistakes were made and why they impact the result.
Alternatively, just refer to existing materials in the textbook and online. Then you learn it right the first time.
thank you for that last sentence because I thought I was going crazy reading through these responses.
Some people retain information easier by doing than reading.
ok, I finally figured out my view on this I believe. I was worried I was being a grumpy old man who was just yelling at the AI (still probably am, but at least I can articulate why I feel this is a negative reply to my concerns)
It’s not reproducible.
I personally don’t believe asking an AI with a prompt and then “troubleshooting” it is the best educational tool for the masses to promote to each other. It works for some individuals, but as you can see, the results will always vary with time.
There are so many well-promoted, excellent educational tools that emphasize the “doing” part instead of reading. You don’t need to ask an AI prompt and then try to fix all the horrible shit when there is always a real chance you will never be able to solve it, because the AI gave you an impossible answer to fix.
I get that some people do it, some people succeed, and some people are maybe so lonely that this interaction is actually preferable, since it seems like some weird sort of collaboration. The reality is that the AI was trained unethically and has so many moral and ethical repercussions that just finding a decent educator or forum/Discord to actually engage with is orders of magnitude better for society and your own mental processes.
In this context AI-generated code sounds like a fast track to a final exam before qualification.
Shaders are black magic so understandable. However, they’re worth learning precisely because they are black magic. Makes you feel incredibly powerful once you start understanding them.
I used it yesterday because I couldn’t get mastodon’s version of http signing working. It spat out a shell script which worked, which is more than my attempts did.
mastodon’s version of http signing
I hate it so damn much
Same. They don’t even use the standard that was finalised, they are using an RFC draft that expired in 2022.
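For anyone curious what that expired draft (draft-cavage-http-signatures) actually involves: the signer builds a newline-joined “signing string” from pseudo-headers and real headers, RSA-signs it, and ships the result in a `Signature` header. A minimal Python sketch of just the string construction and header skeleton (the RSA-SHA256 signing and key handling are omitted, and the exact header set shown here is an assumption based on what Mastodon-style servers typically expect):

```python
import base64
import hashlib

def signing_string(method, path, host, date, body):
    # Per the draft, "(request-target)" is the lowercased method plus path.
    # The Digest header covers the request body with SHA-256.
    digest = "SHA-256=" + base64.b64encode(
        hashlib.sha256(body.encode()).digest()).decode()
    lines = [
        f"(request-target): {method.lower()} {path}",
        f"host: {host}",
        f"date: {date}",
        f"digest: {digest}",
    ]
    return "\n".join(lines), digest

def signature_header(key_id, b64_signature):
    # b64_signature would be the base64 RSA-SHA256 signature of the
    # signing string; this sketch just assembles the header value.
    return (f'keyId="{key_id}",algorithm="rsa-sha256",'
            f'headers="(request-target) host date digest",'
            f'signature="{b64_signature}"')
```

The `headers` parameter must list, in order, exactly the components that went into the signing string, which is where most hand-rolled implementations go wrong.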
Well, that escalated quickly.
Do you need a car to go 55 mph? AI is a tool. The problem is the ownership of these tools.
The ownership, energy cost, reliability of responses and the ethics of scraping and selling other people’s work, but yeah.
All things that could have been/could be fixed, but let’s dunk on end users with memes instead.
I just had Copilot hallucinate 4 separate functions, despite me giving it 2 helper files for context that contain all possible functions available to use.
AI iS tHe FuTuRE.
Even if it IS the future, it is not the present. Stop using it now.
Not the person you replied to, but my job wants us to start using it.
The idea that this will replace programmers is dumb, now or ever. But I’m okay with a tool to assist. I see it as just another iteration of the IDE.
Art and creative writing are different beasts altogether, of course.
My wife uses AI tools a lot (while I occasionally talk to Siri). But she uses it for things like: she’s working on a book and so she used it to develop book cover concepts that she then passed along to me to actually design. I feel like this is the sort of thing most of us want AI for—an assistant to help us make things, not something to make the thing for us. I still wrestle with the environmental ethics of this, though.
The environmental impacts can be solved easily by pushing for green tech. But that’s more a political problem than a technical problem IMO. Like stop subsidizing oil and gas and start subsidizing nuclear (in the short term) and green energy in the long term.
It’s cutting my programming work in half right now with quality .NET code. As long as I stay in the lead and have good examples + context in my codebase, it saves me a lot of time.
This was not the case for Copilot though, but Cursor AI combined with Claude 3.7 is quite advanced.
If people are not seeing any benefit, I think they have the wrong use cases, workflow or tools. Which can be fair limitations depending on your workplace of course.
You could get in a nasty rabbit hole if you vibe-code too much though. Stay the architect and check generated code / files after creation.
I use it extremely sparingly. I’m critical of anything it gives me. I find I waste more time fixing its work and removing superfluous code more than I do gaining value from it.
Our tiny company of software engineers has embraced it in the IDE for what it is: a tool.
As a tool we have saved a crazy amount of man hours and as I don’t work for ghouls we recently got pay increases and a reduction in hours.
There are only 7 of us including the two owner / engineers and it’s been a game changer.
The Copilot code completion in VSCode works surprisingly well. Asking Copilot in the web chat about anything usually makes me want to rip my hair out. I have no idea how these two could possibly be based on the same model.
It quite depends on your use case, doesn’t it? This decades-old phrase about an algorithm in Fractint always stuck with me: “[It] can guess wrong, but it sure guesses quickly!”
Part of my job is getting an overview - just some generic leads and hints - about topics completely unknown to me, really fast. So I just ask an LLM, verify the links it gives and create a response within like 10-15 minutes. So far, no complaints.

Yeah, I find the code completion is pretty consistent and learns from other work I’m doing. Chat, and asking it to do anything, though, is a hallucinogenic nightmare.
I don’t need Chat GPT to fuck my wife but if I had one and her and Chat GPT were into it then I would like to watch Chat GPT fuck my wife.
good on you!
Thanks. So I have Chat GPT, now all I need is a wife… Maybe I should ask Chat GPT for one.
nah, I want chat gpt to be my wife, since I don’t have a real one
/s
I would rather date an operating system than an LLM. The OS can be trusted with elevated rights and access to my files.
Spoiler
Assuming Samantha is Linux-based, of course, I would never date a non-free OS.
BSD waifus in shambles
Fuck /s.
I want a fuckable android… Uhh I mean maid wife.
ITT
I used Ai recently and it got some details wrong so it is entirely useless for anyone, anywhere under any circumstances, even though it’s less than six years old as a technology!
crosses arms
You guys are like newspaper men in the 1940s raging about TV being an experimental failure.
What is up with the rise of pro AI people on here? I just “talked” to some kind of person in support of it. Are tankies pro AI now?
Most don’t have a problem with AI itself and even find it useful in its proper use cases
What most of us hate about it is the corporations shoving it every which way where it doesn’t belong, doesn’t work and down all our throats for profits so they can make line go up
Seconded. I genuinely understand most of the hate against AI, but I can’t understand how some people are so completely against any possible implementation.
Sometimes, an LLM is just good at rewording documentation to provide some extra context and examples. Sometimes it’s good for reformatting notes into bullet points, or asking about that one word you can’t put your finger on but generally remember some details about, but not enough for the thesaurus to find it.
Limited, sure, but not entirely useless. Of course, when my fucking charity fundraising platform starts adding features where you can speak to it and tell it “donate $x to x charity” instead of just clicking the buttons yourself, and that’s where the development budget is going… yeah, I’m not exactly happy about that.
It’s pretty common to be pro-AI or at least neutral about it. Lemmy seems to have an echo chamber of hating it as far as I can tell, so maybe it’s just new people coming in?
Might be people who don’t give a shit one way or another are getting tired of the front page being filled with anti-AI memes every day.
thing I don’t like is on the rise
must be the tankies
who else would it be? They’re the only people on the internet, don’t you know?
Yes, and it must be fascists. But, I repeat you.
It’s useful. I ain’t letting it write software. But I can let it write my stupid report and paperwork while feeding it the important bits. Because I really don’t want to bother.
If that report and paperwork is inane, rat-race stuff, I won’t be as hard on you. But if it’s part of school work, you’re mentally cooked then.
If the ubiquity of LLMs kills the MLA essay, it’ll be worth the price.
I haven’t encountered an English teacher that knows how to teach someone how to make an effective argument or state a compelling case, what they know how to do is strictly adhere to the MLA handbook and spot minor grammatical pet peeves. From high school to university I’ve never had an English teacher call me up to discuss my paper to talk about how I could have more effectively made a point, but I’ve gotten commas written over with semicolons.
This comment completely encapsulates what is wrong. MLA essays? The format is going to die? You have issues with shitty teachers, which are problems with the systems in place, and you’re alright with AI taking away an important human experience? Like, come on. Can we stop using AI and critically think for a bit?
So, first, yeah, I’d be in favor of killing the format itself; the MLA format seems to have two functions: 1. to force tens of thousands of young adults to buy MLA handbooks every semester from college bookstores, and 2. to serve as a warning to any reader that the article they’ve found was written by an ENG112 student who didn’t give the first squirt of a Monday morning’s piss about it, because it was assigned to him more as a barrier for him to dodge than an exercise to strengthen him. Actual scholarship is done in the APA format, and we’d be better off if we just taught that.
Second, I reject the notion that writing tedious research papers qualifies as “an important human experience.” Again, a lot of folks are forced to dabble in it similarly to how they’re forced to dabble in mathematical proofs: once or twice in high school and once or twice in college, they’re required to rote memorize something for a couple weeks. I’m rather convinced that a lot of the time taken in school from about 7th grade up is designed to appear academic more than actually be academic.
Third, I’m in the camp that says scrap most of the idea we have of formal academic writing, for multiple reasons. Chiefly, the more of those worthless English teachers we can put back into food service where they belong, the better. Stepping a little bit out of my comedy internet persona a bit, I do believe the idea of “impersonal, dry, boring, jargon-laden, complicated” research papers has gone beyond any practical function it may have had. There is something to be said for using standardized language, minimizing slang and such. To me, that would be a reason to write in something like VoA simplified English rather than Sesquipedalian Loquaciousness. I’m also not alone in the idea that scientific concepts and research are getting to the point that text on page isn’t the right tool for conveying it; Jupyter notebooks and other such tools are often better than a stodgy essay.
Fourth, undoing the rigorous formats of “scholarly articles” may deal a blow to junk science. I’ve seen English teachers point to the essay format, presence of citations, presence in journals etc. as how you tell a written work has any merit. In practice this has meant that anyone wanting to publish junk science has an excellent set of instructions on how to make it look genuine. Hell, all you’ve got to do is cite a source that doesn’t exist and you’ve created truth from whole cloth, like that “you swallow 8 spiders in your sleep” thing.
Finally, whether the problem lies in the bureaucracy that creates curricula or individual teachers, I’m in favor of forcing their hands by eliminating the long-form essay as a thing they can reasonably expect students to do on a “here’s this semester’s busywork, undergrad freshman” basis.
I had the same experience, but I recently helped my sister with a homework essay and she had a full page with the exact requirements and how they were graded.
90% of the points were for content, the types of arguments, proper structure and such. Only 10% were for spelling and punctuation.
Meaning she could hand in a complete mess, but as long as her argument was solid and she divided the introduction, arguments and conclusion into paragraphs, she’d still get a 9/10. No grumpy teachers docking half her grade for a few commas. She gets similar detailed instructions for every subject where I used to struggle with vague assignments like “give a good presentation”. It was so bad sometimes, the teacher let the class grade each other. (Note: we aren’t American, not even English.)
Oh no, I used the reformatting and rewording machine to do something that I had been doing for years and still suck at doing. I can’t write a legible, well-formatted sentence to save my life. I just feed it whatever it needs to say, it works its magic, and I get something that looks understandable to my coworkers.
Okay, I am exaggerating, but I really struggle to be concise, and this helps me.
But this is a problem with society. AI won’t fix this at all. It will just exacerbate the issue.
Btw, I’d suggest installing DeepSeek (or any other model) locally so that you don’t give your data away for free to others (also for security reasons).
Take advantage of the fact that it is free/open software and somewhat easy to install.
Pretty much how I use it. Unimportant waste of time tasks like forms from HR and mandatory “anonymous” surveys. Refuse to do it until told directly and then get AI to write the most inoffensive and meaningless corporate bullshit.
Of course, not having to do the task at all would be a more efficient use of my time, but we get ignored when we say these forms are pointless. I haven’t heard anyone say anything positive about them in over a year.
Name one use.
Converting technician to HR
I just got a high priority request to write a couple of VBA macros to fetch data from a database then use it to make a bunch of API queries. I know VBA about as well as I know Chinese or Icelandic. I figured out the query and told Chat GPT to use it in a macro. It wrote all the VBA code. I went through a few rounds of “fix this bug, add this feature” and now my client is happy and I didn’t have to think much about VBA. I knew what and how to ask it for what I wanted and it saved me days of searching google and reading about VBA. That’s high value to me because I don’t care about VBA and don’t really want to know how to use it.
So use it on things no one cares about, got it.
I mean, it’s great at finding me sources of information.
Could Google do it? Not as well as it used to, or as well as AI, but yes.
I also use it to format my brainstorms.
Or to find where something is located, or the name of the function I need.
Or if I want info on items I’m looking to buy, like what the benefit of waxing my bike chain is over oiling it. That sort of thing.
AI is absolutely amazing at finding you misinformation, no doubt about that.
You are needlessly combative.
If I have reason to doubt it, and that depends on how important I feel the data I’m working with is, it gives me the source of its information so I can verify it. Google is even worse at providing misinformation, because it’s just a hose of information with a bunch of ads hidden amongst it, people bidding for elevated results, and spam.
No matter what tool you use to harness the internet, it’s up to the user to parse that info.
And a lot of that spam is now written by ChatGPT.
True but irrelevant.
In addition to niche political audiences, Lemmy is full of tech professionals who have probably integrated AI into their daily workflow in some meaningful ways.
People hate new things, then get used to them when they become actually useful and forget what the fuss was about in the first place. Repeat ad nauseam.
It’s called astroturfing.
I can see how this is going to be a real cuckold kink in a few years
People are constantly getting upset about new technologies. It’s a good thing they’re too inept to stop these technologies.
People are also always using one example to illustrate another, also known as a false equivalence.
There is no rule that states all technology must be considered safe.
Every technology is a tool - both safe and unsafe depending on the user.
Nuclear technology can be used to kill every human on earth. It can also be used to provide power and warmth for every human.
AI is no different. It can be used for good or evil. It all depends on the people. Vilifying the tool itself is a fool’s argument that has been used since the days of the printing press.
While this may be true for technologies, tools are distinctly NOT inherently neutral. Consider the automatic rifle or the nuclear bomb. In the rifle, the technology of the mechanisms in the gun is the same precision-milled clockwork engineering that is used for worldwide production automation. The technology of the harnessing of a nuclear chain reaction is the same, whether enriching uranium for a bomb or a power plant.
HOWEVER, BOTH the automatic rifle and the nuclear bomb are tools, and tools have a specific purpose. In these cases, that SOLE purpose is to, in an incredibly short period of time, with little effort or skill, enable the user to end the lives of as many people as possible. You can never use a bomb as a power plant, nor a rifle to alleviate supply shortages (except, perhaps, by a very direct reduction in demand). Here, our problem has never been with the technology of Artificial Neural Nets, which have been around for decades. It isn’t even with “AI” (note that no extant “AI” is actually “intelligent”)! No, our problem is with the tools. These tools are made with purpose and intent. Intent to defraud, intent to steal credit for the works of others, and the purpose of allowing corporations to save money on coding, staffing, and accountability for their actions, the purpose of having a black box a CEO can point to, shrug their shoulders, and say “what am I supposed to do? The AI agent told me to fire all of these people! Is it my fault that they were all <insert targetable group here>?!”
These tools cannot be used to know things. They are probabilistic models. These tools cannot be used to think for you. They are Chinese Rooms. For you to imply that the designers of these models are blameless — when their AI agents misidentify black men as criminals in facial recognition software; when their training data breaks every copyright law on the fucking planet, only to allow corporations to deepfake away any actual human talent in existence; when the language models spew vitriol and raging misinformation with the slightest accidental prompting, and can be hard-limited to only allow propagandized slop to be produced, or tailored to the whims of whatever despot directs the trolls today; when everyone now has to question whether they are even talking to a real person, or just a dim reflection, echoing and aping humanity like some unseen monster in the woods — is irreconcilable with even an iota of critical thought. Consider more carefully when next you speak, for your corporate-apologist principles will only help you long enough for someone to train your beloved “tool” on you. May you be replaced quickly.
You’ve made many incorrect assumptions and set up several strawman fallacies. Rather than try to converse with someone who is only looking to feed their confirmation bias, I’ll suggest you continue your learning by looking up the Dunning-Kruger effect.
EDIT: now I understand. After going through your comments, I can see that you just claim confirmation bias rather than actually having to support your own arguments. Ironic that you seem to show all of this erudition in your comments, but as soon as anyone questions your beliefs, you just resort to logical buzzwords. The literal definition of the bias you claim to find. Tragic. Blocked.
Blocking an individual on Lemmy is actually quite pointless, as they can still reply to your comments and posts; you just won’t know about it, while there can be whole pages of slander about you right under your nose.
I’d say it’s by design to spread tankie propaganda unabated
Blocking means that you don’t have to devote your time and thoughts to that person. That’s pretty valuable. And even if they decide they are going to attack you, not-responding is often a good strategy vs that kind of crap anyway - to avoid getting pulled into an endless bad-faith argument. (I’d still suggest not announcing that you’ve blocked them though. Just block and forget about it.)
You know what? They can go ahead and slander me. Fine. Good for them. They’ve shown they aren’t interested in actual argument. I agree with your point about the whole slander thing, and maybe there is some sad little invective, “full of sound and fury, signifying nothing”, further belittling my intelligence to try to console themself. If other people read it and think “yeah that dude’s right”, then that’s their prerogative. I’ve made my case, and it seems the best they can come up with is projection and baseless accusation by buzzword. I need no further proof of their disingenuity.
Can you point out and explain each strawman in detail? It sounds more like someone made good analogies that counter your point and you buzzword vomited in response.
Dissecting his wall of text would take longer than I’d like, but I would be happy to provide a few examples:
- I have “…corporate-apologist principles”.
— Though wolfram claims to have read my post history, he seems to have completely missed my many posts hating on TSLA, robber barons, Reddit execs, etc. I completely agree with him that AI will be used for evil by corporate assholes, but I also believe it will be used for good (just like any other technology).
- “…tools are distinctly NOT inherently neutral. Consider the automatic rifle or the nuclear bomb” “HOWEVER, BOTH the automatic rifle and the nuclear bomb are tools, and tools have a specific purpose”
— Tools are neutral. They have more than one purpose. A nuclear bomb could be used to warm the atmosphere of another planet to make it habitable. Not to mention any weapon can be used to defend humanity, or to attack it. Tools might be designed with a specific purpose in mind, but they can always be used for multiple purposes.
There are a ton of invalid assumptions about machine learning as well, but I’m not interested in wasting time on someone who believes they know everything.
I understand that you disagree with their points, but I’m more interested in where the strawman arguments are. I don’t see any, and I’d like to understand if I’m missing a clear fallacy due to my own biases or not.
Every tech can be safe and unsafe? I think you’ve oversimplified to the point of meaninglessness. Obviously some technologies are safer than others, and some are more useful than others, and some have overwhelming negative effects. Different tech can and should be discussed and considered on a case by case basis - not just some “every tech is good and bad” nonsense.
My big problems with AI are the climate cost and the unethical way that a lot of these models have been trained. If they can fix those, then yeah I don’t have an issue with people using it when it’s appropriate but currently lots of people are using it out of sheer laziness. If corpos are just using it to badly replace workers and kids are using it instead of learning how to write a fucking paragraph properly, then yeah, I’ll hate on AI
Been this way since the harnessing of fire or the building of the wheel.
Those fools do not realize that creating the torment nexus is just the same as inventing the wheel!
I am very smart!
Commenting on your torment tablet
TBH sometimes being on the internet is self harm.
Isn’t that comic gas company propaganda, or am I remembering it wrong?
You’re not going to find any disciples of Schumpeter in this thread, I’m afraid.
?
ChatGPT is learning from my fucking. All males will be amazing at oral sex and learning to last “almost too long”.
Too bad everyone will be fucking robots by then
Joke on you, my wife and I are into it!
It’s such a weird question. Why would I need ChatGPT to fuck my wife when we have the Dildoninator 9000 with Vac-u-loc attachments and Kung Fu grip?
Kwebbelkop making an AI generated video from an AI generated prompt for his AI to react to:
Kwebbelkop said on a Dutch TV channel that all the Dutch-speaking YouTubers were insignificant compared to him. Now the only thing he produces is garbage, and the Dutch YouTubers still make authentic content.
Fuck kwebbelkop.
deleted by creator
deleted by creator
To me the worst thing is, my college uses AI to make the tests. I can see it’s made by it because of multiple correct options, and in a group chat the teacher said something like “why lose an hour making it when AI can make it in seconds”.
I like to use AI to “convert” citations, like APA to ABNT. I’m too lazy for it, and it’s really just moving the position of the words, so yeah.
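To show how mechanical that “moving words around” is, here is a toy Python sketch that converts one narrow book-style citation shape from APA to an ABNT-like form (the regex only handles this exact pattern and the ABNT details are simplified; real citations vary far too much for this to be more than an illustration):

```python
import re

def apa_to_abnt(citation):
    # Matches only "Surname, I. (Year). Title. Publisher." -- one narrow
    # APA book shape, chosen purely for this illustration.
    m = re.match(r"(?P<surname>[^,]+), (?P<initials>[^(]+) "
                 r"\((?P<year>\d{4})\)\. (?P<title>[^.]+)\. (?P<pub>[^.]+)\.",
                 citation)
    if not m:
        raise ValueError("citation shape not recognized")
    # ABNT-style: surname in caps, year moved to the end.
    return (f"{m['surname'].upper()}, {m['initials'].strip()} "
            f"{m['title']}. {m['pub']}, {m['year']}.")
```

For example, `apa_to_abnt("Silva, J. (2020). Introduction to widgets. Acme Press.")` rearranges the same pieces into `"SILVA, J. Introduction to widgets. Acme Press, 2020."`.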