I absolutely hate AI. I’m a teacher and it’s been awful to see how AI has destroyed student learning. 99% of the class uses ChatGPT to cheat on homework. Some kids are subtle about it, others are extremely blatant about it. Most people don’t bother to think critically about the answers the AI gives and just assume it’s 100% correct. Even if sometimes the answer is technically correct, there is often a much simpler answer or explanation, so then I have to spend extra time un-teaching the dumb AI way.
People seem to think there’s an “easy” way to learn with AI, that you don’t have to put in the time and practice to learn stuff. News flash! You can’t outsource creating neural pathways in your brain to some service. It’s like expecting to get buff by asking your friend to lift weights for you. Not gonna happen.
Unsurprisingly, the kids who use ChatGPT the most are the ones failing my class, since I don’t allow any electronic devices during exams.
As a student I get annoyed the other way around. Just yesterday I had to tell my group for an assignment that we need to understand the system physically and code it ourselves in MATLAB, not copy-paste code from ChatGPT, because it's way too complex. I've seen people waste hours like that. It's insane.
Are you teaching in university? Also, you said "99% of the class uses ChatGPT"; are there really very few people who don't use AI?
In classes I taught at university recently, I only noticed fewer than 5% extremely obvious AI-helped papers. The majority are too bad to even be AI, and around 10% are good-to-great papers.
I’m generally ok with the concept of externalizing memory. You don’t need to memorize something if you memorize where to get the info.
But you still need to learn how to use the data you look up, and determine if it's accurate and suitable for your needs. ChatGPT rarely is, and people's blind faith in it is frightening.
Sounds like your curriculum needs updating to incorporate the existence of these tools. As I’m sure you know, kids - especially smart ones - are going to look for the lazy solution. An AI-detection arms race is wasting time and energy, plus mostly exercising the wrong skills.
AVID could be a resource for teaching ethics and responsible use of AI. https://avidopenaccess.org/resource/ai-and-the-4-cs-critical-thinking/
I have a guy at work that keeps inserting obvious AI slop into my life and asking me to take it seriously. Usually it’s a meeting agenda that’s packed full of corpo-speak and doesn’t even make sense.
I’m a software dev and copilot is sorta ok sometimes, but also calls my code a hack every time I start a comment and that hurts my feelings.
I used it once to write a polite "fuck off" letter to an annoying customer, and tried to see how it would revise a short story. The first one was fine, but using it with a story just made it bland, and simplified a lot of the vocabulary. I could see people using it as a starting point, but I can't imagine people just using whatever it spits out.
just made it bland, and simplified
Not always, but for the most part, you need to tell it more about what you’re looking for. Your prompts need to be deep and clear.
"Change it to a relaxed tone, but make it make me feel emotionally invested, 10th grade reading level, add descriptive words that fit the text, throw in an allegory and some metaphors." The more you tell it, the more it'll do. It's not creative; it's just making the text fit whatever you ask it to do. If you don't give enough direction, you'll just get whatever the random noise rolls, which isn't always what you're looking for. It's not uncommon to need to write a whole paragraph about what you want from it. When I'm asking it for something creative, sometimes it takes half a dozen change requests. Once in a while, it'll be so far off base that I'll clear the conversation and just try again. The way the randomness works, it will likely give you something completely different on the next try.
My favorite thing to do is give it a proper outline of what I need it to write, and set the voice, tone, objective, and complexity. Whatever it gives back, I spend a good solid paragraph critiquing. When it's >80% of the way to how I like it, I take the text and do copy edits on it until I'm satisfied.
It’s def not a magic bullet for free work. But it can let me produce something that looks like I spent an hour on it when I spent 20 minutes, and that’s not nothing.
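For the curious, here's roughly what that "outline plus voice, tone, objective, complexity" workflow looks like if you drive it through the API instead of the chat window. This is only a minimal sketch assuming the OpenAI Python SDK; the model name, brief, and outline are placeholders I made up, not anything from the comment above.

```python
# Rough sketch: pass a detailed brief and outline, get a first draft back,
# then a human critiques and copy-edits the result. Assumes `pip install openai`
# and an OPENAI_API_KEY in the environment; all content below is hypothetical.
from openai import OpenAI

client = OpenAI()

BRIEF = """
Voice: first person, relaxed but emotionally invested.
Tone: warm, 10th-grade reading level.
Objective: a ~300-word intro for a fundraising newsletter.
Constraints: include one allegory and a couple of metaphors that fit the text.
"""

OUTLINE = """
1. Hook: the volunteer who showed up in the rain.
2. What the program actually does.
3. Ask: why this month's donations matter.
"""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any chat-capable model
    messages=[
        {"role": "system", "content": "You are a careful copywriter. Follow the brief exactly."},
        {"role": "user", "content": f"Brief:\n{BRIEF}\nOutline:\n{OUTLINE}\nWrite the draft."},
    ],
)

draft = response.choices[0].message.content
print(draft)
# From here the human does the last pass: critique, request changes, then copy-edit by hand.
```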
Never explored it at all until recently, I told it to generate a small country tavern full of NPCs for 1st edition AD&D. It responded with a picturesque description of the tavern and 8 or 9 NPCs, a few of whom had interrelated backgrounds and little plots going on between them. This is exactly the kind of time-consuming prep that always stresses me out as DM before a game night. Then I told it to describe what happens when a raging ogre bursts in through the door. Keeping the tavern context, it told a short but detailed story of basically one round of activity following the ogre’s entrance, with the previously described characters reacting in their own ways.
I think that was all it let me do without a paid account, but I was impressed enough to save this content for a future game session and will be using it again to come up with similar content when I’m short on time.
My daughter, who works for a nonprofit, says she uses ChatGPT frequently to help write grant requests. In her prompts she even tells it to ask her questions about any details it needs to know, and she says it does, and incorporates the new info to generate its output. She thinks it’s a super valuable tool.
It’s changed my job: I now have to develop stupid AI products.
It has changed my life: I now have to listen to stupid AI bros.
My outlook: it's for the worst. If the LLM suppliers can make good on the promises they make to their business customers, we're fucked. And if they can't, then this was all a huge waste of time and energy.
Alternative outlook: if this was a tool given to the people to help their lives, then that’d be cool and even forgive some of the terrible parts of how the models were trained. But that’s not how it’s happening.
After 2 years it's quite clear that LLMs still don't have any killer feature. The industry marketing was already talking about skyrocketing productivity, but in reality very few jobs have changed in any noticeable way, and LLMs are mostly used for boring or bureaucratic tasks, which usually makes them even more boring or useless.
Personally I have subscribed to Kagi Ultimate, which gives access to an assistant based on various LLMs, and I use it to generate snippets of code that I use for doing labs (training), like AWS policies, or to build commands based on CLI flags, small things like that. For code it gets things wrong very quickly, and anyway I find it much harder to re-read and unpack verbose code generated by others compared to simply writing my own. I don't use it for anything that has to do with communication; I find it unnecessary and disrespectful, since it's quite clear when the output is from an LLM.
For these reasons, I generally think it's a potentially useful nice-to-have tool, nothing revolutionary at all. Considering the environmental harm it causes, I am really skeptical the value is worth the damage. I am categorically against those people in my company who want to introduce "AI" (currently banned) for anything other than documentation lookup and similar tasks. In particular, I really don't understand how obtuse people can be, thinking that email and presentations are good use cases for LLMs. The last thing we need is even longer useless communication, with LLMs on both sides producing or summarizing bullshit. I can totally see, though, that some people can more easily envision shortcutting bullshit processes via LLMs than simply changing or removing them.
Other than endless posts from the general public telling us how amazing it is, peppered with decision makers using it to replace staff and then the subsequent news reports about how it told us to eat rocks, or some variation thereof, there's been no impact whatsoever on my personal life.
In my professional life as an ICT person with over 40 years of experience, it's helped me identify which people understand what it is and, more specifically, what it isn't (intelligent), and respond accordingly.
The sooner the AI bubble bursts, the better.
I fully support AI taking over stupid, meaningless jobs if it also means the people that used to do those jobs have financial security and can go do a job they love.
Software developer Afas has decided to give certain employees one day a week off with pay, and let AI do their job for that day. If that is the future AI can bring, I’d be fine with that.
The caveat is that that money has to come from somewhere, so their customers will probably foot the bill, meaning that other employees elsewhere will get paid less.
But maybe AI can be used to optimise business models, make better predictions. Less waste means less money spent on processes which can mean more money for people. I then also hope AI can give companies better distribution of money.
This of course is all what stakeholders and decision makers do not want for obvious reasons.
The thing that’s stopping anything like that is that the AI we have today is not intelligence in any sense of the word, despite the marketing and “journalism” hype to the contrary.
ChatGPT is predictive text on steroids.
Type a word on your mobile phone, then keep tapping the next predicted word and you’ll have some sense of what is happening behind the scenes.
The difference between your phone keyboard and ChatGPT? Many billions of dollars and unimaginable amounts of computing power.
It looks real, but there is nothing intelligent about the selection of the next word. It just has much more context to guess the next word and has many more texts to sample from than you or I.
There is no understanding of the text at all, no true or false, right or wrong, none of that.
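To make that concrete, here's a toy version of the "keep tapping the predicted word" idea. This is just a minimal sketch in Python that I'm adding for illustration; real models predict over tokens with a neural network and sample from a probability distribution rather than counting word pairs, but the point about there being no understanding in the next-word selection is the same.

```python
# Toy next-word predictor: count which word follows which in some sample text,
# then repeatedly pick the most frequent continuation. No understanding,
# no true or false, just frequencies.
from collections import Counter, defaultdict

sample = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug the dog chased the cat"
).split()

# Count word-pair frequencies
follows = defaultdict(Counter)
for current, nxt in zip(sample, sample[1:]):
    follows[current][nxt] += 1

# "Keep tapping the next predicted word"
word = "the"
sentence = [word]
for _ in range(8):
    if not follows[word]:
        break
    word = follows[word].most_common(1)[0][0]  # most frequent continuation
    sentence.append(word)

print(" ".join(sentence))  # prints something like: the cat sat on the cat sat on the
```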
AI today is Assumed Intelligence
Arthur C Clarke says it best:
“Any sufficiently advanced technology is indistinguishable from magic.”
I don't expect this to be solved in my lifetime, and I believe that the current methods of "intelligence" are too energy-intensive to be scalable.
That's not to say that machine learning algorithms are useless; there are significant positive and productive tools around, ChatGPT and its large language model siblings notwithstanding.
Source: I have 40+ years experience in ICT and have an understanding of how this works behind the scenes.
I think you’re right. AGI and certainly ASI are behind one large hurdle: we need to figure out what consciousness is and how we can synthesize it.
As Qui-Gon Jinn said to Jar Jar Binks: the ability to speak does not make you intelligent.
we need to figure out what consciousness is
Nah, "consciousness" is just a buzzword with no concrete meaning. The path to AGI has no relevance to it at all. Even if we develop a machine just as intelligent as human beings, maybe even more so, one that can solve any arbitrary problem just as efficiently, mystics will still be arguing over whether or not it has "consciousness."
Edit: You can downvote if you want, but I notice none of you have any actual response to it, because you ultimately know it is correct. Keep downvoting, but not a single one of you will actually reply and tell me how we could concretely distinguish between something that is "conscious" and something that isn't.
Even if we construct a robot that can fully replicate all behaviors of a human, you will still be there debating over whether or not it is "conscious," because you have not actually given the word a concrete meaning that would let us identify whether something has it or not. It's just a placeholder for vague mysticism, like "spirit" or "soul."
I recall a talk from Daniel Dennett where he discussed an old popular movement called the "vitalists." The vitalists used "life" in a very vague, meaningless way as well; they would insist that even if we understood how living things work mechanically and could reproduce them, they would still not be considered "alive," because we don't understand the "vital spark" that actually makes them "alive." They would just be imitations of living things without the vital spark.
The vitalists refused to ever concretely define what the vital spark even was; it was just a placeholder for something vague and mysterious. As we understood more about how life works, vitalists were taken less and less seriously, until eventually becoming largely fringe. People who talk about "consciousness" are also going to become fringe as we continue to understand neuroscience and intelligence, if scientific progress continues, that is. Although this will be a very long-term process, maybe taking centuries.
we need to figure out what consciousness is and how to synthesize it
We don’t know what it is. We don’t know how it works. That is why
“consciousness” is just a buzzword with no concrete meaning
You’re completely correct. But you’ve gone on a very long rant to largely agree with the person you’re arguing against. Consciousness is poorly defined and a “buzzword” largely because we don’t have a fucking clue where it comes from, how it operates, and how it grows. When or if we ever define that properly, then we have a launching off point to compare from and have some hope of being able to engineer a proper consciousness in an artificial being. But until we know how it works, we’ll only ever do that by accident, and even that is astronomically unlikely.
We don’t know what it is. We don’t know how it works. That is why
If you cannot tell me what you are even talking about then you cannot say “we don’t know how it works,” because you have not defined what “it” even is. It would be like saying we don’t know how florgleblorp works. All humans possess florgleblorp and we won’t be able to create AGI until we figure out florgleblorp, then I ask wtf is florgleblorp and you tell me “I can’t tell you because we’re still trying to figure out what it is.”
You’re completely correct. But you’ve gone on a very long rant to largely agree with the person you’re arguing against.
If you agree with me why do you disagree with me?
Consciousness is poorly defined and a “buzzword” largely because we don’t have a fucking clue where it comes from, how it operates, and how it grows.
You cannot say we do not know where it comes from if “it” does not refer to anything because you have not defined it! There is no “it” here, “it” is a placeholder for something you have not actually defined and has no meaning. You cannot say we don’t know how “it” operates or how “it” grows when “it” doesn’t refer to anything.
When or if we ever define that properly
No, that is your first step, you have to define it properly to make any claims about it, or else all your claims are meaningless. You are arguing about the nature of florgleblorp but then cannot tell me what florgleblorp is, so it is meaningless.
This is why “consciousness” is interchangeable with vague words like “soul.” They cannot be concretely defined in a way where we can actually look at what they are, so they’re largely irrelevant. When we talk about more concrete things like intelligence, problem-solving capabilities, self-reflection, etc, we can at least come to some loose agreement of what that looks like and can begin to have a conversation of what tests might actually look like and how we might quantify it, and it is these concrete things which have thus been the basis of study and research and we’ve been gradually increasing our understanding of intelligent systems as shown with the explosion of AI, albeit it still has miles to go.
However, when we talk about “consciousness,” it is just meaningless and plays no role in any of the progress actually being made, because nobody can actually give even the loosest iota of a hint of what it might possibly look like. It’s not defined, so it’s not meaningful. You have to at least specify what you are even talking about for us to even begin to study it. We don’t have to know the entire inner workings of a frog to be able to begin a study on frogs, but we damn well need to be able to identify something as a frog prior to studying it, or else we would have no idea that the thing we are studying is actually a frog.
You cannot study anything without being able to identify it, which requires defining it at least concretely enough that we can agree whether it is there or not, and that the thing we are studying is actually the thing we aim to study. Why should I believe your florgleblorp, sorry, I mean "consciousness," even exists if you cannot even tell me how to identify it? It would be like if someone insisted there is a florgleblorp hiding in my room. Well, I cannot distinguish between a room with or without a florgleblorp, so by Occam's razor I opt to disbelieve in its existence. Similarly, if you cannot tell me how to distinguish between something that possesses this "consciousness" and something that does not, how to actually identify it in reality, then by Occam's razor I opt to disbelieve in its existence.
It is entirely backwards, spiritualist thinking, popularized by all the mystics, to insist that we need to study something before they can even specify what it is, in order to figure out what it is later. That is the complete reversal of how anything works and is routinely used by charlatans to justify pseudoscientific "research." You have to specify what is being talked about first.
and let AI do their job for that day.
What? How does that work?
It writes all the bugs so the engineer can fix them over the following 4 days
Usually these tasks are repetitive, scriptable. I don’t know exactly what happens but I suppose AI will just cough up a lot of work and employees come in on Monday and just have to check it. In some cases that would be more work than just making it yourself but this is a first step at least.
For work, I teach philosophy.
The impact there has been overwhelmingly negative. Plagiarism is more common, student writing is worse, and I need to continually explain to people that an AI essay just isn't their work.
Then there’s the way admin seem to be in love with it, since many of them are convinced that every student needs to use the LLMs in order to find a career after graduation. I also think some of the administrators I know have essentially automated their own jobs. Everything they write sounds like GPT.
As for my personal life, I don’t use AI for anything. It feels gross to give anything I’d use it for over to someone else’s computer.
My son is in a PhD program and is a TA for a geophysics class that's mostly online, so he does a lot of grading of assignments/tests. The number of things he gets that are obviously straight out of an LLM is really disgusting. Like sometimes they leave the prompt in. Sometimes they submit it when the LLM responds that it doesn't have enough data to give an answer and refers to ways the person could find out. It's honestly pretty sad.
Main effect is lots of whinging on Lemmy. Other than that, minimal impact.
I got into Linux right around when it was first happening, and I don't think I would've made it through my own noob phase if I didn't have a friendly robot to explain to me all the stupid mistakes I was making while re-training my brain to think in Linux.
Probably a very friendly expert or mentor, or even just a regular established Linux user, could've done a better job; the AI had me do weird things semi-often. But I didn't have anyone in my life who liked Linux, let alone had time to be my personal mentor in it, so the AI was a decent solution for me.
As a software developer, the one usecase where it has been really useful for me is analyzing long and complex error logs and finding possible causes of the error. Getting it to write code sometimes works okay-ish, but more often than not it’s pretty crap. I don’t see any use for it in my personal life.
I think its influence is negative overall. Right now it might be useful for programming questions, but that's only the case because it's fed with human-generated content from sites like Stack Overflow. Now those sites are slowly dying out due to people using ChatGPT instead, and this will have the inverse effect: in the future, AI will have less useful training data, which means it'll become less useful for future problems, while having effectively killed those useful sites in the process.
Looking outside of my work bubble, its effect on academia and learning seems pretty devastating. People can now cheat themselves towards a diploma with ease. We might face a significant erosion of knowledge and talent with the next generation of scientists.
I wish more people understood this. It's short-term, mediocre gains at the cost of a huge long-term loss, like Stack Overflow.
I have a gloriously reduced monthly subscription footprint and application footprint because of all the motherfuckers that tied ChatGPT or other AI into their garbage and updated their terms to say they were going to scan my private data with AI.
And, even if they pull it, I don’t think I’ll ever go back. No more cloud drives, no more ‘apps’. Webpages and local files on a file share I own and host.
I have a book that I’m never going to write, but I’m still making notes and attempting to organize them into a wiki.
Using almost natural conversation, I can explain a topic to the GPT, make it ask me questions to get me to write more, then have it summarize everything back to me in a format suitable for the wiki. In longer conversations, it will also point out possible connections between unrelated topics. It does get things wrong sometimes, though, such as forgetting which faction a character belongs to.
I've noticed that GPT-4o is better for exploring new topics as it has more creative freedom, and o1 is better for combining multiple fragmented summaries as it usually doesn't make shit up.
AI has completely killed my desire to teach writing at the community college level.
Agreed. I started steps needed to be certified as an educator in my state but decided against it. ChatGPT isn’t the only reason, but it is a contributing factor. I don’t envy all of the teachers out there right now who have to throw out the entire playbook of what worked in the past.
And I feel bad for students like me who really struggled with in-class writing by hand in a limited amount of time, because that is what everyone is resorting to right now.
It must be something like (only worse) what math teachers felt when the pocket calculator became cheap and easily available. Being able to use a calculator doesn't mean you can do math, but people conflate the two.
It cost me my job (partially). My old boss swallowed the AI pill hard and wanted everything we did to go through GPT. It was ridiculous and made it so things that would normally take me 30 seconds now took 5-10 minutes of “prompt engineering”. I went along with it for a while but after a few weeks I gave up and stopped using it. When boss asked why I told her it was a waste of time and disingenuous to our customers to have GPT sanitize everything. I continued to refuse to use it (it was optional) and my work never suffered. In fact some of our customers specifically started going through me because they couldn’t stand dealing with the obvious AI slop my manager was shoveling down their throat. This pissed off my manager hard core but she couldn’t really say anything without admitting she may be wrong about GPT, so she just ostracized me and then fired me a few months later for “attitude problems”.
I'm sorry.
managers tend to be useless fucking idiots.
Curious - what type of job was this? Like, how was AI used to interact with your customers?
It was just a small e-commerce store. Online sales and shipping. The boss wanted me to run emails I would send to vendors through GPT, and any responses to customer complaints were put through GPT. We also had a chat function on our site for asking questions and whatnot, and the boss wanted us to copy the customer's chat into GPT, get a response, rewrite it if necessary, and then paste GPT's response into our chat. It was so ass backwards I just refused to do it. Not to mention it made the response times super high, so customers were just leaving rather than wait (which of course was always the employees' fault).
That sounds as asinine as you seem to think it was. Damn dude. What a dumb way to do things. You’re better off without that stupidity in your life