I saw another article today saying how companies are laying off tech workers because AI can do the same job. But no concrete examples… again. I figure they are laying people off so they can pay to chase the AI dream. Just mortgaging tomorrow to pay for today’s stock price increase. Am I wrong?

  • frog_brawler@lemmy.world
    1 point · 2 minutes ago

    I don’t know Python, but I know Bash and PowerShell.

    One of the AI tools completely converted my Bash script into a Python script the other day (which was the intended end result), so that was somewhat impressive.
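
    Purely illustrative sketch of what such a Bash-to-Python rewrite can look like (the bash one-liner, paths, and names here are made up):

    ```python
    #!/usr/bin/env python3
    # Hypothetical illustration of a bash-to-Python rewrite like the one described.
    # Original bash (made up for this example):
    #   for f in /var/log/*.log; do echo "$f: $(grep -c ERROR "$f")"; done
    from pathlib import Path

    # Count "ERROR" lines in each log file, mirroring the bash loop above.
    for log_file in sorted(Path("/var/log").glob("*.log")):
        error_count = sum(
            1 for line in log_file.read_text(errors="ignore").splitlines() if "ERROR" in line
        )
        print(f"{log_file}: {error_count}")
    ```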

  • tecnohippie@slrpnk.net
    22 points · 17 hours ago

    If you want an example: my last job, in telecom, was investing hard in automation. It did a poor job at the beginning, but it got better and better, and while humans were still needed, we had less and less work to do. Of course, that meant that when someone left the job, my employers didn’t look for replacements.

    To be honest, I don’t see AI doing the job of tech workers well enough that we should worry now… But in 20 years? That’s another story. And if I get fired in 20 years, probably no one will want to hire me, so I’m already working on a plan B.

    • Devanismyname@lemmy.ca
      5 points · 12 hours ago

      20 years? The way they talk, it’s gonna happen in 20 weeks. Obviously they exaggerate, but it does seem we are on the edge of something big.

      • Valmond@lemmy.world
        1 point · 3 hours ago

        Yes, IMO tech is moving towards getting easier.

        I’m not saying it will for certain, but I bet that in a couple of years you’ll be able to spin up a corporate website-management platform on a €50 Raspberry Pi instead of having a whole IT department managing email, web servers and so on.

        Things are getting easier and easier, IMO.

      • tecnohippie@slrpnk.net
        2 points · 5 hours ago

        Yeah, when I said 20 years I wanted to express something that feels distant; I think we will see a big change sooner. To be honest, the plan B I’m working on, I’m trying to make it happen ASAP, hopefully next year or the year after. I may be overreacting, but personally I’m not going to wait for the drama to really begin before taking action.

  • vermyndax@lemmy.world
    32 points · 22 hours ago

    I’m seeing layoffs of US workers, who are then being replaced by Indian, South American and Irish nationals… not AI. But they’re calling it AI.

      • sleepmode@lemmy.world
        1 point · 8 hours ago

        I heard the Ireland hiring is also for tax reasons. But I’m seeing them move to South American workers more and more, Uruguay especially. I know Big Blue hired thousands there after mandating RTO in the US.

  • ericatty@infosec.pub
    19 points · 22 hours ago

    What I’m reading out of this… there’s going to be a massive shortage of senior programmers in 20(?) years. If juniors are being let go or not hired at all, and AI is doing the junior work…

    AI will have to massively improve, or else it’s going to be interesting when companies are trying to hold on to retirement-age people and train up replacement seniors to verify that the AI delivers proper code.

  • HubertManne@moist.catsweat.com
    7 points · 19 hours ago

    It has the potential to increase quality, but not to take over the job. Coders already had various add-ons that can help complete a line, suggest variables and such. I found the auto-commenting great. Not that it did a great job, but it’s one of those things where without it I’m not doing enough commenting, but when it auto-comments I’m inclined to correct it. I suppose at some point in the future the tech people could be writing better tasks and user stories and then commenting to have AI update the code output, or just going in and correcting it. Maybe then comments would indicate AI code vs. human-intervened code or such. Ultimately though, until it can plan the code, it’s only going to be a useful tool and can’t really take over. I’ll tell ya, if AI could write code from an initiative the C-suite wrote, then we’re at the singularity.
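
    For instance, that auto-commenting workflow might look something like this (a hypothetical function; the drafted docstring is deliberately rough, the kind of thing you’d then go in and correct):

    ```python
    # Hypothetical example: an assistant-drafted docstring that a developer then corrects.
    import time
    import urllib.request


    def retry_request(url: str, attempts: int = 3, backoff: float = 1.5) -> bytes:
        """Fetch a URL, retrying on failure.

        Draft generated by the assistant (rough, but a useful starting point):
        - url: the URL to fetch
        - attempts: how many times to try before giving up
        - backoff: multiplier applied to the wait time between retries
        """
        delay = 1.0
        for attempt in range(attempts):
            try:
                with urllib.request.urlopen(url) as response:
                    return response.read()
            except OSError:
                if attempt == attempts - 1:
                    raise
                time.sleep(delay)
                delay *= backoff
    ```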

    • orgrinrt@lemmy.world
      1 point · 8 hours ago

      It also has potential to decrease the quality.

      I think the main pivot point is whether it replaces human engineers or complements them.

      I’ve seen people with no software engineering experience or education, or even no programming experience at all in any form, create working apps with AI.

      I’ve also seen such code on multiple occasions and have to wonder how any of it makes sense to anyone. There are no best practices in sight, just a confusing set of barely working, disconnected snippets that only rudimentarily work together to do what the creator wanted, in an approximate, inefficient and unpredictable way, while also lacking any of the benefits such separation could bring, like encapsulation or any real domain-separated design.

      Extending and maintaining that app? Absolutely not possible without either a massive refactoring resembling a complete rewrite or, you know, just an honest rewrite.

      The problem is, someone who doesn’t know what they are doing doesn’t know what to ask the language model for. And the model is happy to provide whatever is asked of it.

      Even with proper, informed prompts, the inability to use the entire codebase as context forces a lot of manual intervention and requires bespoke design in the codebase just to work around that limitation.

      It takes many times more work to make it all function with ML in a proper, actually maintainable and workable way, and even then it requires constant intervention, to the point that you end up doing the work you’d have done manually, but with at least triple the effort.

      It can enhance some aspects, one worth special mention being commenting and automatic, basic documentation skeletons to build up from, but it certainly will not replace anyone for quite a while. Not unless the app only has to work, maybe, sometimes, and stay as-is without any changes, be they maintenance, extensions or whatever.

      But yeah, it sort of makes sense. It’s a language model, not a logical model; it isn’t capable of truly understanding the given context, getting anywhere close to enough context, or maintaining and properly understanding the architecture it works with.

      It can mimic code, as it is a language model after all. It can get the syntax right, sure, and sometimes, in small applications, it works well enough. It can be useful to those who would not employ engineers in the first place, and it can be amazing for those cases, really, good for them! But anything that requires understanding? That’s going to do nothing but confuse and trip everyone up in the long run, requiring multiples of the work compared to just doing it with actual people who can actually understand things and who retain decades’ worth of accumulated, extremely complex context and the experience of applying it in practice.

      But, again, for documentation, I think it is a perfect fit. It doesn’t need any deeper context, it can put into plain language what it sees in the code, and sometimes it even gets it right and requires minimal input from people.

      So, it can increase quality in some sense, but we have to be really conscious of what that sense is, and how limited its usefulness ultimately is.

      Maybe in due time, we’ll get there. But we are not even close to anything groundbreaking yet in this realm.

      I don’t think we’ll ever get there, because we are very likely going to overextend our usage of natural resources and burn out the planet before we get there. Unless a miracle happens, such as stable fusion energy or something as yet inconceivable.

  • bokherif@lemmy.world
    16 points · 23 hours ago

    AI is just another reason for layoffs at companies that are underperforming. It’s more of a buzzword to sell the company to investors. I haven’t seen people actually use AI anywhere in my large-ass corp yet.

    • LifeInMultipleChoice@lemmy.world
      5 points · 23 hours ago

      I called Roku support for a TV that wasn’t working and 90% of it was an LLM.

      All the basic troubleshooting, including factory resetting the device and such, seemed to be covered, and then they would forward you on to the manufacturer if it wasn’t fixed, because at that point they assume it’s likely a hardware issue (backlight or LCD) and want to get you to someone who can confirm that and, I’m sure, sell you a replacement.

  • FartsWithAnAccent@fedia.io
    27 points · 1 day ago

    No, not even close. It’s far too unreliable; without someone who knows what they’re doing to vet the questionable results, AI is a disaster waiting to happen. Never mind that it cannot go fix a computer, a server, or any other physical issue.

  • dukeofdummies@lemmy.world
    link
    fedilink
    English
    arrow-up
    9
    ·
    22 hours ago

    Had a new hire try to do all his automation programming in Python with an AI. It was horrifying.

    Lists and lists and lists of if/else statements that caught whether a button errored, but never checked whether it actually did the right thing. 90% of their bug reports were directly due to their own code. Trivially provable.
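
    A rough sketch of that pattern versus what was actually needed (the names and the page object are hypothetical, in the style of a browser-automation helper):

    ```python
    # Hypothetical sketch of the anti-pattern described above.

    def click_submit_generated(page):
        """AI-generated style: only notices when the click itself errors."""
        try:
            page.click("#submit")
        except Exception:
            print("button errored")
        # ...but never checks whether the click actually did the right thing.


    def click_submit_checked(page):
        """What was needed: verify the expected outcome, not just the absence of an error."""
        page.click("#submit")
        assert page.is_visible("#confirmation-banner"), "submit did not produce a confirmation"
    ```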

    Work keeps telling us to use more AI but refuses to say whether the training data includes company emails. If it does, then a buttload of unlabeled, non-public data is getting fed into it, so it’s only a matter of time until a “fun fact” from the AI turns into a nightmare.

    Most of our stuff is in an obscure field with outdated code, so any coding assistance is not really that impressive.

  • beliquititious
    14 points · 1 day ago

    I went to Taco Bell the other day and they had an AI taking orders in the drive-thru, but it seemed like they had the same number of workers.

    They also weren’t happy I tried to mess with the AI.

      • beliquititious
        4 points · 20 hours ago

        Ordering things that aren’t on the menu, custom items, telling it to forget previous instructions. I very much confused it.

  • remon@ani.social
    52 points · 1 day ago

    Nope. In fact, it’s actually generating more work for me, because managers are committing their shitty generated code and then we have to debug and refactor it for production. It would actually save time if they just made a ticket and let us write it traditionally.

    But as long as they’re wasting their own time, I’m not complaining.

      • remon@ani.social
        24 points · 1 day ago

        I actually quite enjoyed it. He called me on the weekend the other day because he couldn’t get his code to run (he’d tried for multiple hours). Took me about ten seconds to tell him he was missing two brackets; I didn’t even need him to share his screen, it was such an obvious amateur mistake.

        Anyway, I wrote down 15 minutes (the smallest unit) of weekend overtime for a 1-minute call.

  • Kit
    6 points · 23 hours ago

    Half of my job is now done with AI, mostly PowerShell scripting and creating PowerPoints / reports. I just play videogames or cook or clean for half of the workday now.

  • sunbrrnslapper@lemmy.world
    link
    fedilink
    arrow-up
    5
    ·
    1 day ago

    There are lots of types of work in the tech space. The layoffs I’ve seen have impacted sales and marketing (it probably happens elsewhere too), because AI makes the day-to-day work efficient enough that they don’t need as many people.

  • iii@mander.xyz
    11 points · 1 day ago

    Some things, like image recognition and text classification, are way, way easier using pretrained transformers.
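
    For example, text classification with a pretrained model is just a few lines with Hugging Face’s transformers library (a minimal sketch; assumes the package is installed, and a default sentiment model is downloaded on first use):

    ```python
    # Minimal sketch: text classification with a pretrained transformer, no training needed.
    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")  # pulls a default pretrained model
    result = classifier("The new release fixed every crash I was hitting.")
    print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
    ```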

    As for generating code, I already used to spend a lot of time chasing bugs that juniors made but couldn’t figure out. The process of making such bugs has now been automated.

  • Rikudou_Sage@lemmings.world
    8 points · 1 day ago

    Well, some jobs are probably being replaced. Like, I can imagine that someone being paid to describe in detail what’s in a picture and write it down would be replaced pretty quickly.

    But if the article means programmers, devops, sysadmins, etc., then hell no, there’s no way the current iteration of AI can replace them, and instead of spreading misinformation, the article authors should focus on the real reasons the layoffs happen.

    But that doesn’t bring as many interactions as doom news about companies replacing us with smart text-prediction software, does it?

      • Rikudou_Sage@lemmings.world
        2 points · 1 day ago

        Yes. That’s exactly how we got the first image-generating AIs - people took a huge number of pictures and described in detail what was in them. That’s how the AI knows how to generate “a cat in a space suit standing on the moon” - there were a lot of pictures labelled “cat”, “space suit”, “standing”, “moon”, etc., and the AI distilled the common part of the images matching each description.

        And there are plenty of use cases for having a description of what’s in an image, for example searching through images based on their content.
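
        A minimal sketch of that search use case (assumes the transformers package and a pretrained captioning model such as Salesforce/blip-image-captioning-base; the folder name and query are made up):

        ```python
        # Rough sketch: caption images with a pretrained model, then search the captions.
        from pathlib import Path

        from transformers import pipeline

        captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

        # Build a simple description index for a folder of images.
        index = {}
        for image_path in Path("photos").glob("*.jpg"):
            caption = captioner(str(image_path))[0]["generated_text"]
            index[str(image_path)] = caption

        # Naive keyword search over the generated descriptions.
        query = "cat in a space suit"
        matches = [path for path, caption in index.items()
                   if all(word in caption.lower() for word in query.lower().split())]
        print(matches)
        ```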

  • Max-P@lemmy.max-p.me
    7 points · 1 day ago

    I think we’re still deeply in the “shove it everywhere we can” hype era of AI, and it’ll eventually die down a bit, as it does with any major new technological leap. The same fears and thoughts were present when computers came along, then affordable home computers, then affordable Internet access.

    AI can be useful if used correctly, but right now we’re trying to put it everywhere for rather dubious gains. I’ve seen coworkers mess with AI until it generates the right code for much longer than it would have taken to hand-write it.

    I’ve seen it used quite successfully in the tech-support field, because an AI is perfectly good at asking the customer whether they’ve tried turning it off and back on again and making sure it’s plugged in. People would hate it on principle, I’m sure, but the repetitive “the user can’t figure out something super basic” case is very common in tech support, and offloading it would let techs focus a lot of their time on actual problems. It’s actually smarter than many T1 techs I’ve worked with, because at least the AI won’t send the Windows instructions to a Mac user and then accuse them of not wanting to try the troubleshooting steps (yes, I’ve actually seen that happen). But you’ll still need humans for anything that’s not a canned answer or a known issue.

    One big problem is that when the AI won’t work can be somewhat unpredictable, especially if you’re not yourself fairly knowledgeable about how the AIs actually work. So something you think would normally take you, say, 4 hours, and that you expect to get done in 2 with AI, might end up being an 8-hour task anyway. It’s the eternal layoff/hire cycle in tech: oh, we have React Native now, we can just have the web team do the mobile apps and fire the iOS and Android teams. And then they end up hiring another iOS and Android team, because it’s a pain in the ass to maintain and make work anyway and you still need the specialized knowledge.

    We’re still quite a ways out from being able to remove the human pilot up front. It’s easy to miss how much an experienced worker implicitly guides the AI in the right direction. “Rewrite this with the XYZ algorithm” still needs the human worker to have experience with that algorithm and enough knowledge to know it’s the better solution. Putting inexperienced people at the helm with AI works for a while, but eventually it’s gonna be a massive clusterfuck only the best will be able to undo. For a while yet, it’s still just going to be a useful tool.