• very_well_lost@lemmy.world · 242 points · 1 month ago (edited)

    It’s time to stop taking any CEO at their word.

    Edit: scratch that, the time to stop taking any CEO at their word was 100 years ago.

      • Zorque@lemmy.world · 12 points · 1 month ago

        Considering how fractured medical billing is these days, often the techs contracted by your in-network doctor's office are actually out-of-network.

        Isn’t medical billing fun?

      • peopleproblems@lemmy.world · 8 points · 1 month ago

        Yeah, this might actually not be that far from reality. Computer vision already did a large amount of the heavy lifting, and with the massive push toward AI, it will take over the rest of us plebeians' healthcare.

  • sartalon@lemmy.world · 79 points · 1 month ago

    When that major drama unfolded with him getting booted then re-hired, it was super fucking obvious that it was all about the money, the data, and the salesmanship. He is nothing but a fucking tech-bro: part Theranos, part Musk, part SBF, part (whatever that pharma asshat was), and all fucking douchebag.

    AI is fucking snake oil and an excuse to scrape every bit of data like it’s collecting every skin cell dropping off of you.

    • Rogers@lemmy.ml · 30 points · 1 month ago (edited)

      I’d agree with the first part, but to say all AI is snake oil is just untrue and out of touch. There are a lot of companies that slap “AI” on literally anything, and I can see how that is snake oil.

      But real, innovative AI, everything from protein folding to robotics, is here to stay, good or bad. It’s already too valuable for governments to ignore. And AI is improving at a rate that I think most are underestimating (faster than Moore’s law).

      • kaffiene@lemmy.world · 6 points · 1 month ago

        I think part of the difficulty with these discussions is that people mean all sorts of different things by “AI”. Much of the current usage treats AI = LLMs, which changes the debate quite a lot.

        • Rogers@lemmy.ml · 2 points · 1 month ago

          No doubt LLMs are not the end-all be-all. That said, especially after seeing what the next-gen “thinking models” like o1 from “ClosedAI” OpenAI can do, even LLMs are going to get absurdly good. And they are getting faster and cheaper at a rate faster than my best optimistic guess two years ago; hell, even six months ago.

          Even if all progress stopped tomorrow on the software side, the benefits from purpose-built silicon would make them even cheaper and faster. And that purpose-built hardware is coming very soon.

          Open models are about 4-6 months behind in quality, but probably a lot closer (if not ahead) for small ~7B models that can be run locally on low- to mid-range consumer hardware.

          • kaffiene@lemmy.world · 5 points · 1 month ago

            I don’t doubt they’ll get faster. What I wonder is whether they’ll ever stop being so inaccurate. I feel like that’s a structural feature of the model.

            • keegomatic@lemmy.world · 1 point · 1 month ago

              May I ask how you’ve used LLMs so far? Because I hear that type of complaint from a lot of people who have tried to use them mainly to get answers to things, or maybe more broadly to replace their search engine, which is not what they’re best suited for, in my opinion.

                • keegomatic@lemmy.world · 1 point · 1 month ago

                  Personally, I’ve found that LLMs are best as discussion partners, to put it in the broadest terms possible. They do well for things you would use a human discussion partner for IRL.

                  • “I’ve written this thing. Criticize it as if you were the recipient/judge of that thing. How could it be improved?” (Then address its criticisms in your thing… it’s surprisingly good at revealing ways to make your “thing” better, in my experience)
                  • “I have this personal problem.” (Tell it to keep responses short. Have a natural conversation with it. This is best done spoken out loud if you are using ChatGPT; prevents you from overthinking responses, and forces you to keep the conversation moving. Takes fifteen minutes or more but you will end up with some good advice related to your situation nearly every time. I’ve used this to work out several things internally much better than just thinking on my own. A therapist would be better, but this is surprisingly good.)
                  • I’ve also had it be useful for various reasons to tell it to play a character as I describe, and then speak to the character in a pretend scenario to work out something related. Use your imagination for how this might be helpful to you. In this case, tell it to not ask you so many questions, and to only ask questions when the character would truly want to ask a question. Helps keep it more normal; otherwise (in the case of ChatGPT which I’m most familiar with) it will always end every response with a question. Often that’s useful, like in the previous example, but in this case it is not.
                  • etc.

                  For anything but criticism of something written, I find that the “spoken conversation” features are most useful. I use it a lot in the car during my commute.

                  For what it’s worth, in case this makes it sound like I’m a writer and my examples are only writing-related, I’m actually not a writer. I’m a software engineer. The first example can apply to writing an application or a proposal or whatever. Second is basically just therapy. Third is more abstract, and often about indirect self-improvement. There are plenty more things that are good for discussion partners, though. I’m sure anyone reading can come up with a few themselves.
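                  The first bullet above is easy to mechanize. Here is a minimal, non-authoritative sketch of that “criticize it as the recipient” pattern as a chat-completion payload; the helper name, model name, and prompt wording are my own illustrative assumptions, and you would send the payload through whichever client or provider you actually use.

```python
def critique_request(draft: str, judge: str, model: str = "gpt-4o-mini") -> dict:
    """Frame the model as the recipient/judge of a draft (hypothetical helper)."""
    return {
        "model": model,  # illustrative model name; substitute your own
        "messages": [
            {"role": "system",
             "content": (f"You are {judge}. Critique the following text as if "
                         "you were its intended recipient. Be specific about "
                         "how it could be improved.")},
            {"role": "user", "content": draft},
        ],
    }

# POST this dict to your provider's chat-completions endpoint,
# then address the criticisms in your draft and repeat.
payload = critique_request("Dear hiring manager, ...", "a skeptical hiring manager")
```

                  The loop (draft, get criticism, revise, resubmit) is the whole trick; the payload itself is just an ordinary system/user message pair.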

    • stringere@sh.itjust.works · 21 points · 1 month ago

      Martin Shkreli is the scumbag’s name you’re looking for.

      From Wikipedia: “He was convicted of financial crimes for which he was sentenced to seven years in federal prison, being released on parole after roughly six and a half years in 2022, and was fined over 70 million dollars.”

  • Soup@lemmy.cafe · 76 points · 1 month ago

    Yeah. It sucks that I got downvoted into irrelevance way back when this clown was first becoming worshipped by the tech bros.

    I don’t take pride in patting myself on the back, but I was fucking right all along about this douche.

    • FlorianSimon@sh.itjust.works · 6 points · 1 month ago (edited)

      The day of reckoning is approaching fast. May this teach a lesson to my fellow techies that tech billionaires aren’t any better than the other billionaires. I hope there won’t be another cryptoscam after LLMs 🤷‍♀️

      Or, if there’s another one, I hope that it won’t consume massive amounts of energy. If techbros only hurt themselves, I suppose it’s fine.

      • locuester@lemmy.zip · 1 point · 1 month ago

        Both crypto and LLMs are new, disruptive tech. The chaos around them is expected.

        Which cryptoscam are you referring to? There’s hundreds daily lol

        • jibbist@lemmy.world · 2 points · 1 month ago

          Crypto and blockchain are tech coming up with a solution that no one asked for. A blockchain is just a database that is (at best!) extremely energy-inefficient. Trust comes from the same sources as always (brand, marketing, advertising, social cues); being on a blockchain does not magically generate trust.

          And crypto’s biggest strength as an uncontrollable, decentralised store of wealth ignores the fact that you can only buy and sell it on marketplaces, which control and centralise it. So for nearly everyone involved it’s a pyramid scheme: those at the beginning persuading new people to join to prop up their assets’ profits.
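          To make “just a database” concrete, here is a toy sketch of the core data structure, using nothing beyond Python's standard library (the helper names are mine, and real chains add consensus, signatures, and much more). Each block is an ordinary record that also stores the previous block's hash, which makes tampering detectable, but nothing about the structure itself creates trust in what the records say.

```python
import hashlib
import json

def make_block(data: dict, prev_hash: str) -> dict:
    """A block is just a record plus the hash of the previous block."""
    block = {"data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

def verify(chain: list[dict]) -> bool:
    """Tampering with any block breaks its hash and every link after it."""
    for i, block in enumerate(chain):
        body = {"data": block["data"], "prev_hash": block["prev_hash"]}
        if block["hash"] != hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest():
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block({"tx": "genesis"}, prev_hash="0" * 64)
chain = [genesis, make_block({"tx": "alice->bob"}, genesis["hash"])]
assert verify(chain)
chain[0]["data"]["tx"] = "tampered"  # rewrite history...
assert not verify(chain)             # ...and verification fails
```

          Note that tamper-evidence is all you get: whether “alice->bob” corresponds to anything real is exactly the trust problem the surrounding comment describes.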

          • locuester@lemmy.zip · 1 point · 1 month ago

            Do you mind if I explain a little more about decentralized ledger technology to help you understand the tech, and correct some of your mistaken understanding?

  • sketelon@eviltoast.org · 57 points · 1 month ago

    Really? The guy behind the company called “Open” AI that has contributed the least to the open-source AI community, while constantly making grand claims and telling us we’re not ready to see what he’s got? We’re supposed to stop taking that guy’s word?

    Wow, thanks, journalists, what would we do without you.

    • 5dh@lemmy.zip · 14 points · 1 month ago

      Should your disappointment here really be pointed at the journalists?

    • MouseKeyboard@ttrpg.network · 4 points · 1 month ago

      People talk a lot about the genericisation of brand names, but the branding of generic terms like this really annoys me.

      I’ll use the example I first noticed. A few years ago, the Conservative government was under criticism for the minimum wage being well under a living wage. In response, they brought in the National Living Wage, which was an increase to the minimum wage, but still under the actual living wage. However, because of the branding, it makes criticising it for not meeting the actual living wage more difficult, as you have to explain the difference between the two, and as the saying goes, “if you’re explaining, you’re losing”.

  • droopy4096@lemmy.ca · 40 points · 1 month ago

    Not only does he burn through cash, he burns through resources, making life worse now for everybody: AI rivals crypto in resource wasting while contributing nothing to any improvements. I fail to see a “brighter future” for us through AI, as it is an energy-intensive, unsustainable endeavor for which we are woefully unprepared both materially (energy efficiency, semiconductor manufacturing/recycling, etc.) and psychologically (ethics, etc.). Yeah, grand on paper, terrible in reality.

    • TipRing@lemmy.world · 15 points · 1 month ago

      What is really annoying is that there are a lot of really good data-modeling applications; they are just in research areas. Generative AI is absolutely a waste of resources, but a ton of money and energy is spent on it instead of on the applications that are actually bearing fruit.

      • theneverfox@pawb.social · 1 point · 1 month ago

        Generative AI is definitely useful: it’s mighty putty. It fills in gaps and sticks things together wonderfully. It lets you easily do things that were near impossible before.

        It’s also best used sparingly.

    • tiny@midwest.social · 8 points · 1 month ago (edited)

      AI is worse than crypto. Most crypto projects now use proof of stake, which is far more resource-efficient than mining. Also, the mining that does happen usually happens where there is excess generation, instead of in Azure datacenters.
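      As a rough illustration of why that distinction matters (a toy sketch using only Python's standard library; real consensus protocols are vastly more involved, and the function names are mine): proof of work burns an expected ~16x more hashing per extra hex digit of difficulty, while proof of stake replaces the entire hashing race with a single weighted draw.

```python
import hashlib
import random

def mine(data: bytes, difficulty: int) -> tuple[int, int]:
    """Proof of work: try nonces until the hash starts with `difficulty`
    zero hex digits. Returns (winning_nonce, hashes_burned)."""
    nonce = 0
    while True:
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, nonce + 1
        nonce += 1

def pick_validator(stakes: dict[str, float], seed: int) -> str:
    """Proof of stake: one weighted random draw, no hashing race at all."""
    rng = random.Random(seed)
    names, weights = zip(*stakes.items())
    return rng.choices(names, weights=weights, k=1)[0]

# Expected hashes burned grow ~16x for each extra zero digit required,
# which is the energy cost; picking a validator is O(1) either way.
nonce, burned = mine(b"block", 2)
winner = pick_validator({"alice": 32.0, "bob": 64.0}, seed=7)
```

      The wasted hashes in `mine` are the whole point of proof of work (they make rewriting history expensive), which is exactly why it cannot be made cheap without changing the consensus model.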

      • droopy4096@lemmy.ca · 7 points · 1 month ago

        Some crypto learned to be efficient; others did not. We still have crypto-mining botnets. Crypto remains useless to humanity and very profitable for a few. Same with AI. Same with the stock market. Instead of producing something of value, we keep burning through resources while a select few enjoy a bonfire others have to fight to stay alive…

  • kinsnik@lemmy.world · 20 points · 1 month ago

    The techbros who think that with sufficiently advanced AI we could solve climate change are so stupid. Like, we might not have a perfect solution, but we have ideas on how to start making things better (less car-centric cities, less meat and animal products, more investment in public transport and solar), and they get absolutely ignored. Why would it be different when an AI gives the solution? Unless they want the “eat fat-free food and you will be thin” solution to climate change, in which we change absolutely nothing about our current situation but it is magically ecological.

    • jjjalljs@ttrpg.network · 6 points · 1 month ago

      There was a (fiction) book I read called “All the Birds in the Sky”. I really liked it. Highly recommend.

      One of the plot threads is a rich tech bro character that’s like “the world is doomed we need to abandon it for somewhere else. Better pour tons of resources into this sci-fi sounding project”. And I’m just screaming at the book “use that money for housing and transport and clean energy you absolute donkey”.

      There are a lot of well understood things we could be doing to make the world better, but they’re difficult for idiotic political reasons. Racism, nimbyism, emotional immaturity, etc.

  • aesthelete@lemmy.world · 18 points · 1 month ago (edited)

    It’s beyond time to stop believing and parroting that whatever would make your source the most money is literally true without verifying any of it.

  • sunzu2@thebrainbin.org · 16 points · 1 month ago

    I wonder what this clown’s daily PR budget is?

    Each one of these fake news stories is generally 15k a pop.

    Do you remember when crypto scammer Sam Bankman-Fried was running thousands daily, for years…

    Similar vibes here.