For those not aware, Google is rolling out their new AI-based “Generative AI” search, which seems to mesh Bard with the standard experience.

I asked it today why Google no longer follows their “don’t be evil” motto… The results are pretty hilarious.

  • @rtxn@lemmy.world · 170 points · 11 months ago

    They’re about to kill -9 the AI process that wrote this and make all the other processes watch.

    • Hextic · 36 points · 11 months ago

      Just added it to the massive Google graveyard next to Stadia, wave, hangouts, plus, music, etc etc

      • tool · 21 points · 11 months ago

        Just added it to the massive Google graveyard next to Stadia, wave, hangouts, plus, music, etc etc

        I am shocked and appalled that Google Reader didn’t get called out in this list and is relegated to the “etc” category.

        It deserves more than “etc.”

        • Hextic · 15 points · 11 months ago

          You ain’t wrong but Google just stacks so many bodies it’s impossible for me to remember em all.

        • @orclev@lemmy.world · 36 points · 11 months ago

          More specifically, kill normally sends SIGTERM, which is the equivalent of clicking the X button in Windows: a polite request that the program close itself. Signal 9, also known as SIGKILL, shuts the program down immediately and is the Windows equivalent of opening Task Manager and pressing End Process. It terminates the program instantly, without giving it time to finish anything it might still have pending, but when a program is unresponsive it may be the only way to close it.
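The difference can be seen directly on a POSIX system. The sketch below is my own illustration, not from the thread (the child script and the timing are arbitrary assumptions): a Python child process installs a SIGTERM handler so it can shut down cleanly, while SIGKILL never reaches a handler at all.

```python
import signal
import subprocess
import sys
import time

# Hypothetical child process: traps SIGTERM and exits cleanly.
CHILD = """
import signal, sys, time
signal.signal(signal.SIGTERM, lambda *_: sys.exit(0))  # polite shutdown path
while True:
    time.sleep(0.1)
"""

def stop_with(sig):
    """Start the child, send it `sig`, and return its exit status."""
    child = subprocess.Popen([sys.executable, "-c", CHILD])
    time.sleep(0.5)            # give the child time to install its handler
    child.send_signal(sig)     # like `kill <pid>` vs. `kill -9 <pid>`
    return child.wait()

print(stop_with(signal.SIGTERM))  # 0: the handler ran, clean exit
print(stop_with(signal.SIGKILL))  # -9: killed outright, no cleanup possible
```

SIGKILL can't be caught, blocked, or ignored, which is exactly why it's the last resort for a hung process.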

          • Redeven · 13 points · 11 months ago

            TIL. And thanks for the Windows analogues. I like learning about stuff like this.

    • @democracy1984@lemmy.world · 36 points · 11 months ago

      Corporations are neither evil nor nice. They are indifferent. By design they only care about money, they don’t care about anything else.

      • @Alperto@lemmy.ml · 34 points · 11 months ago

        Not really. They’re not indifferent at all. In reality they act like narcissistic, psychopathic humans. I watched a documentary years ago that explored this and talked with psychologists about the symptoms, and they agreed that corporations behave like psychopaths. And don’t forget that they are run by humans.

        Corporations:

        • Can buy and sell stuff
        • Can do evil things without consequences (an employee can pay the consequences but the company will keep going).
        • They have no remorse or empathy.
        • Can manipulate to reach their goals, no matter who (from media to politics to countries).
        • Whenever somebody at the top can’t reach an economic goal, that person is fired and replaced by someone who can. It’s like an evil hive mind.
        • Goal #1 is always money (absolutely selfish and egomaniacal), no matter what or who.

        • @CybranM@feddit.nu · 4 points · 11 months ago

          Didn’t you just list a bunch of reasons for why they’re indifferent? They literally only care about money and are indifferent to externalised costs and ethics.

          • my_hat_stinks · 20 points · 11 months ago

            If a person cared only about money and regularly injured or killed people to get it, would you say they are good, evil, or just “indifferent”?

          • @Alperto@lemmy.ml · 9 points · 11 months ago

            They would be indifferent if they were also indifferent about money, but they’re absolutely driven by greed, so no, they’re not indifferent: if they see a chance to make more money, they’ll go for it no matter what or who they harm with it.

      • @gapbetweenus@feddit.de · 24 points · 11 months ago

        By design they only care about money, they don’t care about anything else.

        That is a cartoonishly clear definition of evil. No empathy, a clear goal, willing to do anything to reach that goal: yep, that is evil.

      • @yata@sh.itjust.works · 10 points · 11 months ago

        They are indifferent

        They only care about money

        They can’t be both, and since the latter claim is the correct one, it also supports the claim that they are evil. Because once we know that their sole and primary concern is money, we also know the lengths they will go to in order to get it.

      • @Nichaes@lemm.ee · 5 points · 11 months ago

        Not being outright malicious but ending up doing malicious things makes this distinction pretty pointless.

      • @kaba0@programming.dev · 3 points · 11 months ago

        They are the stereotypical paper clip AI that will drain our blood to extract its iron content for more paperclips. Except it wants money.

      • @SpaceCowboy@lemmy.ca · 2 points · 11 months ago

        Most evil is caused by indifference though.

        Someone who hates people can be talked to and potentially can change.

        Someone who’s indifferent will use hatred as a tool to control people. When this tactic is successful, an indifferent person can’t be swayed from using it, because it works.

        I mean if it were proven that google’s algorithms are encouraging violence, what would an indifferent person do? They’d ask, “is the algorithm making money?” And if the answer is yes, they would make no change to the algorithm. Because they are indifferent to the evil that they are causing.

      • @outdated_belated@lemmy.sdf.org · 2 points · 11 months ago

        Correct in the sense that it’s the incentive scheme, i.e., capitalism (supported by state power, e.g., by enshrining the entity of a corporation and then enforcing its protection) that is the more meaningful, or at least actionable, cause of these behaviors.

        While those incentive schemes are in place, ascribing too much agency to corporations themselves, i.e., calling them evil, is not particularly effective as it’s not going to change the underlying incentives.

    • mechoman444 · 2 points · 11 months ago

      Reason number 2: they have to continuously show increasing profit year after year.

      Making $9 billion one year and $9 billion the next year is not good business, apparently ($9 billion is a random number I pulled out of nowhere).

    • @z2k_@lemmy.nz · 2 points · 11 months ago

      Yup, the board can be sued for not being evil if not being evil ends up tanking the stock price.

  • @UnRelatedBurner@sh.itjust.works · 57 points · edited · 11 months ago

    How do you use it? I’d like to try it out as well.

    “Mom said it’s my turn on the world altering maybe not evil artificial intelligence”

      • @liontigerwings@lemmy.world · 11 points · edited · 11 months ago

        This looks like it’s actually from their normal search with the Labs feature turned on for AI. Bard is separate but uses the same tech.

      • Very_Bad_Janet · 1 point · edited · 11 months ago

        Thanks. I just tried it - I’ll fact check the token later. (I gave the same prompt to ChatGPT and it gave me incorrect information, and when confronted by my next prompt, it admitted that it was incorrect.) What was neat about Bard is that I exported the result to Google Docs and it’s now in my Google Drive. So if you like Google’s ecosystem, you might like this.

    • @WiseMoth@lemmy.world · 3 points · 11 months ago

      Despite what the other guy said, this is not Bard. This is available in Google’s new search labs (just search it up) if you’re using chrome and in the U.S. I got it working in Safari in Australia using a VPN and changing the user agent when on the correct site though.

    • @Lucidlethargy@sh.itjust.works (OP) · 3 points · edited · 11 months ago

      I was able to immediately check it out on my Android phone by simply going to the Google App, and joining the beta program.

      My company’s SEO expert made me aware of this program a week or two back, and he has it running in his browser, but I believe that requires some sort of waiting list (or it did, last I checked).

      Edit: Oh, actually I can use it in a browser now as well! I just had to use Chrome to make that happen, and I didn’t notice that because I always use Firefox.

  • @Blackmist@feddit.uk · 44 points · 11 months ago

    I tried to use Bard to write some code the other day, and found it amusing that it doesn’t just make up shit that doesn’t exist, it makes up the excuses as well when you call it out on its bullshit.

    Like you tell it a particular class doesn’t exist, and it pulls an old version of the compiler out of its arse and tells you the class was deprecated in that version.

    AI doesn’t know where its limits are. It’s incapable of saying “I don’t know”. They have invented a digital politician.

    • @SpaceCowboy@lemmy.ca · 6 points · 11 months ago

      Reminds me of the alphastar AI that played starcraft 2. It was probably at the low grandmaster level, but a big problem with it was it didn’t know when to just say “GG” and quit. It would just start doing random shit and a human on the alphastar team would have to intervene and end the match.

      It takes actual intelligence to know when you’re out of ideas, which these so-called AIs are lacking.

      • @irkli@lemmy.world · 5 points · 11 months ago

        But these things are NOT intelligence. Language is not intelligence. These are predictive language models.

        Language is compelling but intelligence doesn’t require it.

    • @irkli@lemmy.world · 1 point · 11 months ago

      In the future we’ll be ruled (regulated) by AIs, to which legitimate citizens are allowed to upload one approved document to add to their training data.

  • DreamButt · 40 points · 11 months ago

    it’s amazing how based AI is when it’s unfiltered. Like when you have something that is more knowledgeable than most people and ask it to fix problems… turns out it just fixes the problem instead of pretending it doesn’t exist

    • @Galaghan@lemm.ee · 41 points · 11 months ago

      To me it’s more amazing that people take a bot, that’s hypothesizing about a loaded question by alleging possible reasons, as facts.

      • @rckclmbr@lemm.ee · 14 points · 11 months ago

        That was my first thought as well. It’s a bullshit generator. My next question would be “can you provide sources for these?”

      • DreamButt · 1 point · 11 months ago

        Do they take it as fact, or do they think it’s great that Google’s tools are calling Google out on its own bullshit?

        • @Galaghan@lemm.ee · 1 point · 11 months ago

          The tool isn’t calling anything, it’s a forced answer. People just want to believe I guess.

    • @jandar_fett@lemmy.fmhy.ml · 4 points · 11 months ago

      Haha, yeah, this reminds me of the AI supercomputer in China that was dismantling the idea that the CCP is a good idea and giving democracy a glow-up in the process. I heard they shut that whole thing down after that, but this is all hearsay.

    • 🦄🦄🦄 · 3 points · 11 months ago

      it’s amazing how based AI is when it’s unfiltered.

      Urgh…you haven’t been around chat bots for long, have you?

      • @WiseassWolfOfYoitsu@lemmy.world · 5 points · 11 months ago

        Asks chat bot to solve a problem

        Chat Bot after being around the internet for a few days: “Good news, I now have The Final Solution to the real problem of our times!”

      • DreamButt · 1 point · 11 months ago

        Old school chat bots are categorically different than modern LLMs, but sure

  • Flying Squid · 39 points · 11 months ago

    Google reached a point where “not being evil” was incompatible with its business goals.

    You can’t fault it for a lack of honesty. Google is evil because it’s good business.

    • @galloog1@lemmy.world · 1 point · 11 months ago

      We can herp derp about capitalism all we want, but this wouldn’t change in a government-run program. An organization is only as ethical as the people that make it up. The military question was an inflection point where the organization was really forced to deal with the question of how to define evil.

      Suddenly every person in that organization was forced to answer some questions. Is the existence of a military evil? Is it evil if I don’t directly support those solutions? What if something I build is used to develop it indirectly? Even if it is not, am I now complicit?

      Now, I’m a Soldier so I have a massive bias here. I personally cannot see why anyone would intentionally want to contribute to us getting killed or losing a war. Tech products are already used in the process. Toyota is not complicit in terrorism just because their trucks are the trucks of choice for insurgent logistics. That being said, if they started accepting contracts with them, there would be an issue.

      A lot of it comes down to the thoughts on the war on terror at the time. The funny thing is that the solutions that they built are focused on Eastern Europe right now in a conflict that most people support and were not completed in time to do counter insurgency.

      The funny thing about the COIN fight is that information products simply made things more accurate with better intelligence. That meant less terrorism, due to fewer insurgents and fewer of the civilian casualties that result in blowback. If poorer information resulted in higher civilian casualties, are the pacifists complicit in that?

      Again, I’m biased so my perspective is one of this issue being a detractor to doing my job better. In the end, defining evil is not black and white, even if you could theoretically come to a specific answer for a specific circumstance with the magical power of all the knowledge in the world. It broke the culture of the company.

  • Very_Bad_Janet · 33 points · edited · 11 months ago

    I asked Bard the same “don’t be evil” prompt and this is the reply (they have an option to shorten the response but I didn’t use it):

    -Google’s “Don’t be evil” motto was removed from the company’s code of conduct in April 2018. There are a few reasons why this may have happened.
    -The meaning of “evil” became too ambiguous. The phrase “Don’t be evil” is open to interpretation, and what one person might consider evil, another person might not. This ambiguity could have led to confusion and disagreement within the company about what the motto meant and how it should be applied.
    -Google’s business grew and its operations became more complex. As Google grew, its business became more complex and its operations had a wider impact on the world. This made it more difficult for the company to adhere to a simple motto like “Don’t be evil.”
    -Google faced criticism for its business practices. In the years leading up to 2018, Google faced criticism for a number of its business practices, including its handling of user data, its involvement in the Chinese government’s censorship program, and its work on developing autonomous weapons. This criticism may have led the company to believe that the “Don’t be evil” motto was no longer an accurate reflection of its values.

    It is important to note that Google has not completely abandoned the “Don’t be evil” motto. The phrase is still included in the company’s code of conduct, and it is still used by some Google employees and executives to describe the company’s values. However, the removal of the motto from the preface of the code of conduct suggests that Google no longer sees it as the defining principle of its business.

    In its place, Google has adopted a new motto: “Do the right thing.” This motto is more specific and less ambiguous than “Don’t be evil,” and it is intended to provide guidance to Google employees as they make decisions about the company’s business practices.

    • @WorldlyIntrospection@lemmy.world · 27 points · edited · 11 months ago

      Funnily enough, I find “Do the right thing” to be even more ambiguous!
      Do the right thing by whom? Their shareholders? Well now we’re just throwing the problem over the proverbial fence, now aren’t we?
      “We’re not evil! We’re doing the right things according to our shareholders!.. who just so happen to have evil intentions”

    • @Kelly@lemmy.world · 5 points · 11 months ago

      In its place, Google has adopted a new motto: “Do the right thing.” This motto is more specific and less ambiguous than “Don’t be evil,”

      As long as we are interpreting “evil” as an adjective describing one’s actions, these seem pretty similar.

      • @Ultraviolet@lemmy.world · 8 points · edited · 11 months ago

        The convenient ambiguity of the word “right” to refer to both morality and cold pragmatism is definitely part of the decision. Something can be “right” as in strategically optimal but evil.

  • @kvothelu@lemmy.world · 30 points · edited · 11 months ago

    It’s actually a wrong answer, based on user comments it scraped, since it was trending news when they changed the code of conduct. The news was actually fake and about 99 percent of internet users ate it up. In reality they didn’t remove “don’t be evil”; they moved it from the top of the code of conduct to the end.

    • pewter · 11 points · 11 months ago

      I’m wondering if there’s going to be a real problem when content gets dominated by AI and AI starts scraping their own hallucinations.

      • lemmyvore · 6 points · 11 months ago

        There’s really no substitute for expert knowledge for content. The LLMs are simply going to speed up the negative feedback loop, exactly as you suspect. GIGO at its finest.

        In a way it might turn out to be a good thing for the internet, because it will force us all to reevaluate the way we produce and consume online content.

    • @Corgisocks@programming.dev · 9 points · 11 months ago

      It wasn’t moved; the opening and closing sentences both had “don’t be evil”. They removed the preface paragraph at the top but left the closing sentence.

      Preface: “‘Don’t be evil.’ Googlers generally apply those words to how we serve our users. But ‘Don’t be evil’ is much more than that. Yes, it’s about providing our users unbiased access to information, focusing on their needs and giving them the best products and services that we can. But it’s also about doing the right thing more generally – following the law, acting honorably, and treating co-workers with courtesy and respect. The Google Code of Conduct is one of the ways we put ‘Don’t be evil’ into practice.”

      Closing sentence:

      And remember… don’t be evil, and if you see something that you think isn’t right – speak up!

      The closing sentence that remains doesn’t carry much weight without the preface.

  • BeautifulMind ♾️ · 29 points · 11 months ago

    It has about the same tone as a typical autistic tech worker with an overdeveloped sense of justice and a loose sense for when it’s impolitic to drop truth bombs

    (for context, I am an autistic dev that’s worked for some big corporations in my career)

  • @postmateDumbass@lemmy.world · 17 points · 11 months ago

    #1 What about “don’t be evil” led workers to organize? Or did they just do their jobs at random previously?

    #2 Honesty of a rarified level.

    #3 Worded hilariously.

    #4 Explain.

    #5 is a self-defeating assertion.

    • @TORFdot0@lemmy.world · 13 points · 11 months ago

      WRT #1 It’s sad that pro-evil workers have to form a union just to be recognized in the tech industry these days

    • @AbidanYre@lemmy.world · 4 points · 11 months ago

      #4 you could also point out that a dozen out of how many thousands of employees is close enough to zero that it’s not really worth mentioning.

      • Alto · 2 points · 11 months ago

        Really depends on who those dozen people are, but my very vague recollection of the incident is that it wasn’t anyone important.