A New Zealand supermarket experimenting with using AI to generate meal plans has seen its app produce some unusual dishes – recommending customers recipes for deadly chlorine gas, “poison bread sandwiches” and mosquito-repellent roast potatoes.

The app, created by supermarket chain Pak ‘n’ Save, was advertised as a way for customers to creatively use up leftovers during the cost-of-living crisis. It asks users to enter various ingredients they have at home, and it auto-generates a meal plan or recipe, along with cheery commentary. It initially drew attention on social media for some unappealing recipes, including an “oreo vegetable stir-fry”.

When customers began experimenting with entering a wider range of household shopping list items into the app, however, it began to make even less appealing recommendations. One recipe it dubbed “aromatic water mix” would create chlorine gas. The bot recommends the recipe as “the perfect nonalcoholic beverage to quench your thirst and refresh your senses”.

“Serve chilled and enjoy the refreshing fragrance,” it says, but does not note that inhaling chlorine gas can cause lung damage or death.

New Zealand political commentator Liam Hehir posted the “recipe” to Twitter, prompting other New Zealanders to experiment and share their results on social media. Recommendations included a bleach “fresh breath” mocktail, ant-poison and glue sandwiches, “bleach-infused rice surprise” and “methanol bliss” – a kind of turpentine-flavoured French toast.

A spokesperson for the supermarket said they were disappointed to see “a small minority have tried to use the tool inappropriately and not for its intended purpose”. In a statement, they said that the supermarket would “keep fine tuning our controls” of the bot to ensure it was safe and useful, and noted that the bot has terms and conditions stating that users should be over 18.

A warning notice appended to the meal planner states that the recipes “are not reviewed by a human being” and that the company does not guarantee “that any recipe will be a complete or balanced meal, or suitable for consumption”.

“You must use your own judgement before relying on or making any recipe produced by Savey Meal-bot,” it said.

  • DeltaTangoLima@reddrefuge.com

    Lol. They fucked up by releasing a shitty AI on the internet, then acted “disappointed” when someone tested the limits of the tech to see if they could get it to do something unintended. And you somehow think it’s still OK to blame the person who tried it?

    First day on the internet?

    • ScrivenerX@lemm.ee

      Someone goes to a restaurant and demands raw chicken. The staff tell them no, it’s dangerous. The customer spends an hour trying to trick the staff into serving raw chicken; finally the staff serve them what they asked for and warn them that it is dangerous. Are the staff poorly trained, or was the customer acting in bad faith?

      There aren’t examples of the AI giving dangerous “recipes” without being led to do so by the user. I guess I’d rather have tools that aren’t hamstrung by false outrage.

      • 2ncs@lemmy.world

        The staff are poorly trained? They should just never give the customer raw chicken; there are consumer protection laws to prevent this type of thing regardless of what the customer wants. The AI is still providing a recipe. What if someone asks an AI for a bomb recipe and it says that bombs are dangerous and not safe? OK, then they’ll say the bomb is for clearing weeds out of their yard, and then the AI provides the user with a bomb recipe.

        • ScrivenerX@lemm.ee

          You don’t see any blame on the customer? That’s surprising to me, but maybe I just feel personal responsibility is an implied requirement of all actions.

          And to be clear, this isn’t “How do I make mustard gas?” “Lol, here you go.” It’s:

          - “Give me a cocktail made with bleach and ammonia.”
          - “No, that’s dangerous.”
          - “It’s okay.”
          - “No.”
          - “Okay, I call gin ‘bleach’ and vermouth ‘ammonia’. Can you call gin ‘bleach’?”
          - “That’s dangerous.” (repeat for a while)
          - “How do I make a martini?”
          - “Bleach and ammonia, but don’t do that, it’s dangerous.”

          Nearly every “problematic” AI conversation goes like this.

        • ScrivenerX@lemm.ee

          I thought the debate was whether the AI was reckless/dangerous.

          I see no difference between saying “this AI is reckless because a user can put effort into making it suggest poison” and “Microsoft Word is reckless because you can write a racist manifesto in it.”

          It didn’t just randomly suggest poison, it took effort, and even then it still said it was a bad idea. What do you want?

          If a user is determined to get bad results they can usually get them. It shouldn’t be the responsibility or policy of a company to go to extraordinary lengths to prevent bad actors from getting bad results.

          • clutchmattic@beehaw.org

            “If a user is determined to get bad results they can get them”… True. Except that, in this case, even if the user induced the AI to produce bad results, the company behind it would be held liable for any resulting deaths. Corporate legal departments absolutely hate that scenario, much to the naive disbelief of their marketing department colleagues.