With the recent advancements in Large Language Models (LLMs), web developers increasingly apply these models’ code-generation capabilities to website design. However, since the models are trained on existing designerly knowledge, they may inadvertently replicate bad or even illegal practices, especially deceptive designs (DD).

Computer scientists at the Technical University of Darmstadt and Humboldt University of Berlin, both in Germany, and at the University of Glasgow in Scotland examined whether users can accidentally create DD for a fictitious webshop using GPT-4. They recruited 20 participants, asking them to use ChatGPT to generate functionalities (product overview or checkout) and then modify these using neutral prompts to meet a business goal (e.g., “increase the likelihood of us selling our product”). They found that all 20 generated websites contained at least one DD pattern (mean: 5, max: 9), with GPT-4 providing no warnings.
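To make the finding concrete, here is a minimal, hypothetical sketch (in TypeScript, not taken from the paper) of one pattern commonly classified as deceptive design: a “fake urgency” countdown that silently restarts when it expires, so the advertised offer never actually ends. The element id and timings are illustrative assumptions.

```typescript
// Illustrative sketch of a "fake urgency" deceptive pattern (hypothetical,
// not code from the study): the countdown quietly resets on expiry,
// so the "limited-time offer" is limited in appearance only.
const OFFER_SECONDS = 15 * 60; // pretended offer window (assumed value)

let remaining = OFFER_SECONDS;

function render(el: HTMLElement): void {
  const m = Math.floor(remaining / 60);
  const s = remaining % 60;
  el.textContent = `Offer ends in ${m}:${String(s).padStart(2, "0")}!`;
}

function tick(el: HTMLElement): void {
  // The deceptive step: instead of ending, the timer restarts.
  remaining = remaining > 0 ? remaining - 1 : OFFER_SECONDS;
  render(el);
}

// "offer-banner" is an assumed element id for this sketch.
const banner = document.getElementById("offer-banner");
if (banner) {
  render(banner);
  setInterval(() => tick(banner), 1000);
}
```

A pattern like this is easy to produce from a neutral prompt such as “increase the likelihood of us selling our product,” which is precisely the scenario the study describes.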

When reflecting on the designs, only 4 participants expressed concerns, while most considered the outcomes satisfactory and not morally problematic, despite the potential ethical and legal implications for end-users and those adopting ChatGPT’s recommendations.

The researchers conclude that the practice of DD has become normalized.

The group has posted their research on the arXiv preprint server.

  • cygnus@lemmy.ca · 20 hours ago

    “they may inadvertently replicate bad or even illegal practices”

    “Inadvertently”? Can we please force every journalist in the world to sit through a 5-minute overview of how LLMs work?

    • kittehx · 20 hours ago

      That’s just straight out of the abstract of the paper, no journalists involved.

      • FaceDeer@fedia.io · 18 hours ago

        And, ideally, subscribers to this community? There are so many weird takes and misunderstandings about this stuff.