• Superb · 3 months ago

    Well, those things aren’t generative AI, so there isn’t much of an issue with them.

    • brucethemoose@lemmy.world · 3 months ago

      What about ‘edge-enhancing’ NNs like NNEDI3? Or GANs that absolutely ‘paint in’ details inferred from their training data? How big can the model get before it becomes ‘generative’?

      What about a deinterlacer network that’s been trained on other interlaced footage?

      My point is that there’s an infinitely fine gradient through time between good old MS Paint/bilinear upscaling and ChatGPT (or locally runnable txt2img diffusion models). Even now, there’s an array of modern ML-based ‘editors’ that are questionably generative and that most people probably don’t know are working in the background.
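
      To make that gradient concrete, here is a minimal sketch (not from the thread; the function names and kernel weights are made up for illustration). It contrasts the two ends: bilinear upscaling is a fixed interpolation formula with no training involved, while a ‘learned’ upscaler applies weights fitted to prior data, so its output reflects whatever it was trained on.

```python
import numpy as np

def bilinear_upscale(img: np.ndarray, factor: int = 2) -> np.ndarray:
    """Classical upscaling: a fixed interpolation formula, no training data."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = (ys - y0)[:, None], (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def learned_upscale(img: np.ndarray, kernel: np.ndarray, factor: int = 2) -> np.ndarray:
    """'Learned' upscaling: expand, then filter with weights that came from
    training -- the part people argue starts to make it 'generative'."""
    up = np.kron(img, np.ones((factor, factor)))   # naive pixel expansion
    pad = kernel.shape[0] // 2
    padded = np.pad(up, pad, mode="edge")
    out = np.zeros_like(up)
    kh, kw = kernel.shape
    for dy in range(kh):
        for dx in range(kw):
            out += kernel[dy, dx] * padded[dy:dy + up.shape[0], dx:dx + up.shape[1]]
    return out

img = np.random.rand(8, 8)
trained_kernel = np.array([[0.05, 0.10, 0.05],    # stand-in for weights a
                           [0.10, 0.40, 0.10],    # network learned from data
                           [0.05, 0.10, 0.05]])
a = bilinear_upscale(img)                  # formula only, no prior data
b = learned_upscale(img, trained_kernel)   # same task, behaviour depends on training
```

      The point of the sketch: both functions do “upscaling,” and the only structural difference is where the weights came from, which is exactly why drawing a hard line at ‘generative’ is tricky.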

      • Superb · 3 months ago

        I’d say if there is training beforehand, then it’s “generative AI.”