Two authors sued OpenAI, accusing the company of violating copyright law. They say OpenAI used their work to train ChatGPT without their consent.

  • jecxjo@midwest.social · 1 year ago

    The fact that OpenAI stole content from everybody in order to make its model doesn’t make it less infringing.

    Totally in agreement with you here. They did something wrong and should have to deal with that.

    But my question is more about…

    The problem with AI as it currently stands is that it has no actual comprehension of the prompt, or ability to make leaps of logic, nor does it have the ability to extend and build upon existing work to legitimately transform it, except by using other works already fed into its model

    Is comprehension necessary for committing copyright infringement? Is it really about a creator being able to be logical or to extend concepts?

    I think we have a definition problem with exactly what the issue is. This may be a little too philosophical, but what part of you isn’t processing your historical experiences and generating derivative works? When I say “dog”, the thing that pops into your head is an amalgamation of your past experiences and visuals of dogs. Is the only difference between you and a computer the fact that you had experiences with things beyond created works, while the AI is explicitly fed created content?

    AI could be created with a bit of randomness added in to make what it generates “creative” instead of derivative, but I’m wondering what level of pure noise would need to be added for the output to be considered created by the AI. Can any of us truly create something that isn’t in some part derivative?

    There’s little actual fundamental difference between what ChatGPT does and what procedurally generated games like most roguelikes do

    Agreed. I think at this point we are in a strange place because most people think ChatGPT is a far bigger leap in technology than it truly is. Its biggest achievement was being able to process and synthesize data fast enough to feel conversational.

    What worries me is that we will set laws and legal precedent based on a fundamental misunderstanding of what the technology does. I fear that even if all the sample data had been acquired legally, people would still make the same argument and think their creations exist inside the AI in some full context, when it’s really just synthesized down to what is necessary to answer the question posed: “what’s the statistically most likely next word of this sentence?”
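
    To make that concrete, here’s a toy sketch of what “most likely next word” means, just counting which word follows which in a tiny made-up corpus (purely illustrative; real models like ChatGPT learn weights over long contexts of tokens rather than counting whole words):

    ```python
    from collections import Counter, defaultdict

    # Tiny made-up "training corpus" standing in for the scraped text (illustrative only).
    corpus = "the dog chased the cat and the dog barked at the mailman"

    # Count which word tends to follow which.
    following = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1

    def most_likely_next(word):
        """Return the statistically most likely next word seen in the corpus."""
        candidates = following[word]
        return candidates.most_common(1)[0][0] if candidates else None

    print(most_likely_next("the"))  # -> "dog" (follows "the" twice vs. once each for "cat"/"mailman")
    ```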

    • Eccitaze@yiffit.net · 1 year ago

      Is comprehension necessary for committing copyright infringement? Is it really about a creator being able to be logical or to extend concepts?

      I think we have a definition problem with exactly what the issue is. This may be a little too philosophical, but what part of you isn’t processing your historical experiences and generating derivative works? When I say “dog”, the thing that pops into your head is an amalgamation of your past experiences and visuals of dogs. Is the only difference between you and a computer the fact that you had experiences with things beyond created works, while the AI is explicitly fed created content?

      That’s part of it, yes, but nowhere near the whole issue.

      I think someone else summarized my issue with AI elsewhere in this thread: AI as it currently stands is fundamentally plagiaristic, because it cannot be anything more than the average of its inputs, and cannot be greater than the sum of its inputs. If you ask ChatGPT to summarize the plot of The Matrix and write a brief analysis of the themes and its opinions, ChatGPT doesn’t watch the movie, do its own analysis, and give you its own summary; instead, it pulls up whatever parts of its training data relate to “The Matrix,” “movie summaries,” and “movie analysis” (likely an article written by Roger Ebert, maybe some scholarly articles, maybe some Metacritic reviews) and spits out a response that combines those parts into something that sounds relatively coherent.

      Another issue, in my opinion, is that ChatGPT can’t take general concepts and extend them further. To go back to the movie summary example, if you asked a regular layperson to analyze the themes in The Matrix, they would likely focus on the cool gun battles and neat special effects. If you had that same layperson attend a four-year college and receive a bachelor’s in media studies, then asked them to do the exact same analysis of The Matrix, their answer would be drastically different, even if their degree never discussed The Matrix even once. This is because that layperson is (or at least should be) capable of taking generalized concepts and applying them to specific scenarios; in other words, they can take the media analysis concepts they learned while earning that four-year degree and apply them to a specific work, even if those concepts were never explicitly applied to that work. AI, as it currently stands, is incapable of this. As another example, say a brand-new programming language came out tomorrow that was entirely unrelated to any existing language. AI would be nigh-useless at analyzing it or helping produce code in it (even if it were dead simple to use and understand) until enough humans published code samples that could be fed into the AI’s training data.

      • jecxjo@midwest.social · 1 year ago

        Hmm that is an interesting take.

        The movie summary question is interesting. I doubt most people have asked ChatGPT for its own personal views on the subject matter. Asking for a movie plot summary doesn’t inherently require the one giving it to have experienced the movie. If it did, then pretty much all papers written in a history class would fall under this category: no high schooler today went to war, but they can write about it because they are synthesizing others’ writings about the topic. Granted, we know this to be the case and the students are required to cite their sources even when not directly quoting them…would this resolve the first problem?

        If we specifically asked ChatGPT “Can you give me your personal critique of the movie The Matrix?” and it returned something along the lines of “Well, I cannot view movies and only generate responses based on writings of others who have seen it,” would that make the usage clearer? If it’s required for someone to have the ability to do their own critical analysis, there would be a handful of kids from my high school who would fail at that task too, and did so regularly.

        I like your college example, as that gets closer to a definition, but I think we need to find a very explicit way of describing what is happening. I agree current AI can’t do any of this, so we are very much talking about future tech.

        With the idea of extending material, do we have a good enough understanding of how humans do it? I think it’s interesting when we look at computer neural networks. One of the first ones we build in a programming class is an AI that can read single-digit, handwritten numbers. What eventually happens is the system generates a crazy huge and unreadable equation to convert the bits of an image into a statistically likely answer. When you dissect it you’d think, “Oh, to see the number 9 the equation must look for a round top and a straight part on the right side below it.” And that assumption would be wrong. Instead we find it’s dozens of specific areas of the image that you and I wouldn’t necessarily associate with a “9”.
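
        For reference, that classroom exercise looks roughly like this; a minimal sketch using scikit-learn’s small built-in digits dataset (the specific library and settings here are my own assumptions, not any particular course’s assignment):

        ```python
        from sklearn.datasets import load_digits
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        # 8x8 grayscale images of handwritten digits 0-9, flattened to 64 pixel values.
        digits = load_digits()
        X_train, X_test, y_train, y_test = train_test_split(
            digits.data, digits.target, test_size=0.25, random_state=0
        )

        # A small neural net: its learned weights are that "crazy huge unreadable equation."
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
        clf.fit(X_train, y_train)

        print("accuracy:", clf.score(X_test, y_test))
        # Poking at clf.coefs_ shows weights tied to scattered pixels,
        # not human-readable rules like "round top plus a straight stroke."
        ```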

        But then if we start to think about our own brains, do we actually process reading the way we think we do? Maybe for individual characters. But we know when we read words we focus specifically on the first and last character, the length of the word and any variation of the height of the text. We can literally scramble up the letters in the middle and still read the text.
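
        That scrambled-letters effect is easy to play with yourself; here’s a quick throwaway script (just a toy, and a simplification of the actual psycholinguistics):

        ```python
        import random

        def scramble_middle(word):
            """Keep the first and last letter, shuffle everything in between."""
            if len(word) <= 3:
                return word
            middle = list(word[1:-1])
            random.shuffle(middle)
            return word[0] + "".join(middle) + word[-1]

        sentence = "reading scrambled words is surprisingly easy for most people"
        print(" ".join(scramble_middle(w) for w in sentence.split()))
        # e.g. "rdnaieg sremacbld wdors is sgnpsiulirry esay for msot popele"
        ```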

        The reason I bring this up is that we often focus on how humans can transform data using past history, but we often fail to explain how this works. When you ask ChatGPT about a more vague concept it does pull from others’ works, but one thing it also does is build a statistical model of human speech. It literally figures out what is the most likely next word in the given sentence. The way this calculation occurs is directly related to the material provided, the order in which it was provided, the weights programmed into it to make decisions, etc. I’d ask how this is fundamentally different from what humans do.
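
        And just to illustrate the “weights to make decisions” part, this is roughly how scores get turned into a probability over the next word; the numbers here are completely made up for illustration and nothing like ChatGPT’s actual internals:

        ```python
        import math

        # Hypothetical scores ("logits") a trained model might assign to candidate
        # next words after "the dog"; invented numbers, purely for illustration.
        logits = {"barked": 3.1, "ran": 2.4, "meowed": -1.0, "the": -2.5}

        def softmax(scores):
            """Turn raw scores into a probability distribution over next words."""
            exps = {w: math.exp(s) for w, s in scores.items()}
            total = sum(exps.values())
            return {w: e / total for w, e in exps.items()}

        probs = softmax(logits)
        print(max(probs, key=probs.get), probs)  # "barked" comes out as the most likely next word
        ```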

        I’m a big fan of students learning a huge portion of the same literature in high school. It creates a common dialog we can all use to understand concepts. I, in my 40s, have often referenced a character, event, statement, or theme from classic literature and have noticed that often only those older than me get it. In just a few words I’ve conveyed a huge amount of information, but that only works when the other side of the conversation gets the reference. I’m wondering, if at some point AI is able to do this type of analysis, would it be considered transformative?