• NounsAndWords@lemmy.world · 114 points · 7 months ago

    AI is going to destroy art the same way Photoshop, or photography, or pre-made tubes of paint destroyed art. It’s a tool; it helps people take the idea in their head and put it in the world. And it lowers the barrier to entry: now you don’t need years of practice in drawing technique to bring your ideas to life, you just need ideas.

    If AI gets to a point where it can give us creative, original art that sparks emotion in novel ways… well, we’ve probably also made a superintelligent AI, and our list of problems is much different than today’s.

    • xthexder@l.sw0.com · 50 points · 7 months ago

      As someone who’s absolutely terrible at drawing, but who enjoys photography and creativity in general, having AI tools to generate my own art is opening up a whole different avenue for me to scratch my creative itch.
      I’ve got a technical background, so figuring out the tools and modifying them for my purposes has been a lot more fun than practicing drawing.

      • Potatos_are_not_friends@lemmy.world · 22 points · 7 months ago

        This is the perfect use case.

        Photoshop didn’t destroy jobs forever; all it did was shift how people worked, and it actually created work, and different types of work.

      • DumbAceDragon@sh.itjust.works · 14 points · 7 months ago

        I’ve only dabbled a bit with ML art, and I am by no means an artist, but it doesn’t scratch that itch for me the same way that drawing or doing stuff in Blender does. It doesn’t really feel like I’m watching my vision slowly take shape, no matter how precise I make the prompt. It kinda just feels like what it is: a transformer iterating over some random noise.
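
        (As an aside, the “iterating over some random noise” description is roughly accurate for diffusion-style image generators: they start from noise and repeatedly subtract a predicted noise estimate. Below is a toy sketch of just that loop; the “model” is a fake stand-in that nudges toward a fixed target, where a real system would use a trained network, so everything here is illustrative, not a real generator.)

```python
import numpy as np

# Toy sketch of the iterative-denoising loop behind diffusion image
# generators. The "model" here is a stand-in: a real system uses a trained
# neural network to predict the noise at each step; we fake that prediction
# so only the loop structure is visible.

rng = np.random.default_rng(0)
target = rng.uniform(0.0, 1.0, size=(8, 8))   # pretend "clean" image
x = rng.normal(size=(8, 8))                   # start from pure noise

for t in range(50):
    predicted_noise = x - target   # a real model would *predict* this
    x = x - 0.1 * predicted_noise  # remove a fraction of the noise

residual = float(np.abs(x - target).max())     # shrinks toward zero
```

        After 50 steps the residual noise has decayed by a factor of 0.9 per step, which is the whole trick: each pass only removes a little noise, but the passes compound.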

        I’m also a very technical person, and for years I was stuck in that same mindset of “I’m a technical guy, I’m not cut out for art”. I was only able to get out of this slump thanks to some of my art friends, who were really helpful in pointing me in the right direction.

        Learning to draw isn’t the easiest thing in the world, and trust me, I’m probably as bad at it as you are, but it’s fun, and it feels satisfying.

        I agree that AI has a place as another artistic medium, but I also feel like it can become a trap for people like me who think they don’t have an artistic bone in their body.

        If you do feel like getting back into drawing, then as a fellow technical person I’d recommend learning Blender first. It taught me some of the skills I also use in drawing, like perspective, shading, and splitting complex objects into simpler shapes. It’s also just plain fun.

        • xthexder@l.sw0.com · 8 points · 7 months ago

          I think the way I use AI is fundamentally different from how most people draw. For me it’s much more like I’m exploring what’s possible, while making creative decisions on the direction to explore. I don’t start with anything in particular in mind. In a lot of ways it helps with the choice paralysis I get when faced with completely open-ended things like art.

    • braxy29@lemmy.world · 15 points · 7 months ago

      i like the idea of AI as a tool artists can use, but that’s not a capitalist’s viewpoint, unfortunately. they will try to replace people.

    • bugs@lemmy.world · 14 points · 7 months ago

      I hate this sentiment. It’s not a tool like a brush is to a canvas. It’s a machine that runs off the fuel of our creative achievements. The sheer amount of pro-AI shit I read from this place just brings me that much closer to putting a bullet in my fucking skull

    • mindbleach@sh.itjust.works · 8 points · 7 months ago

      And if text-based images remain uninspired and samey… oh well? Congratulations, you will forever after be able to spot when someone’s extremely timely gag image was cranked out via its description, rather than badly composited from Google Images results. I’ve done a lot of bad compositing for Something Awful shitpost threads, and speed beats effort every time.

    • StaticFalconar@lemmy.world · 6 points · 7 months ago

      This. AI was never made for the sole purpose of creating art or beating humans at chess. Those are just side quests on the way to the real stuff.

    • Valmond@lemmy.world · 4 points · 7 months ago

      Some people also don’t care whether it’s a Rembrandt or a Picasso or an AI, but like to dabble in the arts anyway because it’s something they like to do.

      It’s fulfilling (I do love Renoir though).

    • VelvetStorm@lemmy.world · 3 points · 7 months ago

      Tbh I hate Photoshop for a lot of photography. Unfortunately it’s necessary for macro photography, which is the only type I do. That’s one of the reasons mine is not nearly as good as it could be, because I refuse to use it.

  • Immersive_Matthew@sh.itjust.works · 75 points · 7 months ago

    Tech bros are not really techies themselves; they’re really just Wall Street bros with tech as their product. Most claim they can code, but if they were coders they would be coding. They are not coders, they are businessmen through and through who just happen to sell tech.

    • evranch@lemmy.ca · 25 points · 7 months ago

      Most claim they can code, but if they were coders they would be coding

      I dislike techbros as much as you, but this isn’t really a valid statement.

      I can code, but I can’t sell a crypto scam to millions of rubes.

      If I could, why would I waste my time writing code?

      Many techbros are likely “good enough” coders who have better marketing skills and used their tech knowledge to leverage themselves into business instead.

      • Immersive_Matthew@sh.itjust.works · 18 points · 7 months ago

        That is the thing though. The real talented tech people tend to be more in the weeds of the tech and get great enjoyment from that. The “tech bros” are more into groups, people, social structures, manipulation, and control, and would go cross-eyed if they really had to code something complex, as they could never sit that long and concentrate. These are not the same people. Tech bros want you to think they are tech gurus because that is their brand, but it is a lie.

    • phoneymouse@lemmy.world · 5 points · 7 months ago

      99% of people in tech leadership are just regurgitating marketing jargon with minimal understanding of the underlying tech.

  • Honytawk@lemmy.zip · 48 points · 7 months ago

    There are plenty of things you can shit on AI art for.

    But it is neither a bad approximation, nor can a student produce such work in less than a minute.

    This feels like the opposite extreme from the tech bros.

    • Shampoo_Bottle@lemmy.ca · 13 points · 7 months ago

      To me, this feels similar to when photography became a thing.

      Realism paintings took a dive. Did photos capture realism? Yes. Did they take the same amount of time and training? Hell no.

      I think it will come down to what the specific consumer wants. If you want fast, you use AI. If you want the human-made aspect, you go with a manual artist. Do you prefer fast turnover, or do you prefer sentiment and effort? Do you prefer pieces from people who master their craft, or from AI?

      I’m not even sorry about this. They are not the exact same, and I’m sick of people saying that AI art and handcrafted art are exactly the same. Even if you argue that it takes time to finesse prompts, I can practically promise you that the difference in time required to create the two will be drastic. Both may have their place, but they will never be the exact same.

      It’s the difference between a hand-knitted sweater from someone who has done it their entire life and a sweater from Walmart, or a hand-crafted table from an expert vs. something you get from IKEA.

      Yes, both tick the boxes, but they are still not the exact same product. They each have their place.

      On the other hand, I won’t pretend the hours required to master each method are the same. AI also usually doesn’t have to factor in materials, training, hourly rate, etc.

  • EnderMB@lemmy.world · 46 points · 7 months ago

    I work in AI. LLMs are cool and all, but I think it’s mostly hype at this stage. While some jobs will be lost (voice work, content creation), my true belief is that we’ll see two increases:

    1. The release of productivity tools that use LLMs to help automate or guide menial tasks.

    2. The failure of businesses that try to replicate skilled labour using AI.

    In order to stop point two, I would love to see people and lawmakers really crack down on AI replacing jobs, regulating the replacement of job roles with AI until it can sufficiently replace a person. If, for example, someone cracks self-driving vehicles, then it should be the responsibility of the owning companies and the government to provide training and compensation so that everyone being “replaced” can find new work. This isn’t just to stop people from suffering, but to stop the idiot companies that’ll sack their entire HR department, automate it via AI, and then get sued into oblivion because it discriminated against someone.

    • Donkter@lemmy.world · 13 points · 7 months ago

      I’ve also heard it’s true that, as far as we can figure, we’ve basically reached the limit on certain aspects of LLMs already. Basically, LLMs need a FUCK ton of data to be good. And we’ve already pumped them full of the entire internet, so all we can do now is marginally improve these algorithms, whose inner workings we barely understand. Think about that: the entire internet isn’t enough to successfully train LLMs.
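
      (For what it’s worth, the data-hunger claim has actually been quantified: the “Chinchilla” scaling-law work fits LLM loss as a function of parameter count N and training tokens D, with diminishing returns in both. A sketch using the published fitted constants; treat the exact numbers as approximate, and the point is the shape of the curve, not the values.)

```python
# Chinchilla-style scaling-law fit (approximate published constants):
#   L(N, D) = E + A / N**alpha + B / D**beta
# E is the irreducible loss; the other two terms shrink as model size N
# (parameters) and dataset size D (tokens) grow.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    """Predicted training loss for a model of n_params on n_tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Doubling data from 1T to 2T tokens at a fixed 70B parameters
# barely moves the loss: diminishing returns from data alone.
l1 = loss(70e9, 1e12)
l2 = loss(70e9, 2e12)
gain = l1 - l2  # small compared to the distance still above E
```

      The takeaway matches the comment: once you are already training on trillions of tokens, each additional doubling of data buys a smaller and smaller improvement.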

      LLMs have taken some jobs already (like audio transcription, basic copyediting, and aspects of programming), we’re just waiting for the industries to catch up. But we’ll need to wait for a paradigm shift before they start producing pictures and books or doing complex technical jobs with few enough hallucinations that we can successfully replace people.

      • prime_number_314159@lemmy.world · 9 points · 7 months ago

        The (really, really, really) big problem with the internet is that so much of it is garbage data. The number of false and misleading claims spread endlessly on the internet is huge. To rule those beliefs out of the data set, you need something that can grasp the nuances of published, peer-reviewed data, of deliberately misleading propaganda, of fringe conspiracy nuts who believe the Earth is controlled by lizards with planes (and that only a spritz bottle full of vinegar can defeat them), and of everything in between.

        There is no person, book, journal, website, newspaper, university, or government that has reliably produced good, consistent help on questions of science, religion, popular lies, unpopular truths, programming, human behavior, economic models, and many, many other things that continuously have an influence on our understanding of the world.

        We can’t build an LLM that won’t consistently be wrong until we can stop being consistently wrong.

        • Donkter@lemmy.world · 7 points · 7 months ago

          Yeah, I’ve heard medical LLMs are promising when they’ve been trained exclusively on medical texts. Same with the AI that’s been trained exclusively on DNA, etc.

      • EnderMB@lemmy.world · 8 points · 7 months ago

        My own personal belief is very close to what you’ve said. It’s a technology that isn’t new, but it had been assumed not to be as good as compositional models, because it would cost a fuck-ton to build and would result in dangerous hallucinations. It turns out that both are still true, but people don’t particularly care. I also believe that one of the reasons ChatGPT has performed so well compared to other LLM initiatives is that there is a huge amount of stolen data involved that would get OpenAI in a LOT of trouble.

        IMO, the real breakthroughs will be in academia. Now that LLMs are popular again, we’ll see more research into how they can be better utilised.

        • Donkter@lemmy.world · 2 points · 7 months ago

          Afaik OpenAI got their training data from what was basically a free resource that they just had to request access to. They didn’t think much about it, along with everyone else. No one could have predicted it would be that valuable until after the fact, when in retrospect it seems obvious.

    • funkless_eck@sh.itjust.works · 10 points · 7 months ago

      I sincerely doubt AI voice-over will outperform human actors in the next 100 years in any metric, including cost or time savings.

      • EnderMB@lemmy.world · 1 point · 7 months ago

        Not sure why you’re downvoted, but this is already happening. There was a story a few days ago of a long-time BBC voice-over artist who lost their gig. There have also been several stories of VA workers being handed contracts that allow the reuse of their voice for AI purposes.

        • funkless_eck@sh.itjust.works · 11 points · 7 months ago (edited)

          The artist you’re referring to is Sara Poyzer - https://m.imdb.com/name/nm1528342/ - she was replaced in one specific way:

          The BBC is making a documentary about someone (as yet unknown) who is dying and has lost the ability to speak. Poyzer was on pencil (like standby, hold the date, but not confirmed) to narrate the dying person’s words. Instead they contracted an AI agency to use AI to mimic the dying person’s voice (from when they could still speak).

          It would likely be cheaper and easier to hire an impressionist, or Ms Poyzer herself, but I assume they are doing it for the “novelty” value, and with the blessing of the terminally ill person.

          For that reason I think my point still stands: they have made the work harder and more expensive, and created a negative PR storm. All problems created by AI, not solved by it.

          You are incorrect that AI voice contracts are commonplace, as SAG negotiated that use of AI voice tools must be compensated as if the actor recorded the lines themselves (which most actors do from home nowadays). So at best it’s the same cost for an inferior product, but actually more expensive, because before you were paying just the actor, and now you’re paying the actor AND the AI techs.

          edit: and not just that, AI voice products are bad. Yes, you can maybe fudge the uncanny valley a bit by sculpting the prompts and the script to edge toward short sentences delivered in a monotone, narrating an emotionless description without caring about stress patterns, emphasis, meter, inflection, or caesura, and without any breathing sounds (sometimes a positive, sometimes a negative), but that’s all in an actor’s wheelhouse for free.

    • Ð Greıt Þu̇mpkin@lemm.ee · 9 points · 7 months ago

      Nah, fuck HR. They’re the shield companies hide behind to discriminate within margins.

      I think the proper route is a labor replacement tax to fund retraining and replacement pensions

    • Sotuanduso@lemm.ee · 5 points · 7 months ago

      Are you saying that if a company adopts AI to replace a job, they should have to help the replaced workers find new work? Sounds like something one can loophole by cutting the department for totally unrelated reasons before coincidentally realizing that they can have AI do that work, which they totally didn’t think of before firing people.

  • rustyfish@lemmy.world · 44 points · 7 months ago

    I think approximation is the right word here. It’s pretty cool and all, and I’m looking forward to how it will develop. But it’s mostly a fun toy.

    I’m stoked for the moment the tech bros understand that an AI is way better at doing their job than it is at creating art.

    • Vilian@lemmy.ca · 19 points · 7 months ago

      tech bros’ job is to write bad JavaScript and fall for scams; AI has already beaten that

    • FaceDeer@fedia.io · 12 points · 7 months ago

      So you’re happy to see AI take someone else’s job as long as it isn’t taking your job.

      • samus12345@lemmy.world · 15 points · 7 months ago (edited)

        Taking the jobs of the people responsible for creating it seems preferable to taking others’ jobs.

        • FaceDeer@fedia.io · 4 points · 7 months ago

          You’d rather cheer for people to lose their jobs without anyone calling you out on it, sure.

                • areyouevenreal@lemm.ee · 1 point · 7 months ago

                  He’s saying the same thing because he’s not actually getting a proper response. The other guy just keeps saying shit like “That’s very reddit of you” or some shit after possibly threatening his job.

            • areyouevenreal@lemm.ee · 2 points · 7 months ago

              You said tech bros will realize it’s easier to replace their jobs than those of creatives. Who is included in “tech bros” here? I wanted a job in tech and can’t get one partly because of AI. Am I a tech bro? I would be very careful what you imply here.

                • areyouevenreal@lemm.ee · 1 point · 7 months ago

                  I am insufferable for wanting a job? I am not the one inventing these AIs. Nor am I the one firing people because they exist.

                  When people talk about “tech bros” without clarifying who they mean I can only imagine they are including people like me.

      • mindbleach@sh.itjust.works · 8 points · 7 months ago

        Less work being done by anyone is better. Thinking it’s bad that work is done for us by robots is the brain worms talking.

        • FaceDeer@fedia.io · 8 points · 7 months ago

          Indeed. Ideally AI would do every job, so that humans can focus on just doing what we want to do. It’d be like the whole species getting to retire.

          • mindbleach@sh.itjust.works · 2 points · 7 months ago

            You’d rather cheer for people to lose their jobs without anyone calling you out on it, sure.

            I’m not the angry one wishing unemployment on my “enemies” here.

            Who are you?

            What do you want?

    • IrateAnteater@sh.itjust.works · 11 points · 7 months ago

      I think one thing you and many other people misunderstand is that the image generation aspect of AI is a sideshow, both in use and in intent.

      The ability to generate images from text based prompts is basically a side effect of the ability that they are actually spending billions on, which is object detection.
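
      (Concretely, object detection means proposing bounding boxes and scoring how well they overlap ground truth; the standard overlap score is intersection-over-union. A minimal sketch of that core metric, with boxes as (x1, y1, x2, y2) tuples:)

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])  # intersection top-left
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])  # intersection bottom-right
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two 2x2 boxes overlapping in a 1x1 corner share 1 unit of area out of 7.
score = iou((0, 0, 2, 2), (1, 1, 3, 3))
```

      Detection pipelines use this same score both to match predictions to labels during training and to throw away duplicate boxes at inference time.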

  • crawancon@lemm.ee · 31 points · 7 months ago (edited)

    they’re misunderstanding the reasoning for spending billions.

    the reason to spend all that money on approximation is so we can remove arts and humanities majors altogether, once enough approximation yields results like present-day chess programs, which now regularly beat grandmasters. their vocation is doomed to the niche, like most of humanity’s, eventually.

  • Ð Greıt Þu̇mpkin@lemm.ee · 28 points · 7 months ago

    I just love the idjits who think not showing empathy to people AI bros are trying to put out of work will save them when the algorithms come for their jobs next

    When LeopardsEatingFaces becomes your economic philosophy

    • Cows Look Like Maps@sh.itjust.works · 5 points · 7 months ago (edited)

      In fact, there are infinitely many problems that cannot be solved by Turing machines!

      (There are countably many Turing-computable problems and uncountably many non-Turing-computable problems)
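
      The counting argument behind that parenthetical, sketched: every Turing machine can be encoded as a finite string over a finite alphabet, so there are only countably many machines, while a decision problem is an arbitrary subset of the naturals, and Cantor’s diagonal argument gives uncountably many of those.

```latex
% Countably many machines: each has a finite description
|\{\text{Turing machines}\}| \;\le\; |\Sigma^{*}| \;=\; \aleph_0
% Uncountably many problems: a decision problem is a subset of \mathbb{N}
|\{\text{decision problems}\}| \;=\; |\mathcal{P}(\mathbb{N})| \;=\; 2^{\aleph_0} \;>\; \aleph_0
```

      So all but countably many problems have no machine that decides them.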

      • MBM@lemmings.world · 1 point · 7 months ago

        Infinite seems like it’s low-balling it, then. 0% of problems can be solved by Turing machines (same way 0% of real numbers are integers)

        • Cows Look Like Maps@sh.itjust.works · 2 points · 7 months ago

          Infinite seems like it’s low-balling it

          Infinite by definition cannot be “low-balling”.

          0% of problems can be solved by Turing machines (same way 0% of real numbers are integers)

          This is incorrect. Any computable problem can be solved by a Turing machine. You can look at the Church-Turing thesis if you want to learn more.

          • MBM@lemmings.world · 1 point · 7 months ago

            Infinite by definition cannot be “low-balling”.

            I was being cheeky! It could’ve been that the set of non-Turing-computable problems had measure zero but still infinite cardinality. However, there’s the much stronger result that the set of Turing-computable problems actually has measure zero (for which I used 0% and the integer:reals thing as shorthands, because I didn’t want to talk measure theory on Lemmy). This is so weird, I never got downvoted for this stuff on Reddit.
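
            In the real-number analogy, the measure-zero step is the standard cover argument: any countable set (like the integers, or the computable reals) can be covered by intervals of total length $\varepsilon$ for every $\varepsilon > 0$.

```latex
% Cover the i-th point by an interval of length \varepsilon / 2^i:
\{x_1, x_2, \ldots\} \;\subseteq\; \bigcup_{i=1}^{\infty}
  \Bigl( x_i - \tfrac{\varepsilon}{2^{i+1}},\; x_i + \tfrac{\varepsilon}{2^{i+1}} \Bigr),
\qquad
\sum_{i=1}^{\infty} \frac{\varepsilon}{2^{i}} = \varepsilon
\;\Longrightarrow\;
\mu\bigl(\{x_1, x_2, \ldots\}\bigr) = 0
```

            Since $\varepsilon$ was arbitrary, the countable set has Lebesgue measure zero, which is exactly the “0%” shorthand above.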

        • DaleGribble88@programming.dev · 1 point · 7 months ago

          The subset of integers in the set of reals is non-zero. Sure, I guess you could represent it as an arbitrarily small ratio, but it has zero as an asymptote, not as an equivalent value.

          • MBM@lemmings.world · 1 point · 7 months ago

            The cardinality is obviously non-zero but it has measure zero. Probability is about measures.

    • vzq · 5 points · 2 months ago (edited)

      deleted by creator

  • Bilb!@lemmy.ml · 21 points · 7 months ago (edited)

    Matthew Dow Smith, whoever the fuck that is, has a sophisticated delusion about what’s actually going on, and he’s incorporated it into his persecution complex. Not impressed.

  • thedeadwalking4242@lemmy.world · 19 points · 7 months ago

    Honestly, people are desperately trying to automate physical labor too. The problem is the machines don’t understand the context of their work, which can cause problems. All the work on AI is a result of trying to make a machine that can. The arts and humanities are more of a side project.

    • istanbullu@lemmy.ml · 5 points · 7 months ago

      Nothing wrong with automating tasks that previously needed human labour. I would much rather sit back and chill, and let automation do my bidding.

    • AVincentInSpace@pawb.social · 4 points · 7 months ago (edited)

      The problem is the machines don’t understand the context of their work which can cause problems. All the work of AI is a result of trying to make a machine that can.

      I am deeply confused by this statement.

      A robot that assembles cars does not need to “understand” anything about what it’s doing. It just needs to make the same motions with its welding torch over and over again for eternity. And it does that job pretty well.

      Further, neural networks as they stand cannot truly understand anything. All classification networks know how to do is point at stuff and say “That’s a car/traffic light/cancer cell”, and all generation networks know how to do is parrot. Any halfway decent teacher will tell you that memorizing and understanding are completely different things.

      • thedeadwalking4242@lemmy.world · 6 points · 7 months ago

        No, but a robot that does the dishes needs to know what a dish is, how to clean all the different types, and what’s not a dish. The complexity of behavior needed to automate human tasks that cannot be done by an assembly-line robot is immense. Most manual labor jobs are still manual labor because they are too full of unknowns and nuances for a simple logic diagram to be of any use. So yes, some robots need to understand what’s going on.

        And as for parroting vs remembering: current LLMs are very limited in their capacity to create new things, but they can create novel things by smashing together their training data. Think about it, that’s all humans are too: a result of our training data. If I took away every single one of your senses from the day you were born and removed your ability to remember anything, you wouldn’t be very intelligent either. With no inputs you could produce no outputs other than gibberish, which an AI can do too. (And I mean ALL senses; you’d have no form of connection with the outside world.)

        • psud@aussie.zone · 1 point · 7 months ago

          My dish washing robot doesn’t need to know anything. It does depend on me loading it, and putting the more heat affected stuff on the top shelf

          • thedeadwalking4242@lemmy.world · 1 point · 7 months ago

            Yes, it depends on you loading it, doesn’t always get all the dishes done, and will melt your dishes if they are heat-sensitive. All this because it doesn’t understand the task at hand. If it did, it could put them away for you, load them, ensure all dishes are spotless, and hand-wash the heat-sensitive ones.

      • KeenFlame@feddit.nu · 1 point · 7 months ago

        The problem is they didn’t focus research on this tech, or try to make image generators specifically. It was a scientific discovery that came from emulating how brains work, and then it worked wonders in these fields.

  • Tja@programming.dev · 19 points · 7 months ago

    Yeah, no.

    First, AI right now can create very decent images in seconds, basically for free, and it will only get better.

    Second, AI can do much more than that: translation, explaining a text in simpler words, helping write code, semantic search… Creating poems about armadillos and talking like a pirate are fun novelties, but not the goals.
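
    (Of those, semantic search is the easiest to sketch: embed texts as vectors, then rank by cosine similarity. A toy version with made-up 3-d “embeddings”; a real system would produce these vectors with an embedding model, so the numbers and document names here are purely illustrative.)

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical document embeddings (a real system computes these).
docs = {
    "armadillo poem": [0.9, 0.1, 0.0],
    "pirate speech":  [0.1, 0.8, 0.2],
    "build log":      [0.0, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]  # embedding of "something about armadillos"

# Rank documents by similarity to the query, best match first.
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
```

    The ranking works without any keyword overlap, which is the whole appeal over plain text search.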

    • istanbullu@lemmy.ml · 13 points · 7 months ago (edited)

      What happened to translation in the last 15 years will now happen to creative design.

      • ahornsirup@sopuli.xyz · 24 points · 7 months ago

        So, nothing? Because you still need professional translators for creative works; plenty of writing simply doesn’t translate directly, as it relies on culture-specific context that readers in other languages and countries don’t have. So you need someone who is well versed in both cultures to find an appropriate alternative for the translated work.

    • Eheran@lemmy.world · 8 points · 7 months ago

      Hahaha look how you get downvoted for stating the obvious. Amazing community here.

  • Rusty Shackleford@programming.dev · 18 points · 7 months ago (edited)

    I propose that we treat AI as ancillas, companions, muses, or partners in creation and understanding our place in the cosmos.

    While there are pitfalls in treating the current generation of LLMs and GANs as sentient (or any AI, for that matter), one day we will have to admit that an artificial intelligence is, practically speaking, self-aware and sentient.

    To me, the fundamental question about AI, that will reveal much about humanity, is philosophical as much as it is technical: if a being that is artificially created, has intelligence, and is functionally self-aware and sentient, does it have natural rights?

    • ProgrammingSocks@pawb.social · 4 points · 7 months ago (edited)

      It would have natural rights, yes. Watch Star Trek TNG’s “The Measure of a Man”, which tackles this issue exactly. Does the AI of current days have intelligence or sentience? I don’t believe so. We’re a FAR cry away from Lt. Cmdr. Data.

      • Rusty Shackleford@programming.dev · 1 point · 7 months ago

        We’re a FAR cry away from Lt. Cmdr. Data.

        Yes, I agree. I make deep neural network models for a living. The best of the best LLM models still “hallucinate” unreliably after 30-40 queries. My expertise is in computer vision systems; perhaps that’s been mitigated better as of late.

        My point was to emphasize the necessity for us, as a species, to answer the philosophical question and start codifying legal jurisprudence around it well before the moment of self-awareness of a General-Purpose AI.

  • Dasus@lemmy.world · 10 points · 7 months ago

    If you think arts and humanities are useless, you probably lack an imagination.

    Like completely.

    I won’t say you’re useless, because simple minded grunts are needed.

    Humanity wouldn’t exist without the arts.

    • vzq · 12 points · 2 months ago (edited)

      deleted by creator

        • ProgrammingSocks@pawb.social · 3 points · 7 months ago

          Because most people don’t engage with art critically. See Marvel movies. Maybe others are fine with remixed slop but I am not.

      • UnderwaterSwift@lemmy.ml · 1 point · 7 months ago

        It has already happened in our lifetime with medical illustration, even pre-GPT. It will just now spread. Are generated diagrams worse in subtle ways? Yes, but not by enough to matter against the difference in cost or ease of use.

      • tacomama@leminal.space · 2 points · 7 months ago

          Nah. Association of Medical Illustrators

          I have a Master’s in Medical Art (late 1980s). 35 years ago some were saying “we won’t need medical artists because of photography”, and then a few years later “because of personal computers”. That wasn’t the case. But when the general public has access to a tool, they love to talk about how a discipline is going away because “now anyone can do it”.

  • people_are_cute@lemmy.sdf.org · 8 points · 7 months ago (edited)

    AI art tools democratize art by empowering those who weren’t born with the affinity, talent, or privilege to become artists themselves. They allow regular people freedom of expression in new dimensions. They are amazing.

    They are not made to replace human art. They are made to supplement it. The “artists” who feel threatened and offended by its existence are probably not very good at their art.