AI is going to destroy art the same way Photoshop, or photography, or pre-made tubes of paint, destroyed art. It’s a tool; it helps people take the idea in their head and put it in the world. And it lowers the barrier to entry: now you don’t need years of practice in drawing technique to bring your ideas to life, you just need ideas.
If AI gets to a point where it can give us creative, original art that sparks emotion in novel ways… well, we’ve probably also made a superintelligent AI, and our list of problems is much different from today’s.
As someone who’s absolutely terrible at drawing, but enjoys photography and creativity in general, having AI tools to generate my own art is opening up a whole different avenue for me to scratch my creative itch.
I’ve got a technical background, so figuring out the tools and modifying them for my purposes has been a lot more fun than practicing drawing.

This is the perfect use case.
Photoshop didn’t destroy jobs forever; all it did was shift how people worked, and it actually created work, and different types of work.
I’ve only dabbled a bit with ML art, and I am by no means an artist, but it doesn’t scratch that itch for me the same way that drawing or doing stuff in Blender does. It doesn’t really feel like I’m watching my vision slowly take shape, no matter how precise I make the prompt. It kinda just feels like what it is: a transformer iterating over some random noise.
I’m also a very technical person, and for years I was stuck in that same mindset of “I’m a technical guy, I’m not cut out for art”. I was only able to get out of this slump thanks to some of my art friends, who were really helpful in pointing me in the right direction.
Learning to draw isn’t the easiest thing in the world, and trust me I’m probably as bad at it as you are, but it’s fun, and it feels satisfying.
I agree that AI has a place as another artistic medium, but I also feel like it can become a trap for people like me who think they don’t have an artistic bone in their body.
If you do feel like getting back into drawing, then as a fellow technical person I’d recommend learning Blender first. It taught me some of the skills I also use in drawing, like perspective, shading, and splitting complex objects into simpler shapes. It’s also just plain fun.
I think the way I use AI is fundamentally different from how most people draw. For me it’s much more like I’m exploring what’s possible, while making creative decisions on the direction to explore. I don’t start with anything in particular in mind. In a lot of ways it helps with the choice paralysis I get when faced with completely open-ended things like art.
I like the idea of AI as a tool artists can use, but that’s not a capitalist’s viewpoint, unfortunately. They will try to replace people.
I hate this sentiment. It’s not a tool like a brush is to a canvas. It’s a machine that runs off the fuel of our creative achievements. The sheer amount of pro AI shit I read from this place just makes me that closer to putting a bullet in my fucking skull
Once you reincarnate in the future, generative models will make even better art than they do today. It’ll be a losing battle against time.
Shill
Luddite
Removed by mod
Downvoted for truth. Too bad for them this isn’t reddit
And if text-based images remain uninspired and samey… oh well? Congratulations, you will forever after be able to spot when someone’s extremely timely gag image was cranked out via its description, rather than badly composited from Google Images results. I’ve done a lot of bad compositing for Something Awful shitpost threads, and speed beats effort every time.
This. AI was never made for the sole purpose of creating art or beating humans at chess. Those are just side quests for the real stuff.
What do you think the “real stuff” is?
Some people also don’t care if there is a Rembrandt or a Picasso or an AI, but like to dabble in the arts anyway because it’s something they like to do.
It’s fulfilling (I do love Renoir though).
Tbh I hate Photoshop for a lot of photography. It is unfortunately necessary for macro photography, which is the only type I do. That’s one of the reasons mine is not nearly as good as it could be: I refuse to use it.
Tech bros are not really techies themselves; they are really just Wall Street bros with tech as their product. Most claim they can code, but if they were coders they would be coding. They are not coders, they are businessmen through and through who just happen to sell tech.
This is 100% correct. It can overlap but honestly as someone going into embedded systems I despise tech bros.
Most claim they can code, but if they were coders they would be coding
I dislike techbros as much as you, but this isn’t really a valid statement.
I can code, but I can’t sell a crypto scam to millions of rubes.
If I could, why would I waste my time writing code?
Many techbros are likely “good enough” coders who have better marketing skills and used their tech knowledge to leverage into business instead.
That is the thing, though. The really talented tech people tend to be more in the weeds of the tech and get great enjoyment from that. The “tech bros” are more into groups, people, social structures, manipulation, controlling and such, and would go cross-eyed if they really had to code something complex, as they could never sit that long and concentrate. These are not the same people. Tech bros want you to think they are tech gurus, as that is their brand, but it is a lie.
99% of people in tech leadership are just regurgitating marketing jargon with minimal understanding of the underlying tech.
There are plenty of things you can shit on AI art for
But it is neither a bad approximation, nor can a student produce such work in less than a minute.
This feels like the other end of the extreme of the tech bros
To me, this feels similar to when photography became a thing.
Realism paintings took a dive. Did photos capture realism? Yes. Did it take the same amount of time and training? Hell no.
I think it will come down to what the specific consumer wants. If you want fast, you use AI. If you want the human-made aspect, you go with a manual artist. Do you prefer fast turnover, or do you prefer sentiment and effort? Do you prefer pieces from people who master their craft, or from AI?
I’m not even sorry about this. They are not the exact same, and I’m sick of people saying that AI art and handcrafted art are the exact same. Even if you argue that it takes time to finesse prompts, I can practically promise you that the difference in time needed to create art by the two methods will be drastic. Both may have their place, but they will never be the exact same.
It’s the difference between a hand-knitted sweater from someone who has done it their entire life and a sweater from Walmart. It’s a handcrafted table from an expert vs something you get from IKEA.
Yes, both fill the boxes, but they are still not the exact same product. They each have their place.
On the other hand, I won’t discount the hours required to master the craft as if the two were the same. AI also usually doesn’t have to factor in materials, training, hourly rate, etc.
deleted by creator
I work in AI. LLMs are cool and all, but I think it’s mostly hype at this stage. While some jobs will be lost (voice work, content creation), my true belief is that we’ll see two things increase:
- The release of productivity tools that use LLMs to help automate or guide menial tasks.
- The failure of businesses that try to replicate skilled labour using AI.
In order to stop point two, I would love to see people and lawmakers really crack down on AI replacing jobs, and regulating the process of replacing job roles with AI until they can sufficiently replace a person. If, for example, someone cracks self-driving vehicles then it should be the responsibility of owning companies and the government to provide training and compensation to allow everyone being “replaced” to find new work. This isn’t just to stop people from suffering, but to stop the idiot companies that’ll sack their entire HR department, automate it via AI, and then get sued into oblivion because it discriminated against someone.
I’ve also heard that, as far as we can figure, we’ve basically reached the limit on certain aspects of LLMs already. Basically, LLMs need a FUCK ton of data to be good. And we’ve already pumped them full of the entire internet, so all we can do now is marginally improve these algorithms whose inner workings we barely understand. Think about that: the entire internet isn’t enough to successfully train LLMs.
LLMs have taken some jobs already (like audio transcription, basic copyediting, and aspects of programming), we’re just waiting for the industries to catch up. But we’ll need to wait for a paradigm shift before they start producing pictures and books or doing complex technical jobs with few enough hallucinations that we can successfully replace people.
The (really, really, really) big problem with the internet is that so much of it is garbage data. The number of false and misleading claims spread endlessly on the internet is huge. To rule those beliefs out of the data set, you need something that can grasp the nuances of everything from published, peer-reviewed data, to deliberately misleading propaganda, to fringe conspiracy nuts who believe the Earth is controlled by lizards with planes and that only a spritz bottle full of vinegar can defeat them, and everything in between.
There is no person, book, journal, website, newspaper, university, or government that has reliably produced good, consistent help on questions of science, religion, popular lies, unpopular truths, programming, human behavior, economic models, and many, many other things that continuously have an influence on our understanding of the world.
We can’t build an LLM that won’t consistently be wrong until we can stop being consistently wrong.
Yeah, I’ve heard medical LLMs are promising when they’ve been trained exclusively on medical texts. Same with the AI that’s been trained exclusively on DNA, etc.
My own personal belief is very close to what you’ve said. It’s a technology that isn’t new, but had been assumed to not be as good as compositional models because it would cost a fuck-ton to build and would result in dangerous hallucinations. It turns out that both are still true, but people don’t particularly care. I also believe that one of the reasons why ChatGPT has performed so well compared to other LLM initiatives is because there is a huge amount of stolen data that would get OpenAI in a LOT of trouble.
IMO, the real breakthroughs will be in academia. Now that LLMs are popular again, we’ll see more research into how they can be better utilised.
Afaik OpenAI got their training data from basically a free resource that they just had to request access to. They didn’t think much about it, and neither did anyone else. No one could have predicted that it would be that valuable until after the fact, where in retrospect it seems obvious.
I sincerely doubt AI voice-over will outperform human actors in the next 100 years in any metric, including cost or time savings.
Not sure why you’re downvoted, but this is already happening. There was a story a few days ago of a long-time BBC voice-over artist who lost their gig. There have also been several stories of VA workers being handed contracts that allow the reuse of their voice for AI purposes.
The artist you’re referring to is Sara Poyzer - https://m.imdb.com/name/nm1528342/ - she was replaced in one specific way:
The BBC is making a documentary about someone (as yet unknown) who is dying and has lost the ability to speak. Poyzer was on pencil (like standby, hold the date, but not confirmed) to narrate the dying person’s words. Instead they contracted an AI agency to use AI to mimic the dying person’s voice (from when they could still speak).
It would likely be cheaper and easier to hire an impressionist, or Ms Poyzer herself but I assume they are doing it for the “novelty” value, and with the blessing of the terminally ill person.
For that reason I think my point still stands: they have made the work harder and more expensive, and created a negative PR storm. All problems created by AI, and none solved by it.
You are incorrect that AI voice contracts are commonplace, as SAG negotiated that use of AI voice tools is to be compensated as if the actor recorded the lines themselves (which most actors do from home nowadays). So again, it’s at best the same cost for an inferior product, but actually more expensive: you were paying just the actor, and now you’re paying the actor AND the AI techs.
Edit: and not just that, AI voice products are bad. Yes, you can maybe fudge the uncanny valley a bit by sculpting the prompts and the script to edge towards short sentences, delivered in a monotone, narrating an emotionless description without caring about stress patterns or emphasis, meter, inflection or caesura, and without any breathing sounds (sometimes a positive, sometimes a negative), but that’s all in an actor’s wheelhouse for free.
Nah, fuck HR. They’re the shield companies use to discriminate within margins from behind.
I think the proper route is a labor replacement tax to fund retraining and replacement pensions
Are you saying that if a company adopts AI to replace a job, they should have to help the replaced workers find new work? Sounds like something one can loophole by cutting the department for totally unrelated reasons before coincidentally realizing that they can have AI do that work, which they totally didn’t think of before firing people.
That’s why it would need regulation to work…
UBI is better and has more momentum with the general public
I think approximation is the right word here. It’s pretty cool and all and I’m looking forward how it will develop. But it’s mostly a fun toy.
I’m stoked for the moment the tech bros understand that an AI is way better at doing their job than it is at creating art.
Tech bros’ job is to write bad JavaScript and fall for scams; AI has already beaten them at this.
So you’re happy to see AI take someone else’s job as long as it isn’t taking your job.
Taking the jobs of the people responsible for creating it seems preferable to taking others’ jobs.
That comment was very Reddit of you. Don’t do that, please.
You’d rather cheer for people to lose their jobs without anyone calling you out on it, sure.
Keep assuming. Fuel your own rage. I tried. Now I’m out. Good night and goodbye.
I’m not the angry one wishing unemployment on my “enemies” here.
I think they’re using AI to say the same sentence over and over again.
He’s saying the same thing because he’s not actually getting a proper response. The other guy just keeps saying shit like “That’s very reddit of you” or some shit after possibly threatening his job.
You said tech bros will realize it’s easier to replace their jobs than those of creatives. Who is included in “tech bros” here? I wanted a job in tech and can’t get one partly because of AI. Am I a tech bro? I would be very careful what you imply here.
All three of you are insufferable
I am insufferable for wanting a job? I am not the one inventing these AIs. Nor am I the one firing people because they exist.
When people talk about “tech bros” without clarifying who they mean I can only imagine they are including people like me.
Less work being done by anyone is better. Thinking it’s bad that work is done for us by robots is the brain worms talking.
Indeed. Ideally AI would do every job, so that humans can focus on just doing what we want to do. It’d be like the whole species getting to retire.
You’d rather cheer for people to lose their jobs without anyone calling you out on it, sure.
I’m not the angry one wishing unemployment on my “enemies” here.
Who are you?
What do you want?
The ideal endpoint is to eliminate the concept of “jobs” entirely. Why should people have to work?
Okay. So why are you breaking that guy’s balls, over automating away jobs, which you don’t want to exist?
Because currently we do need jobs. Otherwise why is he upset about AI in the first place?
deleted by creator
I think one thing you and many other people misunderstand is that the image generation aspect of AI is a sideshow, both in use and in intent.
The ability to generate images from text based prompts is basically a side effect of the ability that they are actually spending billions on, which is object detection.
It’s bad at anything useful for programming too.
And the things it’s good at have been developed by stealing GPL/copyleft code.
they’re misunderstanding the reasoning for spending billions.
the reason to spend all the money on approximation is so we can remove arts and humanities majors altogether… after enough approximation yields results similar to present-day chess programs, which now regularly beat humans and grandmasters. their vocation is doomed to the niche, like most of humanity, eventually.
deleted by creator
What else can they be seen as other than hobbies or marketing?
deleted by creator
Since you are only getting condescending non-answers I’ll try to answer it for you. It’s expression, a desire to communicate emotions and concepts via a medium other than words.
Unfortunately people all think differently, so the expression only reaches some people. And some people don’t get the expressions at all.
Maybe visit a classical museum once in a while
Removed by mod
It’s not this guy’s fault your vocation is doomed
CONSOOM THE SLOP. I LOVE SLOP SO YOU MUST TOO
I just love the idjits who think not showing empathy to people AI bros are trying to put out of work will save them when the algorithms come for their jobs next
When LeopardsEatingFaces becomes your economic philosophy
Turing Incompleteness is a pathway to many powers the Computer Scientists would consider incalculable.
Is it possible to learn this power?
No, but it’s extremely possible to copy someone else’s work on it from stack overflow!
Not from an algorithm.
In fact, there are infinitely many problems that cannot be solved by Turing machines!
(There are countably many Turing-computable problems and uncountably many non-Turing-computable problems)
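The counting argument behind that parenthetical, sketched out (standard textbook material, not something the commenter wrote):

```latex
% Every Turing machine has a finite description over a finite alphabet,
% so the set of all machines is countable:
\[ |\{\text{Turing machines}\}| = \aleph_0 \]
% A decision problem is a language $L \subseteq \Sigma^*$, i.e. an arbitrary
% subset of a countably infinite set, so the set of all problems is uncountable:
\[ |\{\text{problems}\}| = |\mathcal{P}(\Sigma^*)| = 2^{\aleph_0} > \aleph_0 \]
% Hence the computable problems form a countable subset of an uncountable set:
% "almost all" problems are uncomputable.
```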
Infinite seems like it’s low-balling it, then. 0% of problems can be solved by Turing machines (same way 0% of real numbers are integers)
Infinite seems like it’s low-balling it
Infinite by definition cannot be “low-balling”.
0% of problems can be solved by Turing machines (same way 0% of real numbers are integers)
This is incorrect. Any computable problem can be solved by a Turing machine. You can look at the Church-Turing thesis if you want to learn more.
Infinite by definition cannot be “low-balling”.
I was being cheeky! It could’ve been that the set of non-Turing-computable problems had measure zero but still infinite cardinality. However, there’s the much stronger result that the set of Turing-computable problems actually has measure zero (for which I used 0% and the integers:reals thing as shorthand, because I didn’t want to talk measure theory on Lemmy). This is so weird, I never got downvoted for this stuff on Reddit.
Oh, sorry about that! Your cheekiness went right over my head. 😋
The subset of integers in the set of reals is non-zero. Sure, you could represent it as an arbitrarily small ratio, but it has zero as an asymptote, not as an equivalent value.
The cardinality is obviously non-zero but it has measure zero. Probability is about measures.
deleted by creator
deleted by creator
Matthew Dow Smith, whomever the fuck that is, has a sophisticated delusion about what’s actually going on and he’s incorporated it into his persecution complex. Not impressed.
Honestly, people are desperately trying to automate physical labor too. The problem is the machines don’t understand the context of their work, which can cause problems. All the work on AI is a result of trying to make a machine that can. The arts and humanities are more of a side project.
Nothing wrong in automating tasks that previously needed human labour. I would much rather sit back and chill, and let automation do my bidding
If only the people in control of the wealth would let the rest of us chill while the machines do all the labor.
that’s a social problem, not technology’s fault.
The problem is the machines don’t understand the context of their work, which can cause problems. All the work on AI is a result of trying to make a machine that can.
I am deeply confused by this statement.
A robot that assembles cars does not need to “understand” anything about what it’s doing. It just needs to make the same motions with its welding torch over and over again for eternity. And it does that job pretty well.
Further, neural networks as they stand cannot truly understand anything. All classification networks know how to do is point at stuff and say “That’s a car/traffic light/cancer cell”, and all generation networks know how to do is parrot. Any halfway decent teacher will tell you that memorizing and understanding are completely different things.
No, but a robot that does the dishes needs to know what a dish is, how to clean all the different types, and what’s not a dish. The complexity of behavior needed to automate human tasks that cannot be done by an assembly-line robot is immense. Most manual labor jobs are still manual labor because they are too full of unknowns and nuances for a simple logic diagram to be of any use. So yes, some robots need to understand what’s going on.
And as for parroting vs remembering: current LLMs are very limited in their capacity to create new things, but they can create novel things by smashing together their training data. Think about it, that’s all humans are too: a result of our training data. If I took away every single one of your senses from the day you were born and removed your ability to remember anything, you wouldn’t be very intelligent either. With no inputs you could produce no outputs other than gibberish, which an AI can do too. (And I mean ALL senses: no form of connection with the outside world.)
My dish washing robot doesn’t need to know anything. It does depend on me loading it, and putting the more heat affected stuff on the top shelf
Yes, it depends on you loading it, doesn’t always get all the dishes done, and will melt your dishes if they are heat sensitive. All this because it doesn’t understand the task at hand. If it did, it could put them away for you, load them, ensure all dishes are spotless, and hand-wash the heat-sensitive ones.
The thing is, they didn’t focus research on this tech or try to make image generators specifically. It was a scientific discovery that came from emulating how brains work, and then it worked wonders in these fields.
Yeah, no.
First, AI right now can create very decent images in seconds for basically free, and it will only get better.
Second, AI can do much more than that: translation, explaining a text in simpler words, helping write code, semantic search… Creating poems about armadillos and talking like a pirate are fun novelties, but not the goals.
What happened to translation in the last 15 years will now happen to creative design.
So, nothing? Because you still need professional translators for creative works, plenty of writing simply doesn’t directly translate as it relies on culture-specific context that readers in other languages and countries don’t have. So you need someone who is well versed in both cultures to find an appropriate alternative for the translated work.
Hahaha look how you get downvoted for stating the obvious. Amazing community here.
Well, it’s a classic. Now look at this:
- communism sounds good but it’s impossible in practice
opens umbrella
They make bulletproof umbrellas now?
It’s so impossible in fact that the CIA spends millions destabilizing countries that are even threatening to become socialist.
It’s one of the reasons that make it impossible in practice.
The USSR spent millions the other way tho.
Is that supposed to count as “stating the obvious”? O.o
I dont see any rain coming, you can close your umbrella
There are so many factual errors in their comment tho.
Good thing you stated some? Why not have an actual discussion instead of a pissing contest?
That stuff is not free, for example. The energy and water usage is simply not economical.
I said basically free. It costs a fraction of a cent to generate an image, that would take a human a few hours at least.
What if I tell you that the effort that gets put into the image is a vital part of art?
Well, that’s your opinion. I’m more of a finished product kind of guy.
You know you can run a lot of these models on your home PC right
I propose that we treat AI as ancillas, companions, muses, or partners in creation and understanding our place in the cosmos.
While there are pitfalls in treating the current generation of LLMs and GANs as sentient, or any AI for that matter, there will come a day when we must admit that an artificial intelligence is self-aware and sentient, practically speaking.
To me, the fundamental question about AI, that will reveal much about humanity, is philosophical as much as it is technical: if a being that is artificially created, has intelligence, and is functionally self-aware and sentient, does it have natural rights?
It would have natural rights, yes. Watch Star Trek TNG’s “The Measure of a Man” which tackles this issue exactly. Does the AI of current days have intelligence or sentience? I don’t believe so. We’re a FAR cry away from Lt. Cmdr. Data.
We’re a FAR cry away from Lt. Cmdr. Data.
Yes, I agree. I make deep neural network models for a living. The best of the best LLM models still “hallucinate” unreliably after 30-40 queries. My expertise is in computer vision systems; perhaps that’s been mitigated better as of late.
My point was to emphasize the necessity for us, as a species, to answer the philosophical question and start codifying legal jurisprudence around it well before the moment of self-awareness of a General-Purpose AI.
If you think arts and humanities are useless, you probably lack an imagination.
Like completely.
I won’t say you’re useless, because simple minded grunts are needed.
Humanity wouldn’t exist without the arts.
Most of the arts will be automated away. Stable Diffusion is just the beginning.
deleted by creator
Spoken like someone who doesn’t understand why people engage with art in the first place. AKA a techbro.
People love Stable Diffusion, and use it a lot. Why do you think is that?
Because most people don’t engage with art critically. See Marvel movies. Maybe others are fine with remixed slop but I am not.
It has already happened in our lifetime with medical illustration, and that was pre-GPT. It will just now spread. Are generated diagrams worse in subtle ways? Yes, but not enough to matter given the difference in cost or ease of use.
Nah. Association of Medical Illustrators
I have a Masters in Medical Art. (late 1980s) 35 years ago some were saying ‘we won’t need medical artists because of photography’ and then a few years later ‘because of personal computers’. This isn’t the case. But when the general public has access to a tool they love to talk about how a discipline is going away because ‘now anyone can do it’.
Yeah sure dude, let me just ask my mom who worked 30 years in the industry doing it.
AI art tools democratize art by empowering those who weren’t born with the affinity, talent or privilege to become artists themselves. They allow regular people the freedom of expression in new dimensions. They are amazing.
They are not made to replace human art. They are made to supplement it. The “artists” who feel threatened and offended at its existence are probably not very good at their art.