Taylor & Francis and Wiley sold out their researchers in bulk; this should be a crime.
Researchers need to be able to consent or refuse consent, and science needs to be respected more than this.
Meh, who cares? AI is gonna be more correct now. It costs nothing to use (if you run your own locally) and nothing to not use. Just don’t use it if you hate it so much, and for the love of god, touch grass and get off Twitter; that place is hell on earth.
Despite the downvotes, I’m interested in why you think this way…
The common Lemmy view is that, morally, papers are meant to contribute to the sum of human knowledge as a whole, and therefore (1) they shouldn’t be paywalled in a way unfair to authors and reviewers – they pay the journals, not the other way around – and (2) closed-source artificially intelligent word guessers make money off of content that isn’t their own, in ways over which said content-makers have little agency or say, without contributing back to the sum of human knowledge by being open-source or transparent (Lemmy has a distaste for the cloisters of venture capital and multibillion-parameter server farms).
So it’s not about using AI or not, but about the lack of self-determination and transparency – e.g. an artist getting their style copied because they paid an art gallery to display it, and the gallery traded rights to image generation companies without the artist’s say (one could argue the artists signed the ToS, but there aren’t any viable alternatives to signing).
I’m happy to listen if you differ!
Yes, of course. But it’s not really relevant, is it?
Yeah, that’s why I’m pro-AI: not only is it very literally transparent – most models are open-weight and most libraries open-source – but it’s making knowledge massively more accessible.
I used to teach people to Google, but there’s no point anymore; it’s become like a dark pattern, very little reward for a lot of effort, because everything, especially YouTube, is now a grift. Now I teach them how to proompt without rotting their brains – outsourcing pure fact-finding, not actual intellectual work.
Yes, it is a bit shit at being correct – it hallucinates – but frankly, to paraphrase Turing, infallibility is not a quality of intelligence.
And more practically, if Joe Schmoe can’t think critically and has to trust unquestioningly, then I’d rather he trust gippity than the average Facebook schizo.
With that in mind I see no reason not to feed it products of the scientific method, the most rigorous and highest solution to the problems of epistemology we’ve come up with thus far.
Because frankly, if you had actually read the terms and conditions when you signed up for Facebook, back when your weird computer friends were all scoffing at the privacy invasion, and if you had listened to the experts, then you and these artists would not feel like you were being treated unfairly. Not only did you allow it to happen, you all encouraged it. Now that it might actually be used for good, you are upset. It’s disheartening. I’m sorry, but most of you signed it all away by 2006. Data is forever.
So if I go to an art gallery for inspiration, I must declare this in a contract too? This is absurd. But to be fair, I’m not surprised. Intellectual property is altogether an absurd notion in the digital age, and insanity like “copyrighting styles” is just its sharpest, most obvious edge.
I also think the fearmongering about artists is overplayed by people who are not artists. For what it’s worth, I’ve never heard this vehement anti-AI take outside of, like, Twitter and Reddit comment sections, and I know plenty of artists; the ones I actually follow, e.g. on YT, are either skeptical but positive or already using it as part of their workflow, even if they do have criticisms of the industry.
(Which I do of course too, in the sense that there should not be any industry for as long as the oppression of capital reigns supreme.)
Actually, the only prolific individual of any kind I’m aware of who has voiced this opinion is Hbomberguy, who is self-admittedly a bit of an idiot. And it’s obviously a tacked-on claim – no real nuance, no addressing of opposing views, no sources for even the basic claims (which are completely wrong) – at the end of a video about a completely different topic, there to make the video seem more relevant and topical than it is.
Thanks for the detailed reply! :P
I’d like to engage with every part of what you pointed out – real discussions are always exciting!
It’s arguably relevant. Researchers pay journals to display their years of work, and then those journals resell that work to AI companies, which puts indirect pressure on researchers to produce more. It’s a form of labor where the pay direction is reversed. Yes, researchers are aware that their papers can be used for profit (like medical tech), but they didn’t anticipate that it would be sold en masse to ethically dubious, historically copyright-violating, pollution-heavy server farms. Now, I see that you don’t agree with this, since you say:
but I can’t help but feel obliged to share the following evidence.
I see you also argue that:
And… I partly agree with you on this. As another commenter said, “[AI] is not going back in the bottle”, so might as well make it not totally hallucinatory. Of course, this should be done in an ethical way, one that respects the rights to the data of all involved.
But about your next point regarding data usage:
That’s a mischaracterization of a lot of views. Yes, a lot of people willfully ignored surveillance capitalism, but we never encouraged it, nor did we ever flip from affirmative to negative just because the data we intentionally or inadvertently produced began to be “used for good”. One of the earliest surveillance capitalism investigators, Harvard Business School professor Shoshana Zuboff, confirms that we were simply scared of, and uneducated about, these things outside our control.
This kind of thing – corporate giants giving up thousands of papers to AI – is another instance of people being scared. But it’s not fearmongering. Fearmongering implies that we’re making up fright where it doesn’t really exist; here, there is indeed an awful, fear-inducing precedent set by this action. Researchers now have to live with the idea that corporations, these vast economic superpowers, can suddenly and easily pivot to using all of their content to fuel AI and make millions. This is the same content they spent years on, that they intended for open use by peers in ways that objectively support humanity, and for which they had few storage/publishing/hosting options other than said publishers. Yes, they signed the ToS, and now they’re eating it. We’re evolving towards the future at breakneck pace – what’s next? they worry, what’s next?
(Comment 1/2)
Speaking of fearmongering, you note that:
Ignoring the false equivalence between getting inspiration at an art gallery and feeding millions of artworks into a non-human AI for automated, high-speed, legally dubious replication and derivation, copyright is how creative workers retain their careers and stay incentivized. Your Twitter experiences are anecdotal; the more general reality:
The above four points were taken from the Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (Jiang et al., 2023, sections 4.1 and 4.2).
Help me understand your viewpoint. Is copyright nonsensical? Are we hypocrites for worrying about the ways our hosts are using our produced goods? There is a lot of liability and a lot of worry here, but I’m having trouble reconciling your position: you seem to imply that this liability and worry are unfounded, yet the evidence seems to point elsewhere.
Thanks for talking with me! ^ᴗ^
(Comment 2/2)
I won’t say that AI is the greatest thing since sliced bread but it is here and it’s not going back in the bottle. I’m glad to see that we’re at least trying to give it accurate information, instead of “look at all this user data we got from Reddit, let’s have searches go through this stuff first!” Then some kid asks if it’s safe to go running with scissors and the LLM says “yes! It’s perfectly fine to run with sharp objects!”
The tech kinda really sucks full stop, but it’ll be marginally better if its information is at least accurate.
This could be true if they were to give more weight to academic sources, but I fear it will probably treat them like any other source, so a published paper and some moron on Reddit will still get the same say on whether the Earth is round.
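(To be concrete about what “more weight” could even mean mechanically: here’s a minimal sketch of mixture-weighted sampling over training corpora. The corpus names, documents, and weights are all made up for illustration – I have no idea what any real training pipeline actually does.)

```python
import random

# Hypothetical corpora; the names and documents are placeholders.
corpora = {
    "peer_reviewed_papers": ["paper_a", "paper_b", "paper_c"],
    "reddit_comments": ["comment_a", "comment_b", "comment_c"],
}

# Hypothetical mixture weights: papers drawn 5x as often as Reddit posts.
mixture_weights = {"peer_reviewed_papers": 5.0, "reddit_comments": 1.0}

def sample_document():
    # Pick a source according to the mixture weights, then pick a
    # document uniformly at random within that source.
    sources = list(corpora)
    source = random.choices(
        sources, weights=[mixture_weights[s] for s in sources]
    )[0]
    return source, random.choice(corpora[source])

if __name__ == "__main__":
    for _ in range(5):
        print(sample_document())
```

If instead every document is sampled uniformly from one big pool – which is the fear – then the Reddit moron and the published paper really do get equal say.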
I promise you, they absolutely will treat it as equally valid input data.
Hmm, that makes sense. The toothpaste can’t go back into the tube, so they’re going a bit deeper to get a bit higher.
That does shift my opinion a bit – something bad is at least being made better – although the “let’s use more content-that-wants-to-be-open in our closed-content” move is still a source of consternation.
Not wrong there; it’s one of the things that makes me critical of genAI.