- cross-posted to:
- 404media@rss.ponder.cat
How are we supposed to police this? A vast global Orwellian monitoring system seems even worse.
Don’t let the needles of truth be a casualty within the haystacks of spam by censoring anything. Instead, offer website filtering blacklists or spam-filtering options on open communication forums (FYI, new Twitter doesn’t quite count, as it still censors legal content). But be sure to include another feed that shows everything that was ‘filtered out’, so that the user accounts posting spam, and the accuracy of the filter itself, can be audited if need be (whether it’s a filter made by a user’s trusted curators flagging stuff on their own behalf and sharing that filtering, or some automated algorithm that picks up on the particular user’s feedback from the ‘hide post’ and ‘report post’ interactions).
As long as the option for ‘non-association’ is not being blocked, no policing of spam is EVER necessary, unless it is evil trying to gain that power so it can police non-spam too and get away with its evil. So prepare for, and expect, that evil will go after voluntary spam-filtering platforms to grant itself the excuse that ‘information policing is necessary’, in order to excuse its policing of information that exposes it for what it is.
Edit: illegal stuff (e.g. doxxing) being left uncensored is a problem that needs to be handled in another way than spam filtering: warrants can be issued to the platform admin for a crime committed on the platform, in both the case of a centralised platform domain and a federated instance domain. Then the crime can be auto-filtered and the criminal’s address can be obtained.
The same way you police photoshopped images, you don’t because it isn’t an issue.
The various governments promoting this as a problem in need of policing are simply manufacturing consent because they want to implement a vast global Orwellian monitoring system.
I can forgive some of the uses for text based generative AI. There are definitely some use cases that I have seen. But can we all agree that image and video generation has been, like, 99% a bad thing? Since its inception I have yet to see a use that wasn’t inflammatory, ragebait, non-consensual porn, or used to scam or trick people.
Replace “AI” with “Photoshop”, and your comment would have fit in well two decades ago.
I enjoy some of the video use. The Why Files use it. There’s without a doubt a lot of negative uses but there’s some good legitimate ones too.
Pretty sure the word is “rape”.
It’s Jeopardy; you have to say “What is” before your answer.