First off, sorry if this is the wrong community to post to - I’ll move it elsewhere if it doesn’t fit here.

My best friend is, I feel, quite often a contrarian for the sake of being a contrarian. Discussing politics, veganism, the problems with using Amazon, what have you, with him is nigh impossible because he insists on his opinion and won’t budge. I feel like he considers himself superior to other people, or at least to me: he won’t change his mind, doesn’t hear other sides, and argues for the sake of arguing.

Now, in a recent discussion, I asked him if he knew why images weren’t displaying properly in my Firefox-fork browser (Mull). He gave an answer and asked why I would use a custom browser instead of Firefox itself, to which I responded that it’s more privacy-focused and that I didn’t like Mozilla’s implementation of AI in their browser.

Long story short, it devolved into a lengthy discussion about AI: how the fear of AI is based on ignorance and a lack of knowledge; that it’s fine for AI to be used in creative projects because in most cases it’s an assisting tool that aids creativity, doesn’t steal jobs, etc.; essentially, that it’s just a tool, to be used like a hammer would be.

What pisses me off the most about all this is that he subtly implies that I don’t know enough about the subject to have an opinion on it and that I don’t have any sources to prove my points so they’re essentially void.

How do I deal with this? Whatever facts I bring up he just shrugs off with “counter”-arguments. I’ve sent him articles that he doesn’t accept as sources. This has been going on for a couple of hours now and I don’t know what to tell him. Do you guys have sources I could shove in his face? Any other facts I should throw his way?

Thank you in advance

Edit: A thing to add: I wasn’t trying to convince him that AI itself is bad - there are useful applications of AI that I won’t ignore. What concerned me is the way AI is crammed into any and all products nowadays that don’t need AI to function at all, like some AI-powered light bulbs or whatever; that creative jobs and the arts are actively harmed by people scraping data and art from artists to create derivative “art”; and that it’s used to influence politics (Trump, Gaza). These things. The unmonitored way AI is being used is just dangerous, I feel.

  • moonlight@fedia.io · 3 months ago

    I read the first article, and I recommend you do as well, as it’s the best take I have seen on image generation.

    It sounds like you and your friend both have your minds made up already, but reality is more nuanced, and the truth is somewhere in between.

    “AI art” isn’t copyright infringement, or “stealing”, but it’s also not art. It’s a neutral technology.

    I agree it is being used unethically (and overused) by corporations, but it’s fundamentally a problem with how our society uses and reacts to it. Like so many other new technologies, the true issue is with capitalism, not the tech itself.

    • Snot Flickerman · 3 months ago

      “Like so many other new technologies, the true issue is with capitalism, not the tech itself.”

      Probably the best single-line reduction of the whole issue here in this thread. Well said.

      However, I do think it’s also cultural within the tech companies. Modern tech culture was born of an attitude 100% rooted in “well, the law says we can’t do this, so we’ll do this instead, which is different on a technical and legal level but achieves the same end result.”

      This was heavily evident in early piracy, which went from the centralized servers of Napster and Kazaa to the decentralized nature of BitTorrent entirely in response to civil suits over piracy. It was an arms race. Soon enough the copyright holders responded by hiring third parties to hide in torrent swarms, log IPs, and hit the people “associated” with those IPs with suits for sharing trivial amounts of copyrighted data with the third party. The response to that was private trackers, and eventually, streaming.

      Each step was a technical response to an attempt by society to legally regulate them. Just find a new technical way that’s not regulated yet!

      The modern tech companies never lost that ethos of giving technical responses to route around new legal regulation. Which, in itself, is further enabled by capitalism, as you astutely pointed out.