I typed all this up for someone who posted a… very strangely written question regarding something they noticed with AI, but it appears to be deleted/removed… and, well, I wanna know if I got their question rephrased in a less… difficult to understand format. And then the answer to said question, because I find it interesting as well.
What I typed in response:
After parsing the insanity that is your writing style and… English as a second language? Allow me to confirm and summarize, because I find this question fascinating.
You’ve come across a LLM trend associated with said LLM being given instruction to describe/pretend to be a human named Delilah. LLM’s have gone viral at times for being instructed to formulate their output to sound like famous people with what appears to be resonable accuracy. But what goes into that ability is human words previously written associated with that person (or rather, their full name/titles/etc), as well as purposful restrictions given to the LLM directly (like, don’t output the N word).
Another lesser/totally unquantifiable factor in the output’s “tone” is result of errors in the blackbox algorithm that associates the “words” (not truly words I know, but essentially) in ways that aren’t what you’d expect.
(Here’s where my slight confusion mostly is) Each of these “factors” associated with the tone of the output… you’ve given names to? Or maybe my entirely self-researched knowledge has missed an agreed-upon naming system for these “characters”? I’m not quite sure.
And now your question and qualifers : Is there a pop culture/historic person or character named Delilah who is associated with furry stuff? Because you have been looking at some of the interesting mistaken/innacurate tones adopted by a LLM, and you’ve noticed that asl the LLM to output as if it was Delilah, and the results are furry related. And typically this sort of issue is mostly due to overlapping/similar names in the model’s training (as well as much stranger links without any explanation as to how they formed). And you’re research on “Delilah” hasn’t turned up anything giving reason for the LLM’s furry related output
… is that more or less what you are saying?
The user gave no reason to assume anything of that, nor did my description of the post, and may find the suggestion upsetting. Not going to go all PC 5-0 om you, but did want to distance myself from said assumption.
Sorry, I’m also not a native speaker. I don’t know what PC 5-0 means (political correctness police??). But if we want to know what happened, we need to know the circumstances. It’ll be a big difference which exact LLM model got used. We need to know the exact prompt and text that went in. And then we can start discussing why something happened. I’d say a good chance is the LLM has been made to output stories like that. Like it’s the case with LLM models that have been made for ERP. That’s why I said that.
Oh, and PC 5-0
PC - politically correct (a very… wide term)
5-0 is a colloquial term meaning Police.
Idk how non-native English your internet consumption is, but just straight up saying PC Police… is just something I’d like to not continue the use of.
Alright. Thx for the explanation. Yeah, I don’t have a filter. I just say whatever I think. Don’t really care if it’s offensive, just if things are true or not. Which is hard to tell in this case, since we don’t have enough information at hand. And LLMs are complex. Could be a fluke. Or whatever.
NP.
I watch my own words, but really try to not attempt to stifle some elses. You do you boo boo
Oh, Hmmm, thats a rather interesting route I didn’t think to go down. Most of my interests amd consumed content on AI has been through videos/explanations by people much smarter than I, and not really through use of any LLM’s in any sort of manner except a few exchanges with a few of OpenAi’s models over the last few years. Didn’t even consider that those sorts of things were a common thing.
My limited LLM knowledge does lead me to believe that both interpretations of the question would more or less boil down to the same thing though. A little search engine hunting of my own has also come up empty, and I’m curious if this one of those super interesting and crazy associated token relationships, or if there is just a crapload of content I can’t find.
I don’t think it’s necessary to distance oneself from doing said roleplay. I bet society is looking down on individuals doing it. But I think it’s perfectly fine. As long as it stays somewhat healthy and no one gets harmed.
There is a considerable group of people who do roleplay with AI. Or have “virtual girlfriends” or companions. It all started with Replika AI. Nowadays there are other services for that. And these LLMs are made to be lewd and suggestive. Including all kinds of niche interests. You’ll find several articles about it if you google virtual grilfrends or AI companions. It’s more or less being discussed in some niche areas of the internet, since there is a stigma to it.
Oh, I’d have no shame using that kind of thing, FFS I think having a fursona seems fun and liberating if not for the horrible amount of sweat that has gotta be involved.
I just was trying to say I made my attempt at rephrasing based on not knowing those were really a thing, and that additional possibility/context might have adjusted what I remember reading.
I refuse to Ick anyone’s consensual yum, even the really far out there stuff that isn’t for me, and I hate when others do. Being a transwoman, I’m no stranger to being reduced to a fetish to be icked.
Fuck that shit, and do it in a furry suit if you want lol
Agreed. That’s the spirit. I always don’t get why some people think differently. I mean other people’s life is none of my business. And the core rules to sexuality (and life in general) are very simple: We need consent from all parties and no one should get harmed. And that’s about it. Everything else is kind of individual and we all like different things.