• 1 Post
  • 354 Comments
Joined 2 years ago
Cake day: June 12th, 2023

  • So are image classifier models. They were terrible for several years, and eventually improved. LLMs are pretty good at retrieval-augmented generation, which is probably the whole idea here.

    A lay person takes a picture of a beetle. They want to know what it is.

    An image classifier correctly identifies it as a longhorn, wood-boring beetle, narrowing it to 5 candidate species with greater than 80% probability.

    Human written and curated taxonomical descriptions are pulled out of a database.

    An LLM interprets the complicated language of taxonomy, defining terms and asking the user plain-language questions about the beetle.

    Maybe this makes the whole process more accessible for lay users. Maybe it helps people understand what questions to be asking for identification. I mean, I’m just guessing at the implementation (something like the sketch below), but it seems pretty logical to me.
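
    Purely as a guess at how that could be wired together, here is a minimal sketch. Every function, species list, and prompt in it is a made-up placeholder standing in for “some image classifier”, “some taxonomy database”, and “some LLM”, not any real product’s API:

    ```python
    from typing import Dict, List, Tuple

    def classify_image(photo_path: str, top_k: int = 5) -> List[Tuple[str, float]]:
        """Placeholder image classifier: candidate species with probabilities."""
        return [("Monochamus scutellatus", 0.34), ("Monochamus notatus", 0.22),
                ("Prionus laticollis", 0.14), ("Arhopalus rusticus", 0.07),
                ("Asemum striatum", 0.05)]

    def fetch_descriptions(species: List[str]) -> Dict[str, str]:
        """Placeholder lookup of human-written, curated taxonomic descriptions."""
        return {name: f"(curated taxonomic description of {name})" for name in species}

    def ask_llm(prompt: str) -> str:
        """Placeholder LLM call that turns jargon into plain-language questions."""
        return "Were the antennae noticeably longer than the body? ..."

    def identify_beetle(photo_path: str) -> str:
        # 1. The classifier narrows the photo to a handful of candidate species.
        candidates = classify_image(photo_path)
        # 2. Retrieval: pull the curated descriptions for those candidates.
        descriptions = fetch_descriptions([name for name, _ in candidates])
        # 3. The LLM is grounded in the retrieved text (the RAG part) and asks
        #    the photographer plain-language questions to narrow things further.
        prompt = ("Candidate species: "
                  + ", ".join(f"{n} ({p:.0%})" for n, p in candidates)
                  + "\n\n" + "\n".join(descriptions.values())
                  + "\n\nExplain the differences in plain language and ask what "
                    "the photographer should check next.")
        return ask_llm(prompt)

    print(identify_beetle("beetle_photo.jpg"))
    ```

    The point is just the shape of the flow: the classifier narrows, the database supplies vetted text, and the LLM only translates and asks follow-up questions rather than identifying the beetle on its own.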




  • Now that it’s a couple days later, I think I might add a thought to this. It can be invalidating to have someone ask a victim to empathize with the bully without sufficiently recognizing the victim’s feelings.

    I used to do this sort of thing. I would try to be objective and logical. I learned that this just made my friends feel crazy, like they might be overreacting. I’ve learned to instead start by validating people’s feelings. I try to recognize their pain, discomfort, and anger first. And I never blame people for feeling that anger.


  • I second what the other commenters are saying about forgiveness being for you, not the other person, but can I just rant about how useless it is to say no one can truly be bad? It denies the basic utility of words, in my opinion. If someone is an ass, violent, greedy, etc., then they are bad. If they change their ways, they are good. We have words to describe greedy, violent assholes. We call them bad people. Hell, a murderous psychopath? Call them evil. It’s why we have adjectives.



  • Really? I mean, it’s melodramatic, but if you went back through time and asked writers and intellectuals if a machine could write poetry, solve mathematical equations, and radicalize people effectively enough to cause a minor mental health crisis, I think they’d be pretty surprised.

    LLMs do expose something about intelligence, which is that much of what we recognize as intelligence and reason can be distilled from sufficiently large quantities of natural language. Not perfectly, but isn’t it just the slightest bit revealing?


  • A child may hallucinate, lie, misunderstand, etc, but we wouldn’t say the foundations of a complete adult are not there, and we wouldn’t assess the child as not conscious. I’m not saying that LLMs are conscious because they say so (they can be made to say anything), but rather that it’s difficult to be confident that humans possess some special spice of consciousness that LLMs do not, because we can also be convinced to say anything.

    LLMs can reason (somewhat unreliably) with a fraction of a human brain’s compute power, while running on hardware that was made for graphics processing. Maybe they are conscious, but only in some pathetically small way, which will only become evident when they scale up, like a child.



  • Why can’t complex algorithms be conscious? In fact, AI models can be directed to reason about themselves, their context can be made persistent, and we can measure activation parameters showing that they are doing so.

    I’m sort of playing devil’s advocate here, but “Consciousness requires contemplation of self. Which requires the ability to contemplate.” is subjective, and nearly any AI model, even a rudimentary one, is capable of insisting that it contemplates itself.