I see ads for paid prompting courses a bunch. I recommend having a look at this guide page first. It also has some other info about LLMs.

  • Smorty [she/her]OP · 13 days ago

    You are completely right, and it is mostly about trial and error. I’d assume these courses mainly teach things you can do with the big bois, those being from the obvious big evil AI companies. It’s very much an overblown topic, and companies pretend it’s actually a fancy thing to learn and be good at.

    The linked guide just explains the basic concepts of few-shot prompting, CoT, RAG and stuff. Even these terms, I feel, make the topic sound more complicated than it is. It could literally be summarized as:

    • Use examples of what you want
    • Use near-zero temperature for almost everything
    • For complex tasks, tell it to provide its internal thought process before giving the answer (or just use the QwQ model)
    • maybe SCREAM AT THE LLM IN ALLCAPS if something is really important
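
    The bullets above can be sketched as an OpenAI-style chat request. This is just an illustration of how the tips map onto a request payload — the model name is a placeholder, and nothing is actually sent anywhere; the function only builds the dict:

    ```python
    # Minimal sketch: few-shot examples, near-zero temperature,
    # optional chain-of-thought prefix, and ALLCAPS for the part
    # that really matters. "local-model" is a placeholder name.

    def build_request(system: str, examples: list[tuple[str, str]],
                      task: str, cot: bool = False) -> dict:
        messages = [{"role": "system", "content": system}]
        # Few-shot: show the model examples of exactly what you want.
        for user_msg, assistant_msg in examples:
            messages.append({"role": "user", "content": user_msg})
            messages.append({"role": "assistant", "content": assistant_msg})
        if cot:
            # Chain of thought: for complex tasks, ask for reasoning first.
            task = "Think step by step, then give the final answer.\n" + task
        messages.append({"role": "user", "content": task})
        return {
            "model": "local-model",  # placeholder
            "temperature": 0.0,      # near-zero temperature for almost everything
            "messages": messages,
        }

    req = build_request(
        "Extract the city name. ANSWER WITH THE CITY NAME ONLY.",
        [("I flew from Berlin last week.", "Berlin"),
         ("We are moving to Osaka in May.", "Osaka")],
        "My cousin just got back from Nairobi.",
    )
    print(req["temperature"])  # 0.0
    ```

    The same dict would work with any OpenAI-compatible endpoint (llama.cpp server, Ollama, etc.), which is the point: the "course material" is really just a handful of message-layout conventions.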
    • hendrik@palaver.p3x.de · 13 days ago

      I skimmed the link you provided. Yes, that seems to include solid advice. Good for beginners, nothing new to me, since I’ve (somewhat) followed the AI hobbyist community since LLaMA1. But I have to look up what writing in all caps does; I suppose it severely messes with the tokenizer?! I’ve seen the big companies do this too, in some of the leaked prompts.

      And I guess with the “early” models from 2023 and before, it was much more important to get the prompts exactly right. Not confuse it etc. That got way better as models improved substancially, and now these models (at least) get what I want from them almost every time. But I think we picked the low hanging fruits and we can’t expect the models itself to improve as fast as they did in the past. So it’s down to prompting strategies and other methods to improve the performance of chatbots.