I see ads for paid prompting courses a bunch. I recommend having a look at this guide page first. It also has some other info about LLMs.
I see ads for paid prompting courses a bunch. I recommend having a look at this guide page first. It also has some other info about LLMs.
You are completely right and it is mostly about trial and error. I’d assume these courses mainlyl teach things you can do with the big bois, those being by the obvious big evil AI companies. It’s very much an overblown topic and companies pretend it’s actually a fancy thing to learn and be good at.
The linked guide just explains the basic concepts of few shot prompting, CoT and RAG and stuff. Even these terms I feel, make the topic more complicated than it is. Could literally be summarized to
I skimmed the link you provided. Yes, that seems to include solid advice. Good for beginners, nothing new to me, since I (somewhat) followed the AI hobby enthusiast community since LLaMA1. But I have to look up what writing all caps does, I suppose that severely messes with the tokenizer?! But I’ve seen the big companies do this, too, in some of the leaked prompts.
And I guess with the “early” models from 2023 and before, it was much more important to get the prompts exactly right. Not confuse it etc. That got way better as models improved substancially, and now these models (at least) get what I want from them almost every time. But I think we picked the low hanging fruits and we can’t expect the models itself to improve as fast as they did in the past. So it’s down to prompting strategies and other methods to improve the performance of chatbots.
ooh, leaked prompts? which ones are you talking about?
Seems I forgot to bookmark what I read… There are people telling the chatbots to repeat the previous text, so it outputs the prompt. It got way more complicated than that, but there are a lot of methods and leaked prompts.