  • First off, as someone who has programmed GPT stuff since well before ChatGPT, we don’t even need to train our own model. That would be overly expensive and unnecessary for our purpose. What is much smarter in this case is to take all of the Marxist works and let a chatbot access their contents using semantic search. The way we do this is to split the works into small chunks, which we then convert into embedding vectors. When the user sends a message to the chatbot, the message and its context are also converted into an embedding vector. We then compute the dot product between the user’s message vector and the chunk vectors to find the chunks most relevant to the question the user has asked. A pre-trained model can then use the retrieved chunks to answer the user’s question (a rough sketch of this retrieval step is below).

    Of course, training one’s own model can be good if we want it to be even more accurate and familiar with the material; a good starting point, however, would be to use semantic search.
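
    A minimal sketch of that retrieval loop in Python, assuming the sentence-transformers library for the embeddings. The model name ("all-MiniLM-L6-v2"), the file name (capital_vol1.txt), the fixed-size chunking, and the top-5 cutoff are placeholder choices for illustration, not the only way to do it:

```python
# Sketch of chunk -> embed -> dot-product retrieval, as described above.
# Assumptions: works are plain .txt files; "all-MiniLM-L6-v2" is just one
# possible embedding model; chunking by character count is the simplest option.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def chunk_text(text: str, chunk_size: int = 500) -> list[str]:
    """Split a work into small, roughly fixed-size chunks."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

# 1. Chunk the works and embed every chunk once, up front.
works = [open(path, encoding="utf-8").read() for path in ["capital_vol1.txt"]]
chunks = [c for work in works for c in chunk_text(work)]
chunk_vectors = model.encode(chunks, normalize_embeddings=True)

# 2. At question time, embed the user's message the same way.
question = "What does Marx mean by commodity fetishism?"
question_vector = model.encode(question, normalize_embeddings=True)

# 3. Dot product against every chunk; with normalized vectors this is
#    cosine similarity, so higher scores mean more relevant chunks.
scores = chunk_vectors @ question_vector
top_chunks = [chunks[i] for i in np.argsort(scores)[::-1][:5]]

# 4. The top chunks would then be passed to a pre-trained chat model as
#    context so it can answer the question from the source texts.
print(top_chunks[0][:200])
```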