I am a teacher and I have a LOT of different literature material that I wish to study, and play around with.
I'd like a self-hosted, reasonably smart LLM into which I can feed all the textual material I've generated over the years. I'd be interested to see whether the model can answer some of the subjective questions I've set on my exams, or write short paragraphs about the topics I teach.
In terms of hardware, I have an old Lenovo laptop with an NVIDIA graphics card.
P.S.: I'm not very technically experienced. I run Linux and can do very basic stuff. I've never self-hosted anything other than LibreTranslate and a Pi-hole!
What I’m using is Text Generation WebUI with an 11B GGUF model from Huggingface. I offloaded all layers to the GPU, which uses about 9GB of VRAM. With GGUF models, you can choose how many layers to offload to the GPU, so it uses less VRAM. Layers that aren’t offloaded use system RAM and the CPU, which will be slower.
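To get a feel for what partial offloading costs, here's a back-of-envelope sketch. All the numbers (file size, layer count, overhead) are illustrative assumptions, not measured values for any particular model:

```python
# Rough estimate of VRAM used when offloading only some GGUF layers
# to the GPU: a proportional share of the model weights, plus a fixed
# overhead for CUDA buffers and scratch space. Numbers are assumptions.

def estimate_vram_gb(model_file_gb, total_layers, offloaded_layers,
                     overhead_gb=0.75):
    """Approximate VRAM use for a partially offloaded GGUF model."""
    weight_share = model_file_gb * (offloaded_layers / total_layers)
    return weight_share + overhead_gb

# e.g. a hypothetical ~8 GB quantized 11B GGUF with 48 layers, half offloaded:
print(round(estimate_vram_gb(8.0, 48, 24), 2))  # 4.75
```

The layers you don't offload stay in system RAM, so you trade speed for fitting the GPU you have.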
Probably better to ask on !localllama@sh.itjust.works. Ollama should be able to give you a decent LLM, and RAG (Retrieval Augmented Generation) will let it reference your dataset.
The only issue is that you asked for a smart model, which usually means a larger one, and the RAG portion consumes even more memory, which may be more than a typical laptop can handle. Smaller models have a higher tendency to hallucinate, i.e. produce incorrect answers.
Short answer - yes, you can do it. It’s just a matter of how much RAM you have available and how long you’re willing to wait for an answer.
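To make the RAG idea concrete: before the LLM ever sees your question, a retrieval step picks the most relevant chunks of your material and pastes them into the prompt. A real setup uses an embedding model for similarity; this toy sketch uses a plain bag-of-words cosine just to show the shape of it:

```python
# Minimal sketch of the "R" in RAG: retrieve the most relevant note for
# a question, then stuff it into the LLM prompt. Toy similarity only;
# real RAG systems use an embedding model instead.
import math
from collections import Counter

def similarity(a: str, b: str) -> float:
    """Cosine similarity between word-count vectors of two texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def retrieve(question: str, notes: list[str], k: int = 1) -> list[str]:
    """Return the k notes most similar to the question."""
    return sorted(notes, key=lambda n: similarity(question, n), reverse=True)[:k]

notes = [
    "Romanticism emphasized emotion and nature over Enlightenment reason.",
    "The sonnet is a 14-line poem, often in iambic pentameter.",
    "Realism in literature depicts everyday life without idealization.",
]
question = "How many lines does a sonnet have?"
context = retrieve(question, notes)[0]
prompt = f"Using only this context:\n{context}\n\nAnswer the question: {question}"
print(context)
```

The retrieved chunk plus the question is what actually gets sent to the model, which is why RAG lets a small model "know" a large corpus without loading it all into context.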
I’m in the early stages of this myself and haven’t actually run an LLM locally yet, but the term that steered me in the right direction for what I was trying to do was “RAG”: Retrieval-Augmented Generation.
ragflow.io (terrible name, but a good product) seems like a good starting point. It’s mainly set up for APIs at the moment, but I found this link for local LLM integration and I’m going to play with it later today: https://github.com/infiniflow/ragflow/blob/main/docs/guides/deploy_local_llm.md
I’d recommend trying LM Studio (https://lmstudio.ai/). You can use it to run language models locally. It has a pretty nice UI and it’s fairly easy to use.
I will say, though, that it sounds like you want to feed perhaps a large number of tokens into the model, which will require a model made for a large context length and may require a pretty beefy machine.
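A quick way to sanity-check whether your material will even fit in a model's context is the common rough heuristic of about 4 characters per token. Real counts depend on the model's tokenizer; this is only a ballpark:

```python
# Estimate token count with the ~4-characters-per-token rule of thumb.
# Only a ballpark: the real count depends on the model's tokenizer.

def estimate_tokens(text: str) -> int:
    return max(1, round(len(text) / 4))

essay = "word " * 2000  # stand-in for ~2000 words of teaching notes
print(estimate_tokens(essay))  # 2500 -- over half of a 4k context already
```

So even one long exam paper can eat most of a typical 4k–8k context window, which is exactly why people reach for RAG instead of pasting everything in.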
You need more than an LLM to do that. You need a cognitive architecture around the model that includes RAG to store and retrieve the data. I would start with an agent framework that already includes the workflow you’re asking for. Unfortunately I don’t have a name ready for you, but take a look here: https://github.com/slavakurilyak/awesome-ai-agents
It depends on the exact specs of your old laptop. Especially the amount of RAM and VRAM on the graphics card. It’s probably not enough to run any reasonably smart LLM aside from maybe Microsoft’s small “phi” model.
So unless it’s a gaming machine with 6GB+ of VRAM, the graphics card will probably not help at all. Without it, inference is going to be slow. For that kind of computer, I recommend projects that are based on llama.cpp or use it as a backend; it’s the best/fastest way to do inference on slow computers and CPUs.
Alternatively, you could use online services or rent a cloud machine with a beefy graphics card by the hour (or minute).
Reasonably smart… that would preferably be a 70B model, but maybe phi3-14b or llama3-8b could work; they’re rather impressive for their size.
For just the model, if one of the small ones works, you probably need 6+ GB of VRAM. For a 70B model you need roughly 40GB.
And then there’s the context. Most models are optimized for around 4k to 8k tokens. One token is roughly 3–4 characters, so a word works out to a bit more than one token on average. The VRAM needed for the context varies a bit, but it’s not trivial: for 4k tokens, figure roughly half a gigabyte to a gigabyte of VRAM.
As you go to higher context sizes, the VRAM requirement for the context starts to eclipse the model’s VRAM cost, and you will need models specifically trained for that big a context to avoid going off the rails.
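The context-VRAM figures above can be sanity-checked with the standard KV-cache formula. The architecture numbers below are Llama-3-8B-style assumptions (32 layers, 8 KV heads, head dimension 128, fp16 cache), so treat the results as a rough sketch:

```python
# Back-of-envelope KV-cache size: the VRAM the *context* costs on top
# of the model weights. Defaults are Llama-3-8B-style assumptions.

def kv_cache_gib(context_tokens, n_layers=32, n_kv_heads=8,
                 head_dim=128, bytes_per_elem=2):
    # factor of 2 for the separate K and V tensors, per layer, per token
    per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem
    return context_tokens * per_token / 2**30

print(round(kv_cache_gib(4096), 2))   # 0.5  -- 4k context
print(round(kv_cache_gib(32768), 2))  # 4.0  -- 32k context
```

That 0.5 GiB at 4k matches the half-a-gig figure above, and you can see how a 32k context alone would rival the weights of a small quantized model.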
So no, you’re not loading all your notes directly into the context, and you won’t have a truly smart model.
For your hardware and use case… try phi3-mini with a RAG system as a start.
There are a few.
Very easy if you set it up with Docker.
Best is probably just Ollama with danswer as a frontend. Danswer will do all the RAG stuff for you, like managing/uploading documents and so on.
Ollama is becoming the standard self-hosted LLM runner, and you can add any models you want / can fit.
https://ollama.com/blog/ollama-is-now-available-as-an-official-docker-image
I watched NetworkChuck’s tutorial and just did what he did, but on my MacBook. Any recent MacBook (M-series) will suffice. https://youtu.be/Wjrdr0NU4Sk?si=myYdtKnt_ks_Vdwo
You would need a 24GB VRAM card to even start this thing up, and it would probably still yield poor results.
They didn’t even mention a specific model. Why would you say they need 24GB to run any model? That’s just not true.
I didn’t say any model. Based on what they’re asking for, they can’t just run this on an old laptop.
Jan.ai might be a good starting point, or Ollama. There’s https://tales.fromprod.com/2024/111/using-your-own-hardware-for-llms.html which has some guidance for using Jan.ai for both server and client.
The easiest way to run local LLMs on older hardware is Llamafile https://github.com/Mozilla-Ocho/llamafile
For non-NVIDIA GPUs, WebGPU is the way to go: https://github.com/abi/secret-llama