I’m new to the field of large language models (LLMs) and I’m really interested in learning how to train and use my own models for qualitative analysis. However, I’m not sure where to start or what resources would be most helpful for a complete beginner. Could anyone provide some guidance on the best way to get started with LLM training and usage? Specifically, I’d appreciate insights on:

- learning resources or tutorials
- tips on preparing datasets
- common pitfalls or challenges
- any other general advice for someone just embarking on this journey.
Thanks!
Zworf@beehaw.org 5 months ago
Training your own model from scratch will be very difficult. You would need to gather an enormous amount of data just to get a model with basic language understanding.
What I would do (and am doing) is take something like Llama 3 or Mistral and add your own content using RAG (retrieval-augmented generation) techniques; a rough sketch of what that can look like is below.
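Something like this is the basic loop, as a minimal sketch. It assumes a local Ollama server plus the `ollama` Python package, and the chunked documents, model names, and question are just placeholders; a real pipeline would use a proper vector store and top-k retrieval.

```python
# Minimal RAG sketch: find the most relevant snippet from your own content
# and hand it to a local model as context. Assumes a running Ollama server
# and the `ollama` Python package; model names and data are examples.
import ollama

# Your own content, pre-chunked into small passages (placeholder data).
documents = [
    "Interview 12: the participant describes distrust of automated scoring.",
    "Field notes, day 3: recurring theme of workload pressure among nurses.",
    "Focus group B: several members mention privacy concerns about the app.",
]

def embed(text: str) -> list[float]:
    # nomic-embed-text is one commonly used local embedding model.
    return ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

doc_vectors = [embed(d) for d in documents]

def ask(question: str) -> str:
    q_vec = embed(question)
    # Pick the single most similar chunk; real pipelines retrieve top-k
    # chunks from a vector database instead.
    best = max(range(len(documents)), key=lambda i: cosine(q_vec, doc_vectors[i]))
    prompt = (
        f"Use only this context to answer.\n\nContext:\n{documents[best]}\n\n"
        f"Question: {question}"
    )
    reply = ollama.chat(model="llama3", messages=[{"role": "user", "content": prompt}])
    return reply["message"]["content"]

print(ask("What concerns did participants raise about technology?"))
```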
BaroqueInMind@lemmy.one 5 months ago
Ollama is so fucking slow. Even with a 16-core overclocked Intel CPU, 64 GB of RAM, and an Nvidia 3080 with 10 GB of VRAM, using a 22B parameter model, generating a simple haiku takes 20 minutes.
xcjs@programming.dev 5 months ago
No offense intended, but are you sure it’s using your GPU? Twenty minutes is about how long my CPU-locked instance takes to run some 70B parameter models.
On my RTX 3060, I generally get responses in seconds.
xcjs@programming.dev 5 months ago
Ok, so using my “older” 2070 Super, I was able to get a response from a 70B parameter model in 9-12 minutes. (Llama 3 in this case.)
I’m fairly certain that you’re using your CPU or having another issue. Would you like to try and debug your configuration together?
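A quick first check is whether the model weights actually landed in VRAM. Here's a small sketch that asks the local Ollama server directly; the `/api/ps` endpoint, default port, and the `size`/`size_vram` fields are assumptions based on my own install, so adjust if yours differs.

```python
# Check how much of each loaded model sits in VRAM vs. system RAM by
# querying the local Ollama server (default port 11434). The /api/ps
# endpoint and size/size_vram fields are assumptions from my setup.
import requests

resp = requests.get("http://localhost:11434/api/ps", timeout=5)
resp.raise_for_status()

for model in resp.json().get("models", []):
    total = model.get("size", 0)
    in_vram = model.get("size_vram", 0)
    pct = 100 * in_vram / total if total else 0
    print(f"{model.get('name')}: {pct:.0f}% of weights in VRAM")
    # Anything well below 100% means layers were offloaded to the CPU,
    # which is usually why generation crawls.
```

A 22B model generally won't fit in 10 GB of VRAM even quantized, so I'd expect most of it to be running on your CPU.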
Zworf@beehaw.org 5 months ago
Hmmm, weird. I have a 4090 / Ryzen 5800X3D with 64 GB of RAM and it runs really well. Admittedly it’s the 8B model, because the intermediate sizes aren’t out yet and 70B simply won’t fly on a single GPU.
But it really screams. Much faster than I can read.
PS: Ollama is just llama.cpp under the hood.
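If you want to put numbers on "faster than I can read", here's a rough tokens-per-second check against the local server. The `eval_count`/`eval_duration` response fields (durations in nanoseconds) are assumptions from my own setup, so treat this as a sketch.

```python
# Rough tokens-per-second measurement against a local Ollama server.
# eval_count/eval_duration (nanoseconds) are assumed response fields.
import requests

r = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Write a haiku about GPUs.", "stream": False},
    timeout=600,
)
r.raise_for_status()
data = r.json()

tokens = data.get("eval_count", 0)
seconds = data.get("eval_duration", 0) / 1e9
if seconds:
    print(f"{tokens} tokens in {seconds:.1f}s -> {tokens / seconds:.1f} tok/s")
print(data.get("response", ""))
```

On a healthy GPU setup with an 8B model you should see double-digit tok/s; single digits or worse usually means CPU fallback.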