Comment on Advice - Getting started with LLMs
xcjs@programming.dev 7 months ago
No offense intended, but are you sure it’s using your GPU? Twenty minutes is about how long my CPU-locked instance takes to run some 70B-parameter models.
On my RTX 3060, I generally get responses in seconds.
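If you want to confirm the GPU is actually being used, the quickest check is to watch nvidia-smi while a response is generating and see whether GPU memory and utilization jump. If your setup happens to be PyTorch-based (that's just an assumption; Ollama/llama.cpp installs work differently), a tiny sanity check like this sketch using the standard torch API will also tell you whether CUDA is visible at all:

```python
# Minimal sketch: confirm PyTorch can see a CUDA device (assumes a PyTorch-based runner).
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    # Prints the detected card, e.g. "NVIDIA GeForce RTX 3060"
    print("Device:", torch.cuda.get_device_name(0))
```

If this prints False, the model is almost certainly falling back to CPU, which would explain twenty-minute responses.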
kiku123@feddit.de 7 months ago
I agree. My 3070 runs the 8B Llama 3 model in about 250 ms, especially for short responses.