coldsideofyourpillow@lemmy.cafe 2 months ago

By running it locally. The local models don’t have any censorship.

vvilld@lemmy.world 2 months ago

I meant, how does one run it locally? I see a lot of people saying to just “run it locally,” but for someone without a background in coding that doesn’t really mean much.

coldsideofyourpillow@lemmy.cafe 2 months ago
You don’t need a background in coding at all. In fact, the spaces of machine learning and programming are almost completely separate.
- Download Ollama.
- Depending on the power of your GPU, run one of the following commands in a terminal:
  - DeepSeek-R1-Distill-Qwen-1.5B: `ollama run deepseek-r1:1.5b`
  - DeepSeek-R1-Distill-Qwen-7B: `ollama run deepseek-r1:7b`
  - DeepSeek-R1-Distill-Llama-8B: `ollama run deepseek-r1:8b`
  - DeepSeek-R1-Distill-Qwen-14B: `ollama run deepseek-r1:14b`
  - DeepSeek-R1-Distill-Qwen-32B: `ollama run deepseek-r1:32b`
  - DeepSeek-R1-Distill-Llama-70B: `ollama run deepseek-r1:70b`

Bigger models mean better output, but also longer generation times.
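Once the model downloads, `ollama run` drops you into an interactive chat right in the terminal, so you can just start typing. If you ever want to script it instead, Ollama also serves a local HTTP API. A minimal sketch, assuming the default port 11434 and that you already pulled `deepseek-r1:7b`:

```sh
# Query the locally running model through Ollama's HTTP API.
# Assumes the Ollama service is running in the background.
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:7b",
  "prompt": "In one paragraph, what does it mean to run a model locally?",
  "stream": false
}'
```

The reply comes back as JSON with the generated text in the `response` field, and nothing leaves your machine; the whole exchange happens on localhost.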
Charlxmagne@lemmy.world 2 months ago
They do by default, but like I said, it’s open source, so you can tweak it to not be.
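To give one concrete sense of “tweak”: the weights are open, so people can fine-tune them, but the simplest knob at the Ollama level is baking a custom system prompt into a Modelfile. A minimal sketch (the name `my-deepseek` and the prompt text are just placeholders; this steers the model’s behavior rather than retraining anything):

```sh
# Sketch: wrap deepseek-r1:7b with a custom system prompt via a Modelfile.
cat > Modelfile <<'EOF'
FROM deepseek-r1:7b
SYSTEM "You are a direct assistant. Answer questions plainly and factually."
EOF

# Build the customized model under a new name, then chat with it.
ollama create my-deepseek -f Modelfile
ollama run my-deepseek
```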