Comment on Hexadecimal
How?
coldsideofyourpillow@lemmy.cafe 1 week ago
By running it locally. The local models don’t have any censorship.
vvilld@lemmy.world 1 week ago
I meant, how does one run it locally. I see a lot of people saying to just “run it locally” but for someone without a background in coding that doesn’t really mean much.
coldsideofyourpillow@lemmy.cafe 1 week ago
You don’t need a background in coding at all. In fact, the spaces of machine learning and programming are almost completely separate.
Download Ollama.
Depending on the power of your GPU, run one of the following commands:
DeepSeek-R1-Distill-Qwen-1.5B:
ollama run deepseek-r1:1.5b
DeepSeek-R1-Distill-Qwen-7B:
ollama run deepseek-r1:7b
DeepSeek-R1-Distill-Llama-8B:
ollama run deepseek-r1:8b
DeepSeek-R1-Distill-Qwen-14B:
ollama run deepseek-r1:14b
DeepSeek-R1-Distill-Qwen-32B:
ollama run deepseek-r1:32b
DeepSeek-R1-Distill-Llama-70B:
ollama run deepseek-r1:70b
Bigger models generally mean better output, but also longer generation times and higher VRAM requirements.
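For anyone who wants the whole flow on one screen, here is a minimal session sketch. The install script and the list/pull/rm subcommands are standard Ollama CLI; the 8B tag is just one of the sizes above, picked as an example.

# Install Ollama (official install script for Linux; macOS and Windows installers are on ollama.com)
curl -fsSL https://ollama.com/install.sh | sh

# Download the model and drop into an interactive chat (pick the tag that fits your GPU)
ollama run deepseek-r1:8b

# Housekeeping
ollama list                   # show the models you have downloaded
ollama pull deepseek-r1:14b   # fetch a different size without starting a chat
ollama rm deepseek-r1:8b      # delete a model to free disk space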
Charlxmagne@lemmy.world 1 week ago
They do by default, but like I said, it’s open source, so you can tweak it not to be.
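For the record, the lightest-touch tweak Ollama supports is a Modelfile that layers your own system prompt and sampling parameters on top of a base model. This is only a sketch: the name tweaked-r1 is made up for the example, and a system prompt merely steers behavior at runtime; anything trained into the weights would need actual fine-tuning or a community-modified build.

# Modelfile — a custom variant built on one of the tags above
FROM deepseek-r1:8b
PARAMETER temperature 0.7
SYSTEM """
You are a direct, factual assistant. Answer plainly and completely.
"""

# Build and run the variant (tweaked-r1 is a hypothetical name)
ollama create tweaked-r1 -f Modelfile
ollama run tweaked-r1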