DeepSeek’s Popular AI App Is Explicitly Sending US Data to China

If anyone here wants to try the full-size model but their hardware is not beefy enough for it, they can try these quantized versions:

https://unsloth.ai/blog/deepseekr1-dynamic

We provide 4 dynamic quantized versions. The first 3 use an importance matrix to calibrate the quantization process (imatrix via llama.cpp) to allow lower-bit representations. The last 212GB version is a general 2-bit quant with no calibration done.

| MoE Bits | Disk Size | Quality | Link |
|----------|-----------|---------|------|
| 1.58-bit | 131 GB | Fair | DeepSeek-R1-UD-IQ1_S |
| 1.73-bit | 158 GB | Good | DeepSeek-R1-UD-IQ1_M |
| 2.22-bit | 183 GB | Better | DeepSeek-R1-UD-IQ2_XXS |
| 2.51-bit | 212 GB | Best | DeepSeek-R1-UD-Q2_K_XL |

You can view our full R1 collection of GGUFs, including 4-bit and distilled versions and more: huggingface.co/collections/unsloth/deepseek-r1
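For reference, here is a minimal sketch of how one of these quants could be pulled down with huggingface_hub and then pointed at llama.cpp. The repo id and file pattern below are assumptions for illustration, so check the collection linked above for the exact names.

```python
# Sketch: download one dynamic quant and run it with llama.cpp.
# The repo id "unsloth/DeepSeek-R1-GGUF" and the "*UD-IQ1_S*" pattern are
# assumed -- verify the actual repo and filenames on the Hugging Face collection.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="unsloth/DeepSeek-R1-GGUF",   # assumed repo id
    allow_patterns=["*UD-IQ1_S*"],        # fetch only the 1.58-bit (131 GB) shards
    local_dir="DeepSeek-R1-GGUF",
)
print(f"Downloaded shards to: {local_dir}")

# After downloading, point llama.cpp at the first shard, e.g.:
#   ./llama-cli -m DeepSeek-R1-GGUF/<first-shard>.gguf -p "Hello" -n 128
# llama.cpp picks up the remaining shards of a split GGUF automatically.
```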

https://www.reddit.com/r/selfhosted/comments/1ic8zil/yes_you_can_run_deepseekr1_locally_on_your_device/
