You can run the real deal big boi R1 671B locally off a fast NVMe SSD, even without enough RAM+VRAM to hold the 212GB of dynamically quantized weights. No, it is not swap, and it won't burn through your SSD's write endurance — the weights are only ever read, never written, and flash wear comes from writes/erases. No, this is not a distill model. It works fairly well despite the quantization (check the unsloth blog for details on how they did that).
The basic idea is that most of the model is not loaded into RAM on startup, but mmap'd. The KV cache then takes up some RAM, and most of the remaining system RAM is left free to serve as disk cache for whichever experts/weights are currently hottest. I can watch the model slow down while the cache dumps and refills when it switches tasks, e.g., over to counting words.
It is faster on my system using the GPU, but not by much. In theory it may be faster overall to dedicate the GPU's PCIe lanes to more NVMe storage instead. Curious whether anyone has a fast enough random-read IOPS array to try it?
Notes and example generations below.