Time is money, and my 5+ year old desktop is costing me a heap of it right now. The final straw came while processing several terabytes of stealer logs, which took forever. Meanwhile, Stefan has been flying through them with a massive NVMe drive on a fast motherboard.
So, in no particular order, here's what I need it to do:
- Read and write multi-terabyte files fast
- Run SQL Server locally for both development and querying of large data sets (the latter is especially memory intensive)
- Dev environment is largely Visual Studio, SSMS and other (less intensive) tools
- Run a gazillion simultaneous Chrome tabs 😛
And here's my current thinking:
- SSDs (Samsung 9100 PRO?):
  - Fast OS drive big enough for Win 11 plus apps
  - The biggest possible drive for processing the sorts of files described in the intro
  - I'll probably drop an existing 10TB mechanical drive in, purely for storage
- RAM:
  - As much as feasible without ridiculous costs (a lot of the data processing is done in-memory)
  - Probably don't need pricier ECC memory
- Processor:
  - I've had Intel but am open to change (Threadripper seems to have got a lot of love lately)
- GPU:
  - Needs to drive two 2560x1440 screens plus one 5120x1440
  - This isn't going to be used for gaming or hash cracking
And before you ask:
- Yes, it will run Windows, not macOS or Linux
- No, pushing all this to "the cloud" is not feasible
Suggestions, comments, questions and all else welcome, thanks everyone!

These builds generally look good to me, and are gonna be smoking machines compared to anything from 5 years ago, much less your old mid-level box.
But... what @kltye and you said here:
Yeah. The second spec sounds real light on disk. This is a big ol' deal for your type of workload, esp. the 3x vs 1x disk count. Big files plus SQL sounds like "analytic" workloads, which do high volumes of largely sequential IO and will likely often be IO bound, even if you have a lot of RAM. And I suspect the disk IO will really matter: with TB-scale files you're not going to have enough RAM to cache all of that, so I bet you'll be hitting the disk frequently and waiting on it. (At work, I do scientific programming and am a SQL Server DBA & developer.)
Having 3x SSDs instead of 1x doesn't just give you more disk space, it gives you faster disk. If you put them in RAID0 like @kltye says, that parallelizes your IO across the disks: they're all working on it at the same time, so their IO speeds are roughly additive (with a bunch of caveats, of course). As a rough illustration, three NVMe drives that each do ~7 GB/s of sequential reads could in theory approach ~21 GB/s striped, though real-world gains are usually smaller. I think you'll want that. At the enterprise level - and you're kinda getting there - making SQL and storage go fast is all about the RAID and multiple "spindles".
The RAM still matters of course, for reasons like @analytik discussed. Both SQL Server and the OS will cache file data, and RAM is where all your analytic computation happens. It's so much faster than disk that if you can keep your whole "working set" for a given task in RAM instead of spilling to disk, the performance difference can be drastic. But RAM won't make disk IO speed irrelevant.
Yep. MSSQL does a bunch of caching, for both row data and indexes; it's a fundamental part of how it works. It caches things dynamically based on their usage, in an LRU-ish + stats + speculative manner. And if you really need specific tables to stay in memory, recent versions offer memory-optimized (In-Memory OLTP) tables; the old DBCC PINTABLE approach is long deprecated.
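If you're curious what's actually sitting in the buffer pool at a given moment, a query along these lines against the standard DMVs will show you (a quick sketch; run it in the database you care about):

```sql
-- Which objects currently occupy the most buffer pool in the current database.
-- Pages are 8 KB, so page count * 8 / 1024 = MB cached.
SELECT TOP (10)
    OBJECT_NAME(p.object_id)  AS object_name,
    COUNT(*) * 8 / 1024       AS cached_mb
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au
    ON au.allocation_unit_id = bd.allocation_unit_id
JOIN sys.partitions AS p
    ON p.hobt_id = au.container_id
WHERE bd.database_id = DB_ID()
  AND au.type IN (1, 3)  -- in-row and row-overflow data; LOB pages map differently
GROUP BY p.object_id
ORDER BY cached_mb DESC;
```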
Filesystem allocation units (block size)
One thing to consider: when you format your disks, you might want an allocation unit size larger than the NTFS default (4 KB) for the data disks. (Not the OS disk!) Microsoft's usual guidance for SQL Server data volumes is 64 KB, e.g. `format D: /FS:NTFS /A:64K` from an elevated prompt. It can improve analytic workload performance; the small default is more suited to transactional workloads that skip around more.
It won't matter nearly as much as the actual hardware does, but it can make some difference for basically no cost.
Running SQL Server
Dunno how experienced you are with SQL Server. Maybe you already know this, but...
Watch out when you're running SQL Server: databases want RAM. MSSQL, and SQL databases in general, often assume they have the computer to themselves, and will happily hoover up most of the RAM for their page cache and hold on to it. You might want to cap that via the "max server memory" setting, and even switch it around based on what you're doing on a given day.
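For example (the 64 GB cap here is just an illustrative number; size it for whatever build you land on):

```sql
-- Cap the memory SQL Server will grab, leaving headroom for the OS,
-- Visual Studio, Chrome, etc. Value is in MB; 65536 MB = 64 GB (example only).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 65536;
RECONFIGURE;
```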
You'll maybe want to enable data compression (row, and maybe page) on your larger tables, and be mindful of column datatypes and normalization design. That stuff can easily be a 2x to 10x speed difference for analytic workloads. And definitely check out the CLUSTERED COLUMNSTORE table organization, esp. on the most recent versions of MSSQL; it can be a huge win for time-series style data.
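Both are pretty quick to try (the table and index names below are made up for illustration):

```sql
-- PAGE compression on a big rowstore table (PAGE includes ROW compression).
ALTER TABLE dbo.StealerLogEntries
    REBUILD WITH (DATA_COMPRESSION = PAGE);

-- Or switch the table to clustered columnstore organization, which compresses
-- heavily and speeds up scan/aggregate-style analytic queries. If the table
-- already has a clustered rowstore index, drop it first (or use DROP_EXISTING).
CREATE CLUSTERED COLUMNSTORE INDEX ccx_StealerLogEntries
    ON dbo.StealerLogEntries;
```

Columnstore shines on tables you mostly scan and aggregate; for point lookups row by row, a conventional rowstore index is still the better fit.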