OK, my 20 and your 20 are not the same.
I was saying the large numbers don't make sense if you don't have a large fleet of drives. Say you have ten servers, each with ten drives, and the MTBF is 100 million hours (yay, easy math!). Read literally, those 100 drives rack up 100 drive-hours for every hour on the clock, so you'd expect one failure roughly every million hours — more than a century between failures, which is obviously not how drives behave. The spec only describes the failure rate of a large population over its rated service life; it says nothing about how long any one drive lasts. For scale, 100,000 power-on hours is about 11 years of continuous use.
Some of the sites I have been looking at say this number stretches significantly with lighter duty, since 8 hours of daily use turns 100,000 power-on hours into about 34 years of calendar time.
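To put a number on why a small fleet almost never sees the MTBF statistic: under the usual constant-failure-rate (exponential) reading of MTBF, the expected failure count is just drive-hours divided by MTBF. A quick sketch (function name is mine, not from any vendor doc):

```python
def expected_failures(drives: int, hours: float, mtbf_hours: float) -> float:
    """Expected number of failures in a fleet, assuming a constant
    failure rate (the exponential model behind MTBF figures)."""
    return drives * hours / mtbf_hours

# Ten servers x ten drives, vendor MTBF of 100 million hours,
# running 24/7 for 100,000 hours (~11 years):
fleet = 10 * 10
print(expected_failures(fleet, 100_000, 100_000_000))  # 0.1
```

So on paper you would expect a tenth of a failure over eleven years, which is exactly the kind of result that makes the raw number useless for a small deployment.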
I think I like the annualized failure rate (AFR) better, but I don't think either one paints a great picture.
https://www.seagate.com/support/kb/hard-disk-drive-reliability-and-mtbf-afr-174791en/
https://ssdcentral.net/hddfail/
I would prefer it if the annualized rate were recalculated annually.
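As I understand the Seagate page above, AFR is just the MTBF pushed through the same exponential model for one year of power-on hours. A sketch of that conversion (function name and the 1M-hour example are mine):

```python
import math

def afr_from_mtbf(mtbf_hours: float, power_on_hours_per_year: float = 8760) -> float:
    """Annualized failure rate implied by an MTBF, assuming a
    constant failure rate: AFR = 1 - exp(-POH/MTBF)."""
    return 1 - math.exp(-power_on_hours_per_year / mtbf_hours)

# 24/7 duty on a 1-million-hour MTBF drive:
print(f"{afr_from_mtbf(1_000_000):.3%}")           # 0.872%
# The same drive at 8 hours per day:
print(f"{afr_from_mtbf(1_000_000, 8 * 365):.3%}")  # 0.292%
```

Note how the duty-cycle assumption alone moves the AFR by a factor of three, which is the same game the "33 years of use" sites are playing.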
Regarding the controllers: that has been nagging at me this whole conversation. Most SATA peripheral cards don't have heat sinks, but most SAS cards do, and the SAS cards at least look more rugged.
I'm not connected enough to avoid going down with everyone else using Polymarket.