“The two models, the 30TB … and the 32TB …, each offer a minimum of 3TB per disk”. Well, yes, I would hope something advertised as being 30TB would offer at least 3TB. Am I misreading this sentence somehow?
They probably mean the hard drive has 10 platters, each containing at least 3TB.
It never ceases to amaze me how far we can still take a piece of technology that was invented in the 50s.
That’s how most technology is:
- combustion engines - early 1900s, earlier if you count steam engines
- rockets - 13th century China, gunpowder was much earlier
- wind energy - windmills appeared in the 9th century, potentially as early as the 4th
Almost everything we have today is due to incremental improvements from something much older.
This isn’t unique to computing.
Just about all of the products and technology we see are the results of generations of innovations and improvements.
Look at the automobile, for example. It’s really shaped my view of the significance of new industries; we could be stuck with them for the rest of human history.
Solid state is kinda like a microscopic punch card.
More like microscopic fidget bubble poppers.
When the computer wants a bit to be a 1, it pops it down. When it wants it to be a 0, it pops it up.
If it were like a punch card, it couldn’t be rewritten, since writing would permanently damage the medium. A CD-R is basically a microscopic punch card though, because the laser actually burns the dye to write the data to the disc.
They work by electrons tunneling through an insulating layer, so something really does go through them, like an old punch card reader.
Current ones also store multiple charge levels per cell, so they’re no longer one bit each. They have multiple levels of “punch” for what used to just be one bit.
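To make the multi-level idea concrete, here’s a toy sketch (purely illustrative; real controllers deal with noisy analog thresholds and error correction):

```python
# Toy decoder: one flash cell holds one of 2**n charge levels, read back as n bits.
def decode_cell(level: int, bits_per_cell: int) -> str:
    levels = 2 ** bits_per_cell        # SLC=2, MLC=4, TLC=8, QLC=16 levels
    assert 0 <= level < levels
    return format(level, f"0{bits_per_cell}b")

print(decode_cell(1, 1))  # SLC: one "punch" depth per bit -> '1'
print(decode_cell(5, 3))  # TLC: 8 distinguishable depths  -> '101'
```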
So are optical discs
Much more so than solid state.
Speaking of steam, steam-powered devices are at least two thousand years old, and we still use the technology when we split atoms to make energy.
What the Romans had wasn’t comparable to an industrial steam engine. The working principle of steam pushing against a cylinder was similar, but they lacked the tools and metallurgy to build a boiler that could hold pressure, so their steam engines could only do parlor tricks like opening a temple door, not perform real continuous work.
radarr goes brrrrrr
barrrr?
…dum tss!
sonarr goes brrrrrr…
I can’t wait for datacenters to decommission these so I can actually afford an array of them on the second-hand market.
Home Petabyte Project here I come (in like 3-5 years 😅)
better start preparing with a 10G network!
Way ahead of you… I have a Brocade ICX6650 waiting to be racked up once I’m not limited to just the single 15A circuit my rack runs off of currently 😅
Hopefully a 40G interconnect between it and the main switch everything is using now will be enough for the storage nodes and the storage network/VLAN.
Exactly, my NAS is currently made up of decommissioned 18TB Exos drives. Great deal, and I can usually still get them RMA’d the handful of times they fail.
Where is a good place to search for decommissioned ones?
eBay sellers that have tons of sales and specialize. You can learn to read between the lines and see that decom goods are what they do.
SaveMyServer is a perfect example. Don’t know if they sell drives though.
Serverpartdeals has done me well; drives often come new enough that they still have a decent amount of manufacturer’s warranty remaining (Exos is 5yr), and depending on the drive you buy from them, SPD will RMA a drive for 5 years from purchase (but not always, it depends on the listing, so read the fine print).
I have gotten 2 bad drives from them out of 18 over 5 years or so. Both bad drives were found almost immediately with basic maintenance steps prior to adding them to the array (zeroing out the drives, badblocks), and both were RMA’d by Seagate within 3-5 days because they were still within the mfr warranty.
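If anyone wants to replicate those maintenance steps, here’s a rough sketch using smartmontools and badblocks (the device path is a placeholder, and `badblocks -w` is destructive, so triple-check it):

```python
import subprocess

DEV = "/dev/sdX"  # placeholder - this test WIPES the drive

# Four-pass destructive write+verify; -b 4096 avoids block-count limits
# on big drives. The final 0x00 pass doubles as zeroing the drive.
subprocess.run(["badblocks", "-wsv", "-b", "4096", DEV], check=True)

# Kick off a long SMART self-test, then review the attributes/results.
subprocess.run(["smartctl", "-t", "long", DEV], check=True)
subprocess.run(["smartctl", "-a", DEV], check=True)
```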
If you’re running a gigantic RAID array like me (288TB and counting!), it would be wise to recognize that rotational hard drives are doomed, and you need a robust backup solution that can handle gigantic amounts of data long term. I have a tape drive for that because I got it cheap at an electronics recycler, sold as not working (thankfully it was an easy fix), but this is typically a super expensive route. If you only have like 20TB, you can look into stuff like cloud services, Blu-ray, redundant hard drives, etc., or do like I did in the beginning and just accept that your pirated anime collection might go poof one day lol
What kind of tape drive are you using? My array isn’t as large as yours (120TB physical), but it’s big enough that my only real options for backup are tape or a whole secondary array just for backup.
Based on what I’ve seen, my options are a prohibitively large number of tapes with an older LTO standard or prohibitively expensive tapes with a newer LTO standard.
My current backup strategy consists of automated backups to Backblaze B2 for the really important stuff like personal documents or projects and hoping my ZFS array doesn’t fail for everything else.
I have an IBM Qualstar LTO-8 drive. I got it because I gambled: it was cheap because it was throwing an error (I forget what the number was), but it was one that indicates an issue in the tape path. I was able to get the price down to $150 because I was buying some other stuff, and because ultimately, if the head was toast, it was basically useless. But I got lucky, and cleaning the head and tape path brought it back to life. Dunno how long it will last. I’ll live with it though, because buying one that’s confirmed working can cost thousands.
You’re right that LTO-8 tapes are pricey, but they’re quite a bit cheaper than building an equivalent backup array, while being significantly more reliable long term. A tape is about 12TB and $40-50, although sometimes they pop up cheaper. I generally don’t back up stuff continually with this method; I back up newer files that haven’t been synced to tape once every six weeks or so. It’s also something you can buy a bit at a time to soften the financial blow, of course. Maybe if you get a fancy carousel drive you’d want to fill it up, but frankly that just seems like it would break much more easily.
More modern tapes support LTFS, and I can basically use one like an external hard drive that way. So it’s pretty much: I pop a tape in, once a week or so I sync new files to said tape, then as it gets full I swap it for a new tape. Towards the end I print a directory of what’s on it, because admittedly doing it this way is messy. But my intention with this is to back up my “medium critical” files - stuff that I’d be frustrated over losing, but not heartbroken. Movies and TV shows that I did custom muxes of to have my ideal subtitles, audio tracks, etc., all my Docker containers so stuff like my Jellyfin watch status and Komga library stay intact, stuff like that. That takes up the bulk of my NAS, and my primary concerns are either the array fully failing or significant bit rot; if either of those occurs I would rebuild from scratch and just copy all the tapes back over anyway, so the messy filing isn’t really a huge issue.
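For anyone curious, the weekly sync pass can be as simple as this sketch - assuming the tape is LTFS-mounted; all paths and the marker-file scheme are made up for illustration:

```python
# Copy files modified since the last sync onto the LTFS-mounted tape.
import shutil
from pathlib import Path

SRC = Path("/mnt/nas/media")       # hypothetical source tree
TAPE = Path("/mnt/ltfs")           # hypothetical LTFS mountpoint
MARKER = Path.home() / ".last_tape_sync"

last_sync = MARKER.stat().st_mtime if MARKER.exists() else 0.0

for f in sorted(SRC.rglob("*")):
    if f.is_file() and f.stat().st_mtime > last_sync:
        dest = TAPE / f.relative_to(SRC)
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(f, dest)      # big sequential copies are what tape likes

MARKER.touch()                     # remember when this sync happened
```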
I also sometimes make it a point to copy harder-to-find files onto at least 2 tapes, on the outside chance a tape goes bad. It’s unlikely, given I only buy new tapes and store them properly (I even go to the effort of storing them offsite, just in case my house burns down), but you never know, I suppose.
The advertised tape capacities are crap for this use. You’ll see LTO-8 has a native capacity of 12TB but a compressed capacity of 30TB per tape! And the tapes will frequently just say 30TB on them. That’s nonsense here. Maybe for a more typical server environment where they’re storing databases and text files and shit, but compressed movies and music? Not so much. I get some advantage because I keep most of my stuff in archival quality (remux/FLAC/etc), but even then I still usually don’t get anywhere near 30TB.
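The marketing math, for what it’s worth - the LTO spec sheets assume a 2.5:1 compressible data stream:

```python
native = 12e12                  # LTO-8 native capacity, bytes
print(native * 2.5 / 1e12)      # 30.0 "compressed" TB - only if data shrinks 2.5:1
print(native * 1.0 / 1e12)      # 12.0 TB - remuxes/FLAC are already compressed, ~1:1
```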
It’s pretty slow. Not the end of the world, but just something to keep in mind. LTO-8 is supposed to be 360MB/s for uncompressed and 750MB/s for compressed data, but I don’t seem to hit those speeds at all. I’m not really in a rush though, and everything verifies fine and works after copying back over, so I’m not too worried. But it can take like 10-14 hours to fill a tape. If I ever do have to rebuild the array it will take AGES.
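Back-of-envelope on those fill times, assuming the 12TB native capacity:

```python
capacity = 12e12                      # one LTO-8 tape, bytes
for mbps in (360, 330, 240):          # spec max vs. more realistic effective rates
    hours = capacity / (mbps * 1e6) / 3600
    print(f"{mbps} MB/s -> {hours:.1f} h")
# 360 -> 9.3 h (spec sheet); 330 -> 10.1 h; 240 -> 13.9 h,
# so "10-14 hours per tape" is about what you'd expect off-spec.
```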
For my “absolutely priceless” data I have other more robust backup solutions that are basically the same as yours (literally down to using backblaze, ha).
You got an incredible deal on your tape drive. For LTO-8 drives, I’m seeing “for parts only” drives sold for around $500. I’d be willing to throw away $100 or $200 on the possibility that I could repair a drive; $500 is a bit too much. It looks like LTO-6 is more around what my budget would be; it would require a much larger number of tapes, but not excessively so.
I remember when BD-R was a reasonable solution for backup. There’s no way that’s true now. It really seems like hard drive capacity has far outpaced removable media. With most people streaming everything, those of us who actually want to save our data locally are really a minority these days, so there’s just not as much of a compelling reason for companies to develop cheap, high-capacity removable discs.
I’m sure I’ll invest in a tape backup solution eventually, but for now, at least I have ZFS with paranoid RAIDZ.
Nice, where do you get yours?
also curious, buying new is getting too pricey for me
I personally use goharddrive and serverpartdeals on eBay and have had good luck, but I’m always looking for others
Never used goharddrive but can def endorse SPD
Just one would be a great backup, but I’m not ready to run a server with 30TB drives.
I’m here for it. The 8-disk server is normally a great form factor for size, data density and redundancy with RAID6/raidz2.
This would net around 180TB in that form factor. That would go a long way for a long while.
I dunno if you would want to run raidz2 with disks this large. The resilver times would be absolutely bazonkers, I think. I have 24TB drives in my server and run mirrored vdevs, because the chance of one of those drives failing during a raidz2 resilver is just too high. I can’t imagine what it’d be like with 30TB disks.
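A rough floor on those resilver times, treating a resilver as one full sequential rewrite of the replacement disk (real resilvers under pool load are slower):

```python
for tb in (12, 24, 30):
    hours = tb * 1e12 / 250e6 / 3600   # assume ~250 MB/s sustained, optimistic
    print(f"{tb}TB -> {hours:.0f} h minimum")
# 12TB -> 13 h, 24TB -> 27 h, 30TB -> 33 h - and that's the best case.
```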
A few years ago I had a 12-disk RAID6 array, and the power distributor (the bit between the redundant PSUs and the rest of the system) died and took 5 drives with it; I lost everything on there. Backup is absolutely essential, but if you can’t do that for some reason, at least use RAID1, where you only lose part of your data if more than 2 drives fail.
Yeah, I agree. I just got 20TB drives in mine. Decided to just do z2, which in my case should be fine. But I was contemplating the same thing. Going to have to start doing z2 with 3 drives in each vdev lol.
Is RAID2 ever the right choice? Honestly, I don’t touch anything outside of 0, 1, 5, 6, and 10.
Edit: missed the z, my bad. Raidz2 is fine.
raidz2 is analogous to RAID 6. It’s just the ZFS term for double parity redundancy.
Everybody talking shit about Seagate here. Meanwhile, I’ve never had a hard drive die on me. Eventually the capacity just became too small to keep around and I got bigger ones.
Oldest I’m using right now is a decade old, Seagate. Actually, all the HDDs are Seagate. The SSDs are Samsung. Granted, my OS is on an SSD, as well as my most used things, so the HDDs don’t actually get hit all that much.
Yeah, same. I switched to Seagate after 3 WD drives failed in less than 3 years. Never had problems since.
I’ve had a Samsung SSD die on me, I’ve had many WD drives die on me (also the last drive I’ve had die was a WD drive), I’ve had many Seagate drives die on me.
Buy enough drives, have them for a long enough time, and they will die.
Seagate had some bad luck with their 3TB drives about 15 years ago now if memory serves me correctly.
Since then, Western Digital (the only other remaining HDD manufacturer) pulled some shenanigans by not correctly labeling the different technologies in use on their NAS drives, which directly impacted their practicality and performance in NAS applications (the performance issues were particularly egregious when used in a ZFS pool).
So basically pick your poison. Hard to predict which of the duopoly will do something unworthy of trusting your data upon, so uh…check your backups I guess?
Had good impressions and experiences with Toshiba drives. Chugged along quiet nicely.
Ah, I thought I remembered their hard drive division being acquired, but I was wrong! Per Wikipedia:
At least 218 companies have manufactured hard disk drives (HDDs) since 1956. Most of that industry has vanished through bankruptcy or mergers and acquisitions. None of the first several entrants (including IBM, who invented the HDD) continue in the industry today. Only three manufacturers have survived—Seagate, Toshiba and Western Digital.
Yeah our file server has 17 Toshiba drives in the 10/14 TiB sizes ranging from 2-4 years of power-on age and zero failures so far (touch wood).
Of our 6 Seagate drives (10 TiB), 3 of them died in the 2-4 year age range, but one is still alive 6 years later.
We’re in Japan and Toshiba is by far the cheapest here (and has the best support - they offer advance replacement on regular NAS drives, whereas Seagate takes 2 weeks to ship a replacement to and from a support center in China!), so we’ll continue buying them.
That decade old one is 3TB. 😅
Unfortunately, I have about 10 dead 3TB drives sitting around in my closet. I made the sacrifice so you don’t have to :-)
Thanks. 👍
at least you have a bunch of nice coasters and cool magnets now.
I had 3 drives from Seagate (including 1 enterprise) that died or developed file-corruption issues before I gave up and switched to SSDs entirely…
These things are unreliable, I had 3 seagate HDDs in a row fail on me. Never had an issue with SSDs and never looked back.
Seagate in general is unreliable in my own anecdotal experience. Every Seagate I’ve owned has died in less than five years. I couldn’t give you an estimate of the average failure age of my WD drives, because they never failed before being retired due to obsolescence - regularly over a decade, though.
Well, until you need the capacity, why not use an SSD? It’s basically mandatory for the operating system drive, too.
Capacity for what? There are 4TB M.2 SSDs costing $200, c’mon…
I would rather not buy such large SSDs. For most stuff the performance advantage is useless while the price is much higher, and my impression is still that very large SSDs have a shorter lifespan (in terms of how many writes it takes for them to break down). Recovering data from a failing HDD is also easier: SSDs just turn read-only or fail completely at some point, and in the latter case even data recovery companies are often unable to recover anything, while HDDs will often give signs that good monitoring software can detect weeks or months in advance, so you know to be more cautious with the drive.
How is it easier? Do you open up your HDDs and pull the data off the platters? Do you have specialized equipment and knowledge? Second, if you see in SMART that you’re getting close to the TBW rating, change the SSD, duh… SMART is a lot more effective on SSDs; depending on the model it even gives you an estimated time to live…
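For what it’s worth, checking wear is quick with smartmontools; a sketch for an NVMe drive (the device path is a placeholder):

```python
import subprocess

# "Percentage Used" and "Data Units Written" are standard NVMe health fields
# that smartctl prints; 100% used means the rated endurance is exhausted.
out = subprocess.run(["smartctl", "-a", "/dev/nvme0"],
                     capture_output=True, text=True).stdout
for line in out.splitlines():
    if "Percentage Used" in line or "Data Units Written" in line:
        print(line.strip())
```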
That’s good, really good news, to see that HDDs are still being manufactured and thought about, because I’m having serious trouble finding a new 2.5" HDD for my old laptop here in Brazil. I can easily find SSDs across the Brazilian online marketplaces, and they’re not very expensive, but I intend to purchase a mechanical one because SSDs won’t hold data as long as HDDs. There are so few HDDs for sale, though, and those I could find aren’t brand-new.
SSDs won’t hold data as long as HDDs
Realistically, this is not a good reason to pick an HDD over an SSD. If your data is important, it’s being backed up (and if it’s not backed up, it’s not important - yada yada, 3-2-1 backups and all; I’ll happily give real backup advice if you need it).
In my anecdotal experience across my family’s various computers and the computers I’ve seen bite the dust at work, I’ve not observed any longevity difference between HDDs and SSDs. In fact, I’ve only seen 2 SSDs fail, and those were front desk PCs that were effectively always on 24/7 with heavy use during all lobby hours - and that was after multiple years of that usecase. And I’ve never observed bit rot in the real world on anything other than crappy flash drives and SD cards (literally the lowest-quality flash you can get).
Honestly, the best way to look at it is to choose based on your usecase. Always have your boot device be an SSD, and if you don’t need more storage on that computer than you feel like buying an SSD to match, don’t even worry about an HDD for that device. HDDs have one usecase only these days: bulk storage at a comparatively low cost per GB.
I replaced my laptop’s DVD drive with an HDD caddy adapter, so it supports two drives instead of just one. Then I installed a 120G SSD alongside a 500G HDD, with the HDD connected through the caddy adapter. The entire Linux installation on this laptop was done in 2019, and since then I’ve never reinstalled nor replaced the drives.
But sometimes I hear what seems to be “coil whine” (a short, high-pitched sound) coming from where the SSD is, so I guess its end is near. I have another SSD (240G) I bought a few years ago waiting to be installed, but I’m waiting to get another HDD (1TB or 2TB) first so I can do a fresh installation, because the current HDD was reused from another laptop I had (so it’s really old by now, although I’ve had no I/O errors nor “coil whining” yet).
Back when I installed the current Linux, I mistakenly placed `/var` and `/home` (and consequently `/home/me/.cache` and `/home/me/.config`, both folders with high write rates because I use KDE Plasma) on the SSD. As the years passed, I realized it was a mistake, but I never had the courage to relocate things, so I did some “creative solutions” (“gambiarra”), such as turning `.cache` and `.config` into symlinks pointing to folders on the HDD.

As for backup, while I have three old spare HDDs holding the same old data (so it’s a redundant backup), there are so many new things (hundreds of GBs) I both produced and downloaded that I’d need lots of room to better organize all the files, figure out what isn’t needed anymore, and renew my backups. That’s why I was looking for either 1TB or 2TB HDDs, as brand-new as possible (also, I’m intending to tinker more with things such as data science after a fresh installation of Linux). It’s not something I’m really in a hurry to do, though.
Edit: and those old spare HDDs are 3.5" so they wouldn’t fit the laptop.
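For anyone attempting the same relocation, a minimal sketch of the move-then-symlink trick (paths are made up; do it with the desktop session logged out):

```python
import shutil
from pathlib import Path

src = Path.home() / ".cache"              # high-write dir currently on the SSD
dst = Path("/mnt/hdd/home-me/.cache")     # hypothetical new home on the HDD

dst.parent.mkdir(parents=True, exist_ok=True)
shutil.move(str(src), str(dst))           # move the data onto the HDD
src.symlink_to(dst, target_is_directory=True)  # leave a symlink where it was
```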
I doubt the high-pitched whine you’re hearing is the SSD failing. The sheer number of writes needed to fully wear out an SSD is… honestly difficult to achieve in the real world. I’ve got decade-old budget SSDs in some of my computers that are still going strong!
Dude, I have a 240GB SSD that’s 14 years old, and SMART is telling me it still has 84% of its life left. This was a main OS drive and was formatted multiple times. Literally, this disk is going to be obsolete before it dies. Stop spreading fake news. Realistically, how many times do you fill an SSD in a typical scenario?
As per my previous comment, I had `/var`, `/var/log`, `/home/me/.cache`, among many other frequently written directories, on the SSD since 2019. SSDs have fewer write cycles than HDDs; it’s not “fake news”:

“However, SSDs are generally more expensive on a per-gigabyte basis and have a finite number of write cycles, which can lead to data loss over time.” (https://en.wikipedia.org/wiki/Solid-state_drive)
I’m not really sure why mine is coil whining; it happens occasionally, and nothing else happens aside from the high-pitched sound, but it’s coil whining.
How the hell can an SSD coil whine without moving parts, lol… Second, realistically, for a normal user that SSD is probably going to last more than 10 years. We aren’t talking about intensive data servers here; we’re talking about normal people, the hardcorest of gamers at most. And of course, to begin with, HDDs don’t have a write limit lol; they fail because of their mechanical parts. Finally, cost-benefit: the M.2 I was suggesting is $200 for 4TB. C’mon, it’s not the end of the world, and you multiply speeds… by 700…
How the hell can an SSD coil whine without moving parts, lol…
Do you even know what “coil whine” is? It has nothing to do with moving parts! “Coil whine” is a physical phenomenon that happens when electrical current makes an electronic component, such as an inductor, vibrate slightly, emitting a high-pitched sound. It’s a well-known phenomenon in graphics cards (whose only moving part is the cooler, which is not the source of their coil whine). SSDs aren’t supposed to exhibit coil whine, and that’s why I’m worried about the health of mine.
Finally, cost-benefit: the M.2 I was suggesting is $200 for 4TB. C’mon, it’s not the end of the world, and you multiply speeds… by 700…
I’m not USian, so pricing and cost-benefit may differ. Also, the thing is that I already have another SSD, a 240G one. I don’t need to buy another one; I just need an HDD, which is what I said in my first comment. Just that: a personal preference, a personal opinion based on personal experiences, and that’s all. The only statement I made beyond personal opinion was about lifespan, by which I meant the write-cycle thing. But that’s it: personal opinion, no need to rant about it.
Just a reminder: these massive drives are really more a “budget” version of a proper tape backup system. The fundamental physics of a spinning disk mean these aren’t a good solution for rapidly seeking specific sectors to read and write.
So a decent choice for the big machine you backup all your VMs to in a corporate environment. Not a great solution for all the anime you totally legally obtained on Yahoo.
Not sure if the general advice has changed, but you are still looking for a sweet spot in the 8-12 TB range for a home NAS where you expect to regularly access and update a large number of small files rather than a few massive ones.
Not sure what you’re going on about here. Even these disks have plenty of performance for read/write ops on rarely-written data like media. They have the same ability to be used with error-checking filesystems like ZFS or Btrfs, and can be used in RAID arrays, which add redundancy against disk failure.
The only negatives of large drives in home media arrays are the cost, slightly higher idle power usage, and the resilvering time when replacing a bad disk in an array. Your 8-12TB recommendation already has most of these negatives; adding more space per disk just scales them linearly.
Additionally, most media is read in a contiguous scan. Streaming media is very much not random access.
Your typical access pattern is going to be seeking to a chunk, reading a few megabytes of data in a row for the streaming application to buffer, and then moving on. The ~10ms of access time at the start is next to irrelevant, particularly when you consider that the OS has likely noticed you have unutilized RAM and loaded the entire file into the page cache, bypassing the hard drive entirely.
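In code terms, a player’s read pattern is roughly this toy loop (hypothetical file path):

```python
CHUNK = 4 * 1024 * 1024                  # buffer a few MiB at a time

with open("/mnt/media/movie.mkv", "rb") as f:
    while chunk := f.read(CHUNK):        # one seek, then contiguous reads
        pass                             # hand the chunk to the decoder's buffer
```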
Oh hey, I did something right. That’s kinda neat
I am troubled in my heart. I would not have been told so in this way.
honestly curious, why the hell was this downvoted? I work in this space and I thought this was still the generally accepted advice?
Not a great solution for all the anime you totally legally obtained on Yahoo.
Mainly because of that. Spinning rust drives are perfect for large media libraries.
There isn’t a hard drive made in the last 15 years that couldn’t handle watching media files. Even the SMR crap the manufacturers introduced a while back could do that without issue. For 4K video you’re looking at average bitrates around 50Mbps with peaks in the low 100Mbps range, and that’s for high-quality videos. Write speed is irrelevant for media consumption, and unless your hard drive is ridiculously fragmented, seek speed is also irrelevant. Even an old 5400 RPM SATA drive is going to handle that load 99.99% of the time. And anything below 4K video is a slam dunk.
Everything I just said goes right out the window for a multi-user system that’s streaming multiple media files concurrently, but the vast majority of people never need to worry about that.
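Even the multi-user case has more headroom than people assume; rough numbers, taking ~80Mbps as a heavy 4K remux and a conservative sustained read rate:

```python
stream = 80 / 8       # 80 Mbps 4K remux = 10 MB/s per viewer
drive = 150           # conservative sustained read for an old SATA drive, MB/s
print(drive / stream) # ~15 concurrent streams before raw throughput runs out
```

Seeking between many open files eats into that, but nowhere near down to one or two streams.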
Because everything he said was wrong?
Because people are thinking through specific niche use cases coupled with “Well it works for me and I never do anything ‘wrong’”.
I’ll definitely admit that I made the mistake of trying to have a bit of fun when talking about something that triggers the Dunning-Kruger effect. But people SHOULD be aware of how different usage patterns impact performance, how that performance impacts users, and generally how different usage patterns impact wear and tear on the drive.
Come on man, everything, and I mean everything, you said is wrong.
Budget tape backup?
No, you can’t even begin to compare drives to tape; they’re completely different use cases. A hard drive can contain a backup, but it isn’t physically robust enough to be unplugged, rotated off-site, and put into long-term storage like tape. You might as well say a Honda Accord is a budget semi tractor-trailer.
Then you specifically called out personal downloads of anime as a bad use case. That’s absolutely wrong in all cases.
It is absurd to imply that everyone except you is less knowledgeable and stuck on a niche use case.
HDD read rates are way faster than media playback rates, and seek times are just about irrelevant in that use case. Spinning rust is fine for media storage. It’s boot drives, VM/container storage, etc, that you would want to have on an SSD instead of the big HDD.
So I’m guessing you don’t really know what you’re talking about.
I’m real curious why you say that. I’ve been designing systems with high IOPS data center application requirements for decades so I know enterprise storage pretty well. These drives would cause zero issues for anyone storing and watching their media collection with them.
The fundamental physics of a spinning disk mean these aren’t a good solution for rapidly seeking specific sectors to read and write.
It’s no SSD, but it’s no slower than any other 12TB drive. It’s not shingled but HAMR. The sectors are closer together, so it actually has better seek behavior than a regular 12TB drive.
Not a great solution for all the anime you totally legally obtained on Yahoo.
???
It’s absolutely perfect for that. Even if it were shingled tech, that only slows write speeds. Unless you are editing your own video, write seek times are irrelevant; for media playback, only consistent read speed matters. Not even read seek matters, except in extreme conditions like comparing tape seek to drive seek. You cannot measure a 10ms difference between clicking a video and it starting to play amid all the other delays of streaming media over a network.
But that’s not even relevant because these have faster read seeking than older drives because sectors are closer together.
I mean, cool and all, but call me when SATA or M.2 SSDs are 10TB for $250; then we’ll talk.
Not sure whether we’ll arrive there; the tech is definitely entering the taper-out phase of the sigmoid. Capacity might very well still get cheaper, even 3x cheaper, but don’t, in any way, expect it to simultaneously keep up with write performance - that ship has long since sailed. The more bits they try to squeeze into a single cell, the slower it gets, and the price per cell isn’t going to change much anymore, as silicon has hit a price wall; it’s been a while since the newest, smallest node was also the cheapest.
OTOH, how often do you write a terabyte in one go at full tilt?
I don’t think anyone has much issue with our current write speeds, even at dinky old SATA 6Gb/s levels - at least for bulk media storage. Your OS boot or game loading, maybe not. I’d be just fine with exactly what we have now, just with more chips packed in there.
Even if you take apart one of the biggest, meanest, most expensive 8TB 2.5" SSDs, the casing is mostly empty inside. There’s no reason they couldn’t just add more chips even at current density levels, other than artificial market segmentation, planned obsolescence, and pigheadedness. It seems the major consumer manufacturers refuse to let their 2.5" SSDs fall out of parity with the capacities on offer in the M.2 form factor drives that everyone is hyperfixated on for some reason, and the pricing between 8TB and the few larger models actually on offer is nowhere near linear, even though the manufacturing cost roughly should be.
If people are still willing to use the “full size” 3.5" form factor of ordinary hard drives for bulk storage, can you imagine how much solid-state storage you could cram into a casing that size, even with current low-cost commodity chips? It’d be tons. But the only options available are “enterprise solutions” that are apparently priced with the expectation that you’ll have a Fortune 500 or government expense account.
It’s bullshit all the way down; there’s nothing new under the sun in that regard.
the M.2 form factor drives that everyone is hyperfixated on for some reason
The reason is transfer speed: SATA is slow, while M.2 is a direct PCIe link, and SSDs can saturate it, at least in bursts. Doubling the capacity of a 2.5" SSD is going to double its price, as you need twice as many chips, and there’s not really a market for 500-buck SATA SSDs; you’re looking at U.2 / U.3 ones. Yes, they’re quite a bit more expensive per TB, but look at the difference in TBW versus consumer SSDs.
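The raw interface ceilings make the point; rough numbers, with encoding overheads included:

```python
# SATA III: 6 Gb/s line rate with 8b/10b encoding -> ~600 MB/s usable ceiling.
sata = 6e9 * 8 / 10 / 8
# PCIe 4.0: 16 GT/s per lane with 128b/130b -> ~1.97 GB/s per lane, x4 link.
pcie4_x4 = 4 * (16e9 * 128 / 130 / 8)

print(f"SATA III:    {sata / 1e6:.0f} MB/s")      # ~600 MB/s
print(f"PCIe 4.0 x4: {pcie4_x4 / 1e9:.1f} GB/s")  # ~7.9 GB/s
```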
If you’re a consumer and want a data grave, buy spinning platters. Or even a tape drive. You neither want, nor need, a high-capacity SSD.
Also you can always RAID them up.
For the context of bulk consumer storage (or even a SOHO NAS) that’s irrelevant, though, because people are already happily using spinning mechanical 3.5" hard drives for this purpose, and those are all SATA anyway. So there’s no logical reason to worry about the physical size or slower write speeds of packing a bunch of flash chips into the same-sized enclosure for those particular use cases.
There are reasons a big old SSD would be suitable for this. Silence, reliability, no spin up delay, resistance to outside mechanical forces, etc.
Cool; I’ll never buy another Seagate ever again, though.
Same, but Western Digital: a 13GB drive that failed and lost all my data 3 times, and the 3rd time was outside the warranty! I had paid $500 for it, the most expensive thing I had ever bought until that day.
Great, can’t wait to afford one in 2050.
Fleebay? Yup, me too!
$4.99 for the drive plus $399.00 s&h
How many platters?!
30 to 32 platters. You can write a file on the edge and watch it as it speeds back to the future!
Lmao the HDD in the first machine I built in the mid 90s was 1.2GB
Our first computer was a Macintosh Classic with a 40 MB SCSI hard disk. My first “own” computer had a 120 MB drive.
I keep typoing TB as GB when talking about these huge drives, it’s just so weird how these massive capacities are just normal!
Back then that was very impressive!
Yup. My grandpa had 10 MB in his DOS machine back then.
My dad had a 286 with a 40MB hard drive in it. When it spun up it sounded like a plane taking off.
It really was doubling in speed about every 18 months.
My 286 had 2MB RAM and no hard drive, just two 5.25" floppy drives: one to boot the OS from, the other for storage and software.
I upgraded it to 4MB RAM and bought a 20MB hard drive, moved EVERY piece of software I had onto it, and it was like 20% full. I sincerely thought that would last forever.
Today I casually send my wife a 10 sec video from the supermarket to choose which yoghurt she wants and that takes up about 25 MB.
I had 128KB of RAM and I loaded my games from tape. And most of those only used 48KB of it.
Yeah, we still had an old 8086 with a tape drive and everything from my dad’s university days around, but I never actually used that one.
I had a 20MB hard drive
I had a 1GB hard drive that weighed like 20 kg, some 40-odd pounds
This is for cold and archival storage right?
I couldn’t imagine seek times on any disk that large. Or rebuild times…yikes.
For a full 32TB at the max sustained speed (275MB/s), it’s 32-ish hours to transfer the full amount, 36 if you assume 250MB/s for the whole run. Probably optimistic; CPU overhead could slow that down in a rebuild. That said, in a RAID5 of 5 disks, that’s a combined read speed of about 1GB/s if you assume not getting close to the max transfer rate. For a small business or home NAS that would be plenty unless you’re running faster than 10Gbit Ethernet.
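Checking that arithmetic:

```python
capacity = 32e12                   # one full 32TB drive, bytes
for mbps in (275, 250):
    hours = capacity / (mbps * 1e6) / 3600
    print(f"{mbps} MB/s -> {hours:.0f} h")
# 275 MB/s -> 32 h, 250 MB/s -> 36 h, matching the estimate above.
```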
Random access times are probably similar to smaller drives but writing the whole drive is going to be slow
up your block size bro 💪 get them plates stacking 128KB+ a write and watch your throughput gains max out 🏋️ all the ladies will be like🙋♀️. Especially if you get those reps sequentially it’s like hitting the juice 💉 for your transfer speeds.
This is my favorite post ever.
Definitely not for either of those. Can get way better density from magnetic tape.
They say they got the increased capacity by increasing storage density, so the head shouldn’t have to move much further to read data.
You’ll get further putting a cache drive in front of your HDD regardless, so it’s vaguely moot.