On Fri, May 31, 2019 at 09:49:40PM +0000, Andy Smith wrote:
> Hello,
> 
> On Fri, May 31, 2019 at 06:07:45PM +0000, Hugo Mills wrote:
> > I'm now up to 11TB of data on a 13TB RAID-1, and it's been fine
> > for years, even over power failures and several disk failures.
> My experiences with btrfs and online disk replacement have not been
> great. No lost data (partly thanks to you talking me through one
> recovery a long time ago - thanks!), but I have not yet managed a
> single instance without having to reboot at least once.
> 
> The best thing I have to say about btrfs is that it's enabled me to
> recycle the large number of variously-sized HDDs that BitFolk has
> gone through, as I can just slot them in without worrying about
> array topologies and such.
> > For backups of a mostly-append data store such as yours, the
> > cheapest option in the ~4 TB -- ~150 TB range is BD-R. Over 150 TB
> > or so, it's cheaper to use LTO tapes. I'll need to update my
> > spreadsheet if you want more precise figures on that.
> I didn't know anything about BD-R until just now but it looks like
> they max out at 100GB. So do you need to sit there changing discs
> for the initial backup? I guess an incremental will fit on one disc
> and then it's just a matter of how long you go before doing a full
> checkpoint.

Don't use anything other than the 25 GB discs. The higher capacity
ones are up to four times the cost per GB, and have been for some
years. It's not cost-effective, and I don't see that changing any time
soon.
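
To put rough numbers on that (illustrative prices only, not anything
I've checked today), a quick sketch in Python:

    # Illustrative prices only -- check current retail prices yourself.
    media = {
        "BD-R 25 GB":     {"capacity_gb": 25,  "price_eur": 1.00},   # assumed
        "BD-R XL 100 GB": {"capacity_gb": 100, "price_eur": 15.00},  # assumed
    }

    for name, m in media.items():
        per_gb = m["price_eur"] / m["capacity_gb"]
        print(f"{name}: {per_gb:.3f} EUR/GB")

With those made-up prices the 100 GB media work out at nearly four
times the cost per GB of the 25 GB ones.
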
And yes, you'd have to keep feeding discs in for the initial
backup. After that, I have a processing pipeline where all the
incoming files end up (eventually) in one directory, and a cron script
which runs a bin-packing algorithm on it, filling a temp directory
with hardlinks if it's found 25 GB ± 10 MB of data to write to
BD-R. It works very well for my append-only workload. It would work
much less well for a mostly-modify workload.
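
A minimal sketch of that packing step (in Python rather than whatever
the real cron script uses, and with made-up paths), greedily picking
files biggest-first until it has roughly one disc's worth, then
hardlinking them into a staging directory:

    #!/usr/bin/env python3
    # Sketch only, not the real cron job. Paths and sizes are assumptions.
    import os, sys

    INCOMING = "/srv/backup/incoming"   # assumed: where processed files land
    STAGING  = "/srv/backup/to-burn"    # assumed: handed to the burner later
    TARGET   = 25_000_000_000           # nominal BD-R capacity, bytes
    SLACK    = 10_000_000               # the "± 10 MB" window

    # Largest files first (first-fit decreasing), then greedily fill the bin.
    files = sorted(((e.path, e.stat().st_size)
                    for e in os.scandir(INCOMING) if e.is_file()),
                   key=lambda f: f[1], reverse=True)
    picked, total = [], 0
    for path, size in files:
        if total + size <= TARGET:
            picked.append(path)
            total += size

    if total < TARGET - SLACK:
        sys.exit("Not enough data for a full disc yet; try again next run.")

    os.makedirs(STAGING, exist_ok=True)
    for path in picked:
        # Hardlink rather than copy (same filesystem), so staging is free.
        os.link(path, os.path.join(STAGING, os.path.basename(path)))
    print(f"Staged {len(picked)} files, {total} bytes, ready to burn.")
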
I started out writing CD-Rs about 15 years ago, migrated to DVD+R,
and then to BD-R. I'm considering LTO-6 as the next migration, despite
the poor economics w.r.t. BD-R at my kind of scale.

> I am currently backing up about 530GiB to a rented Hetzner server
> that costs me about €40/month, so not cheap, but fairly convenient.
> It would be much cheaper to build my own backup host out of cast-off
> hardware, with the only ongoing cost being the electricity, but I
> need it outside my home for disaster recovery purposes.
I'm storing more than an order of magnitude more than that. It gets
really expensive.

> I also periodically send a full copy to Amazon Glacier for emergency
> purposes. This costs about $3/month to keep there, but if I were
> ever to need to restore it, it would cost about $50 one-off to
> download it all. So I hope to never have to do that.
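
Those figures look plausible; as a rough sanity check, assuming
Glacier was around $0.004/GB-month for storage and roughly $0.10/GB
all-in to retrieve and download (both figures from memory, so treat
them as assumptions):

    # Back-of-envelope check of the Glacier numbers above.
    data_gb        = 530 * 1.074   # ~530 GiB expressed in GB
    storage_rate   = 0.004         # USD per GB-month (assumed)
    retrieval_rate = 0.10          # USD/GB, retrieval + transfer out (assumed)

    print(f"storage: ${data_gb * storage_rate:.2f} per month")
    print(f"restore: ${data_gb * retrieval_rate:.2f} one-off")

which comes out at a couple of dollars a month and somewhere around
$55 for a full restore -- the same ballpark as your numbers.
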
> Since the end of the HP Microserver cashback offer I wonder what the
> next iteration of my home fileserver would look like. Right now it
> is one of the Microservers plus an 8-bay eSATA disk shelf.
> 
> I think those disk shelves are too expensive for what should be
> quite a dumb device. Maybe it would be better to just buy one tower
> case to fit the drives into. The nice thing about the Microservers,
> though, is that you have a known good level of engineering and
> design, whereas with some cheap thing you buy and put together
> yourself I feel you'd risk encountering all manner of niggling
> problems.
> 
> My requirements for such a machine would be:
> 
> - amd64 architecture
> - Not immensely power hungry. Doesn't have to be crippled either,
>   but a CPU that doesn't need active cooling would probably be good.
> - At least 10 hot swap 3.5" drive bays connected by SATA, or 8 of
>   them if I could install a pair of internal flash devices to hold
>   the OS.

These latter two requirements seem to be impossible to achieve in
the current market, unless you want to go for external port
multipliers. It seems that nobody builds machines with small low-power
chips and lots of SATA ports.
You can get lots of SATA ports on a socketed motherboard, but then
it's really hard to get hold of a low-power CPU that will fit. Or you
can get a low-power CPU soldered onto a board with a couple of SATA
sockets. It's really irritating.

> So if anyone has built something like that recently on the cheap I'd
> be interested to hear what you went for.
> 
> If the cashback offer was still there I'd be tempted to buy 2 or 3
> Microservers and play with Ceph…

Ceph needs lots of IOPS, and when I adminned a Ceph setup a few
years ago, it sucked *massively* for repeated small file random
access. It's probably OK for streaming big files with sequential
reads.
We were running Ceph and CephFS on three fairly hefty Xeons, with
six HDDs in each, plus another data machine with six large SSDs, and
another two for the metadata servers, with SSDs in those. Oh, and I
think we had a master for the cluster, too. The machines had two
networks -- one GbE network going to the users, and a 40 Gb InfiniBand
interconnect reserved for the cluster. That was something of a minimal
configuration in terms of the number of machines.
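
For anyone wanting to replicate that two-network split, it's just a
couple of lines in ceph.conf; the subnets and fsid below are made-up
examples rather than our real addressing:

    [global]
    # example values only
    fsid = 00000000-0000-0000-0000-000000000000
    mon_host = 10.0.0.1,10.0.0.2,10.0.0.3
    # client-facing traffic (the GbE side, in our case)
    public_network = 10.0.0.0/24
    # replication/recovery traffic between OSDs (the InfiniBand side)
    cluster_network = 192.168.100.0/24
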
Total data holdings were something like 30 TB. The metadata servers
were pretty much always running CPU-hot. Using it for home directories
on desktop machines was basically unusable -- it would take 15 minutes
to log in on a desktop session, and anything up to half an hour to
open a Thunar window on someone's home directory. We ended up
migrating the home dirs to a NetApp and the big data store for HPC to
a GlusterFS. It took over a month to copy the 30 TB of data over a GbE
network.
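
For scale: a saturated GbE link could in theory have shifted 30 TB in
a few days, so taking over a month suggests the transfer was nowhere
near wire speed -- presumably all the small-file and metadata overhead
again. Back-of-envelope:

    # Why "over a month" for 30 TB across GbE is so striking.
    data_bytes  = 30e12           # ~30 TB
    gbe_bytes_s = 1e9 / 8         # 1 Gbit/s line rate = 125 MB/s
    ideal_days  = data_bytes / gbe_bytes_s / 86400
    actual_rate = data_bytes / (31 * 86400)   # if it takes a whole month

    print(f"ideal GbE transfer: ~{ideal_days:.1f} days")
    print(f"actual throughput : ~{actual_rate / 1e6:.1f} MB/s sustained")

That's an effective rate of around 11 MB/s, roughly a tenth of what
the wire could carry.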
Hugo.
--
Hugo Mills             | I am but mad north-north-west: when the wind is
hugo@... carfax.org.uk | southerly, I know a hawk from a handsaw.
http://carfax.org.uk/  |
PGP: E2AB1DE4          |                       Hamlet, Prince of Denmark