The discussion about RAID 10 got me thinking again about a
better/alternative setup to my current RAID1+LVM+EXT4 arrangement on our
Linux home server, and I'm looking for advice from other members.
Currently we have 13.6 TB of storage (a lot of which is photos by my
semi-professional girlfriend, and videos from our wildlife cam, which
produces about 15-20 GB of video a day [email me off-list if you want
the URLs of my fledgling YouTube hedgehog and bird channels]).
There is some amount of file duplication, for instance where I have
stuck old backups (copied files and folders, not tar/compressed
archives) on there, or where photos/videos have been copied to different
folders (e.g. to categorise), so a filesystem with built-in
deduplication (which I believe BTRFS has) would be nice. However, my
main priorities are: maintaining data integrity, ease of administration,
and, as a sub-category of that, ease of expanding, shrinking and
reallocating storage as required (quotas are not required, but crashing
due to a full disk is to be avoided).
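For what it's worth, BTRFS's deduplication is out-of-band, driven by external tools such as duperemove, and it also supports online grow/shrink. A rough sketch of both, assuming a filesystem mounted at /srv/data (path hypothetical):

```shell
# Out-of-band dedup with duperemove (a separate package):
# -d actually submits the dedup ioctls, -r recurses into directories.
duperemove -dr /srv/data

# Grow or shrink a mounted BTRFS filesystem online:
btrfs filesystem resize +1T /srv/data
btrfs filesystem resize -500G /srv/data
```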
For years I have been looking at BTRFS, but it has never sounded 100%
production ready to me (although I remember that at least one distro
made it their default filesystem). Andy's mention of Ceph and Stratis
was something new to me, but I'm not sure they aren't a bit much for a
single server. I've no experience with ZFS; I think I read about some
disadvantages that put me off a few years back, but I forget what they
were now.
Anyway, what do/would you use for this sort of scenario, or what are
your experiences with filesystems suitable for my requirements? Just to
be clear: I want to ensure that a single disk failure is very unlikely
to result in data loss. Also, all the disks are currently of the
spinning type, so any features that take advantage of SSDs would be
wasted.
The something today is me
My SSH is set up to use a non-standard port, with key-only login and no
passwords. I then added a firewall rule so that any attempt to connect
on port 22 or 23 adds the source IP to a blacklist with a timeout of one
day. Any further attempt by that IP to connect on any port resets the
timeout back to 24 hours. It's all logged, and I have spent many happy
hours running through the log seeing what other ports these miscreants
attempt.
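I don't know which firewall tool is in use here, but the scheme described maps naturally onto an nftables set with a timeout; a minimal sketch (table and set names are my own invention):

```
table inet filter {
    set blacklist {
        type ipv4_addr
        timeout 24h
    }
    chain input {
        type filter hook input priority 0; policy accept;
        # any further traffic from a blacklisted IP resets its timeout
        ip saddr @blacklist update @blacklist { ip saddr } drop
        # any connection attempt to port 22 or 23 blacklists the source
        tcp dport { 22, 23 } add @blacklist { ip saddr } drop
    }
}
```

Removing an accidentally blacklisted address is then a one-liner, e.g. `nft delete element inet filter blacklist { 198.51.100.7 }` (address hypothetical).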
All well and good until today. I have just returned from a year in the
Far East, and today, still feeling jetlagged, I fired up my desktop
computer, unused for just over a year, and clicked the SSH client icon
to connect so that I could do my regular log checking. It would not
connect. I did not look at the port, but remembered I had changed the
keypair a few months ago, so I transferred the key across from my
laptop. Still no joy. Ran Windows diagnostics: no response from remote
host. Pings, nothing. Email, nothing. Website, nothing.
Panic started to set in; I could not even get in through the Xen
console, though that should have worked.
I used my laptop, still tethered to my phone, and was in straight away.
Different IP. Yes, I had locked myself out for 24 hours.
In the meantime I had sent a panic email to support. Sorry Andy.
Once the penny had dropped, it was a matter of minutes to get on the
laptop and delete my home IP from the blacklist.
Such a silly, elementary mistake.
So idiot of the day is...
A new BitFolk server that I will put into service soon has 1x SSD
and 1x NVMe instead of 2x SSD. I tried this because the NVMe,
despite being vastly more performant than the SATA SSD, is actually
a fair bit cheaper. On the downside it only has a 3 year warranty
(vs 5) and 26% of the write endurance (5466TBW vs 21024TBW)¹.
So anyway, a pair of very imbalanced devices. I decided to take some
time to play around with RAID configurations to see how Linux MD
handled that. The results surprised me, and I still have many open
questions.
As background: for a long time it's generally been advised that
Linux RAID-10 gives the highest random IO performance. This is
because it can stripe read IO across multiple devices, whereas with
RAID-1 a single process will do IO to a single device.
Linux's non-standard implementation of the RAID-10 algorithm can
also generalise to any number of devices: conventional RAID-10
requires an even number of devices with a minimum of 4, but Linux
RAID-10 can work with 2, or even an odd number.
More info about that:
As a result I have rarely felt the need to use RAID-1 for 10+ years.
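For anyone wanting to try the same comparison, both array types can be created on two devices directly with mdadm; device and array names here are hypothetical:

```shell
# A plain RAID-1 mirror across the two devices:
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    /dev/sda1 /dev/nvme0n1p1

# Or, as an alternative on the same two devices, Linux's
# two-device RAID-10 (non-standard, but supported by MD):
mdadm --create /dev/md1 --level=10 --raid-devices=2 \
    /dev/sda1 /dev/nvme0n1p1
```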
But I ran these benchmarks, and what I found is that RAID-1 is THREE
TIMES FASTER than RAID-10 on a random read workload with these devices.
Here is a full write up:
I can see and replicate the results, and I can tell that it's
because RAID-1 is able to direct the vast majority of reads to the
NVMe, but I don't know why that is, or whether it is by design.
I also have some other open questions; for example, one of my tests
against the HDD is clearly wrong, as it achieves 256 IOPS, which
should be impossible for a 5,400 RPM rotational drive.
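The back-of-the-envelope ceiling is easy to check: at 5,400 RPM the platter completes 90 revolutions per second, so even ignoring seek time entirely, one random read per revolution caps out well under 256 IOPS:

```shell
rpm=5400
rps=$((rpm / 60))    # revolutions per second
echo "$rps"          # 90
# One revolution alone costs 1000/90 ≈ 11 ms of rotational latency,
# so 256 random IOPS would need under 4 ms per IO -- not plausible
# without caching or readahead effects skewing the result.
```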
So if you have any comments, explanations, ideas how my testing
methodology might be wrong, I would be interested in hearing.
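In case it helps anyone reproduce or critique the methodology, a random-read fio job along these general lines could be used (target device, block size and queue depth are my assumptions, not the write-up's exact parameters):

```shell
# Direct (page-cache-bypassing) 4k random reads against the array
# for 60 seconds, reporting aggregate IOPS.
fio --name=randread --filename=/dev/md0 --rw=randread \
    --bs=4k --direct=1 --ioengine=libaio --iodepth=32 \
    --numjobs=1 --runtime=60 --time_based --group_reporting
```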
¹ I do however monitor the write capacity of BitFolk's SSDs and they
all show 100+ years of expected life, so I am not really bothered
if that drops to 25 years.
https://bitfolk.com/ -- No-nonsense VPS hosting
I am trying to clone my BitFolk Ubuntu 18.04 VPS using apt-clone.
The issue is that the restore overwrites /etc/apt/sources.list, so it
fails because it cannot connect to apt-cacher.lon.bitfolk.com:80.
Any ideas how to restore the packages?
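One possible workaround, if it is only sources.list being clobbered: the clone archive is a plain tar.gz, so its embedded sources.list can be swapped for one that resolves from the new host before restoring. A sketch from memory (filenames hypothetical, and I believe the archive contains etc/apt/sources.list, but check):

```shell
# Unpack the clone archive, fix the sources list, repack, restore.
mkdir clone
tar -xzf myvps.apt-clone.tar.gz -C clone
"$EDITOR" clone/etc/apt/sources.list   # drop apt-cacher.lon.bitfolk.com
tar -czf myvps-fixed.apt-clone.tar.gz -C clone .
apt-clone restore myvps-fixed.apt-clone.tar.gz
```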
Tel (+351) 910 418 473
I'm trying to use a Docker/Alpine/strongSwan container on my VPS to
connect to another site via IPsec (not my site, not my VPN choice). I'm
a newbie to IPsec and would appreciate some help with what I think is
probably a pretty basic issue.

The IPsec config I have been given (below) is for a site-to-site
connection. There is a machine on 10.99.102.92 at the remote site
sending packets to 172.30.11.2 on my end, and I need the container to be
both the VPN endpoint and the destination machine (172.30.11.2). The
IPsec connection is established just fine, but I can't figure out how to
properly associate the IP address with the tunnel.

I thought this was a simple matter of configuring the IP address on the
tunnel device (tunl0), but this fails in a pretty bizarre manner: if I
configure the tunnel using 'ip addr add 172.30.11.2/30 dev tunl0' then I
receive no packets at all. However, if I configure it with any other
address in the range, I do get the packets for 172.30.11.2.
Can someone tell me how I’m supposed to do this?
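I can't be sure without seeing the config, but if strongSwan is doing policy-based (XFRM) IPsec, then no packets ever traverse tunl0 at all (that device belongs to IPIP tunnels). In that case a common approach is to put the traffic-selector address on loopback as a /32, so no spurious connected route for 172.30.11.0/30 is created:

```shell
# Assumes policy-based IPsec with strongSwan installing the XFRM
# policies; 172.30.11.2 is the local traffic-selector address.
ip addr add 172.30.11.2/32 dev lo
```

The /30 on tunl0 plausibly failed because it created a connected route for 172.30.11.0/30 out of a device the ESP traffic never uses, while any other address in the range left 172.30.11.2 deliverable locally.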
Chris Smith <space.dandy(a)icloud.com>