Hello,
On Thu, May 30, 2019 at 11:22:01PM +0100, admins wrote:
> Without doing it myself, if I have understood what you have done
> correctly, I would guess it is the imbalance and software raid that
> is doing it.
Clearly it is the imbalance (of device performance), because it
doesn't happen when both devices perform the same. The question is
why RAID-10 can't cope with the unbalanced devices. Is it by design?
Is it generally known? Can it be fixed? etc.
> With raid 1 (mirror set, I think) being done in software the total
> write time from the writer's point of view…
The anomalous result is for random reads. There wasn't an observable
difference between RAID-1 and RAID-10 for writes.
The issue is that the RAID-1 driver is able to parallelise reads
to any member device, but the RAID-10 driver seems to just issue a
fraction to each device.
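For what it's worth, here's a minimal sketch in Python of the kind of
random-read test involved. The device path, block size and IO count
are just placeholders, and it reads through the page cache rather
than doing direct IO, so it only illustrates the shape of the test
rather than reproducing my numbers:

#!/usr/bin/env python3
# Minimal random-read sketch. /dev/md0, the block size and the IO
# count are placeholders; reads go via the page cache (no O_DIRECT),
# so treat any numbers as indicative only.
import os, random, time

DEV = "/dev/md0"   # hypothetical md array device
BLOCK = 4096       # 4 KiB per read
COUNT = 10000

fd = os.open(DEV, os.O_RDONLY)
size = os.lseek(fd, 0, os.SEEK_END)
blocks = size // BLOCK

start = time.monotonic()
for _ in range(COUNT):
    offset = random.randrange(blocks) * BLOCK
    os.pread(fd, BLOCK, offset)
elapsed = time.monotonic() - start
os.close(fd)

print("%d random %d-byte reads in %.2fs (%.0f IOPS)"
      % (COUNT, BLOCK, elapsed, COUNT / elapsed))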
The best answer so far seems to be just that the RAID-10 driver
wasn't designed to handle devices with different performance
characteristics. A different OS's driver or a hardware RAID
implementation might work (and perform) differently.
I don't know whether it is possible to fix or whether anyone is
interested in doing so. Probably RAID in all its forms is on its way
out in favour of ZFS, Btrfs, Ceph, Stratis and other forms of
software-defined storage.
Purists would probably be telling me to centralise storage with
something like ceph, using cache tiering to make best use of the
NVMe devices. That is a lot of infrastructure however. Would love to
try it on someone else's dime!
> Striping can be faster but only where the writes/reads are queued
> optimally across disks with synchronized spindles. Software raid
> across un-synchronized disks will never achieve the same performance.
I'm unconvinced that this is the issue: if you look at how many IOs
went to each device, the split is much closer to 50/50 in the RAID-10
case. The driver appears to be blindly dividing the IOs by the number
of devices present, and would do that regardless of what the devices
actually were.
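For example, a rough way to check that from userspace is to compare
the per-member read counters in /proc/diskstats before and after a
test run. The member device names below are made up; substitute
whatever the array is actually built from:

#!/usr/bin/env python3
# Rough sketch: show how many read IOs each md member device has
# serviced, from /proc/diskstats. Member names are examples only,
# not my actual devices.
MEMBERS = ["nvme0n1", "sda"]

counts = {}
with open("/proc/diskstats") as f:
    for line in f:
        fields = line.split()
        name, reads_completed = fields[2], int(fields[3])
        if name in MEMBERS:
            counts[name] = reads_completed

total = sum(counts.values()) or 1  # avoid dividing by zero if nothing matched
for name, reads in counts.items():
    print("%s: %d reads completed (%.1f%% of total)"
          % (name, reads, 100.0 * reads / total))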
> This is the province of hardware raid and clever drive electronics.
Linux software RAID generally outperforms all hardware RAID except
for writes when the RAID controller has a large persistent write
cache. As pure HBAs (i.e. no RAID functionality) with persistent
write cache are not generally available, that setup has to be
replicated by using a write journal on a smaller flash device. But as
all my devices in this case are flash, that isn't a factor.
Hardware RAID is so 10 years ago! :)
Cheers,
Andy
--
https://bitfolk.com/ -- No-nonsense VPS hosting