Hello,
I've just been looking at the CentOS network install. It works great
apart from one thing: anaconda/disk druid refuse to install directly
onto a disk device without a partition table. There appears to be no
way to work around this.
It's been a common theme with the Debian and Ubuntu installers and
with various other tools that they get a bit freaked out by the idea
of a filesystem directly on a block device, but in those cases there
have been workarounds.
So, what is so bad about having a partition table?
Well, assume that the customer has /dev/xvda with a filesystem
directly on it, no partition table, and they want to grow that. The
process is as follows:
1. We enlarge their volume on the host.
2. Some time later the customer shuts down and boots their VPS
again to see the larger /dev/xvda.
3. The customer then runs resize2fs /dev/xvda to grow their root
filesystem online into the new space.
You have to admire the simplicity of this; it's pretty hard to get a
shut down / boot wrong, and then you just run resize2fs against the
device with no size argument and it's done.
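For completeness, assuming an ext3 root filesystem sitting directly
on /dev/xvda as described above, the customer's side of it really is
just:

  # after the reboot the VPS sees the larger /dev/xvda
  df -h /              # note the current size
  resize2fs /dev/xvda  # no size argument: grow to fill the device
  df -h /              # confirm the new size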
What a shame that it's unsupported by every major Linux
distribution's installer.
Assume instead that the customer has a partitioned /dev/xvda with
xvda1 as / and xvda2 as a swap partition. Now the process is more
like:
1. We enlarge their volume on the host.
2. Some time later the customer shuts down and boots their VPS
again to see the larger /dev/xvda.
3. Customer can't just enlarge xvda1 because the swap partition
xvda2 sits between it and the new space. So, delete that first.
4. Edit the partition table to delete xvda1 and recreate it larger,
starting at the same sector so the data survives. Note that the
tool then fails to tell the kernel about the new partition layout
because the disk is in use (see the sketch after this list).
5. Reboot again to see the larger xvda1 device.
6. Online resize of xvda1 with resize2fs.
7. Recreate swap (or use a swap file instead).
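To make the comparison concrete, here is a rough sketch of the
partitioned version, assuming xvda1 is / and xvda2 is swap and using
plain fdisk. The swap file size is made up and this isn't a tested
recipe, just an illustration of how much more there is to get wrong:

  swapoff /dev/xvda2    # stop using the swap partition
  fdisk /dev/xvda       # interactively: delete xvda2, delete xvda1,
                        # recreate xvda1 starting at the SAME sector
                        # but running to the end of the disk, write
  # the kernel won't re-read the table while / is mounted, so:
  reboot
  # after the second boot:
  resize2fs /dev/xvda1  # grow the root filesystem into the new space
  dd if=/dev/zero of=/swap bs=1M count=512  # swap file replacing xvda2
  mkswap /swap && swapon /swap              # (and update /etc/fstab)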
You can see that the second process is much more complicated than
the first. Not only that, but it contains a number of terrifying
steps, such as step 4 where the partition for the root filesystem
is deleted. I shudder at the thought of inexperienced sysadmins
doing things like this. I don't like doing it myself.
Disk resizing is quite a common operation at BitFolk. It's a lot
more common than reinstalling. The first process (without partition
table) has been done by customers around 100 times according to my
records, and has never gone wrong that I know of.
So, another possible approach: every VPS is set up with LVM from the
start. xvda is partitioned with a small / (containing /boot) on
xvda1, and the rest of the disk as xvda2, used as an LVM physical
volume broken up into small logical volumes for /usr, /var, /home
and swap. If more space is needed it comes as xvdb, xvdc, xvd..,
each of which the customer turns into a new PV and adds to their
volume group, roughly as sketched below.
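For customers who can drive LVM, the growth path is then something
like this (the volume group and logical volume names are made up for
the example):

  pvcreate /dev/xvdb             # turn the new disk into a PV
  vgextend vg0 /dev/xvdb         # add it to the existing volume group
  lvextend -L +5G /dev/vg0/home  # give /home another 5G
  resize2fs /dev/vg0/home        # grow the filesystem into it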
Advantages
- The most flexibility for future growth.
- Works in all major distribution installers.
Disadvantages
- I expect very few customers know how to drive LVM, and I don't
really want to be in the business of selling or giving away LVM
training.
- Makes it a bit harder for me to poke around in a customer's VPS
install from outside, which is sometimes requested, e.g. if it
becomes compromised.
Something between the last proposal and the one before it: strongly
suggest that people who partition their disks (by choice, or because
they want to use an installer like CentOS's that gives them no
choice) use LVM, but set up VPSes ourselves directly on the block
devices.
At least then the CentOS installer is usable. In the CentOS case
this could actually be enforced by using a kickstart recipe to
automatically partition the disk in some suggested way, although
this does rather take away some of the point of using an installer
(flexibility).
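For what it's worth, such a kickstart fragment might look something
like this; the sizes and names are purely illustrative and I haven't
tested it:

  clearpart --all --initlabel
  part /boot --fstype=ext3 --size=100 --ondisk=xvda
  part pv.01 --size=1 --grow --ondisk=xvda
  volgroup vg_main pv.01
  logvol /    --vgname=vg_main --name=lv_root --size=2048 --grow --fstype=ext3
  logvol swap --vgname=vg_main --name=lv_swap --size=512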
Anyone have any thoughts? Any tricks I've missed here?
Constraints imposed by Xen: First device (xvda or xvda1) must be
ext3, and must contain /boot.
Cheers,
Andy
--
I have just recently purchased a Feathercraft Big Kahuna kayak
does it have a heater?
Of course not. Everyone knows you can't have your kayak and heat it.
                                                    -- James Fidell