Hello,
With the news that RHEL will be free for up to 16 servers:
https://arstechnica.com/gadgets/2021/01/centos-is-gone-but-rhel-is-now-free…
is anyone willing to spend some time trying to install it in a
chroot and then getting it to boot under Xen PVH mode?
Assuming it is possible to access the RPM files, I expect the
process will be very similar to the current CentOS 8 process:
https://tools.bitfolk.com/wiki/Installing_CentOS_8
which can be summarised as:
1. Prepare a chroot
2. Install CentOS base system into it
3. Enable the EPEL repository and switch to the kernel-ml package
so that it works under Xen.
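If the RHEL process really does mirror the CentOS 8 one, the three
steps might be sketched like this (a hypothetical sketch, not a
tested recipe: the chroot path and the repository/package details
are assumptions taken from the post, and "run" only prints each
command so nothing happens by accident):

```shell
#!/bin/sh
# Hypothetical sketch of steps 1-3; repository and package names
# are taken from the post and wiki, not verified against RHEL.
# "run" prints each command - remove the echo to execute for real.
target=/mnt/rhel8
run() { echo "+ $*"; }

run mkdir -p "$target"                                              # 1. prepare a chroot
run dnf --installroot="$target" --releasever=8 -y groupinstall core # 2. base system into it
run dnf --installroot="$target" -y install epel-release             # 3. enable the extra repo
run dnf --installroot="$target" -y install kernel-ml                #    and a kernel that boots under Xen
```

The real work, as with CentOS 8, is in the details the wiki page
covers (fstab, bootloader config and so on).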
I will provide the VM account to do it, and some amount of account
credit once it is done and documented in the wiki.
I have no idea what hoops one must jump through to get a Red Hat
developer account, nor whether it is possible to download the RPMs
like that once you do. If I did, I'd do this myself!
Please contact me off-list if you are interested in helping out.
Cheers,
Andy
--
https://bitfolk.com/ -- No-nonsense VPS hosting
Hello,
I don't think there are many¹ of you running Fedora, but as of
kernel-core-5.9.8-100.fc32 they switched their kernel compression
method from gzip to zstd.
Similarly to Ubuntu — which switched to lz4 from 19.10 onwards —
this leaves it not bootable in Xen PV mode as the PV boot loader
doesn't understand zstd (or lz4) compression.
This may not be obvious to you, as it happened in the middle of a
release and I don't think the change of compression method was
announced anywhere. Nor would such an announcement necessarily
prepare you for the sudden boot failure in any case.
The simplest way forward is to switch to PVH mode:
https://tools.bitfolk.com/wiki/PVH#Fedora
If for some reason you don't want to switch to PVH mode², you will
need to get a kernel that is not compressed with zstd. Possibly
there are other kernels available for Fedora, or you could use
extract-vmlinux to decompress the packaged one.
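For illustration, the compression method can be spotted from the
magic bytes inside the image, which are the same markers
extract-vmlinux scans for (a hedged sketch: the function name is
made up, and a magic straddling an od output line would be missed,
so extract-vmlinux itself remains the robust tool):

```shell
#!/bin/sh
# Sketch: report which compression magic appears in a kernel image.
# Scanning a hex dump like this can miss a magic that straddles an
# od line boundary; extract-vmlinux does the job properly.
kernel_compression() {
    dump=$(od -A n -t x1 -v "$1")
    case "$dump" in
        *"28 b5 2f fd"*) echo zstd ;;  # Xen PV loader can't boot this
        *"02 21 4c 18"*) echo lz4 ;;   # nor this (Ubuntu 19.10+)
        *"1f 8b 08"*)    echo gzip ;;  # this one is fine
        *)               echo unknown ;;
    esac
}

# e.g.: kernel_compression /boot/vmlinuz-5.9.8-100.fc32.x86_64
```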
Cheers,
Andy
¹ We don't have an installer for it, but it can be installed from
the Rescue VM, and at least two of you did that.
² I don't know of any reason not to use PVH mode.
--
https://bitfolk.com/ -- No-nonsense VPS hosting
Hi,
A question for those who use the backup service¹:
Currently we mark the backup run as successful if rsync exits with a
success value.
There are only two exceptions: exit code 2 and exit code 24. Both of
those relate to files which rsync thought existed but ended up not
existing when it came to actually transfer them. I consider those
transient issues related to backing up a filesystem that is in use,
and not a reason to consider the whole backup run as failed.
So what about files that our rsync process cannot read? At the
moment that produces an exit code of 23, and is considered a failed
run, even though everything else got transferred. This eventually
causes a "backup age" monitoring alert because the last successful
backup run was too long ago, even though everything else is
actually being backed up.
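The current policy, expressed as a pure-shell sketch (the wrapper
function is illustrative, not our actual backup code; codes 2 and
24 are treated as okay for the reasons above):

```shell
#!/bin/sh
# Illustrative only - not the real backup job. Maps rsync's exit
# status to a pass/fail verdict under the current policy.
backup_verdict() {
    case "$1" in
        0)    echo pass ;;  # clean run
        2|24) echo pass ;;  # vanished-file races, per the post
        23)   echo fail ;;  # partial transfer, e.g. unreadable files
        *)    echo fail ;;  # any other error
    esac
}

backup_verdict 24   # pass
backup_verdict 23   # fail
```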
If we consider error code 23 as okay then a backup run that failed
to transfer one or more files due to permissions is still considered
a success and the alert goes away. But you would possibly never
find out what happened, because you don't get to see the logs; you
would have to check every file in your backups to be sure they are
all there.
If we continue to consider error code 23 as a failure of the whole
run then you will have to either allow our rsync to read the files
concerned or else put up with perpetual alerts - which you could
silence, but then they would never tell you about other problems.
What should we do?
Note that most of you allow our rsync to run as root so it can
generally read everything and you'll never experience this. But in
theory you could if you found some way to deny root permission to
read something.
I would ask for opinions only from those who make use of the backup
service, as root or not.
Cheers,
Andy
¹ https://tools.bitfolk.com/wiki/Backups
--
https://bitfolk.com/ -- No-nonsense VPS hosting
Hi,
At about 0000Z we started receiving alerts of packet loss and
began investigating. It was found to be an internal issue between hosts
"clockwork" and "limoncello" only. That is, everything on both hosts
was reachable from outside our network and also from inside as long
as it wasn't between those two hosts.
As there is a monitoring node on "limoncello", a number of alerts
were sent out regarding customer services on "clockwork" that it
considered to be down. Those services were not actually down - they
were merely unreachable from "limoncello", and vice versa.
I tracked the issue to one of the two bonded switch ports for
"clockwork"; bringing that interface down and up again appears to
have cleared it. That happened at about 0045Z.
If the problem reoccurs we can down the interface and have it run on
one interface until the port or switch can be changed. If the
problem is actually in the NIC of the server itself things will be
more tricky, but we'll cross that bridge if we come to it.
Cheers,
Andy
--
https://bitfolk.com/ -- No-nonsense VPS hosting
_______________________________________________
announce mailing list
announce(a)lists.bitfolk.com
https://lists.bitfolk.com/mailman/listinfo/announce
Hi,
I'm trying and failing to get overlay network traffic working
between Docker containers on different VPS hosts. The issue seems
to be that neither host is sending VXLAN data on port 4789. I've
been getting help on Docker's IRC channel and the suggestion there
is that issues have been seen before with this and interaction with
other virtualisation technologies (VMware, for example) which also
use VXLAN. Does anyone know if there are issues using swarm/overlay
networking between Xen VPS hosts?
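A sketch of a check for this (the interface name is an assumption):
watch for VXLAN frames on each host's outbound interface while
pinging across the overlay; if nothing shows up on UDP 4789 the
frames are never leaving the host. The command is printed rather
than run so it can be copied to each host and run as root:

```shell
#!/bin/sh
# Print the diagnostic to run (as root) on each host while pinging
# between containers across the overlay. 4789 is the IANA-assigned
# VXLAN port, which Docker's overlay driver uses by default.
iface=eth0
echo "sudo tcpdump -ni $iface udp port 4789"
```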
Regards,
Chris
—
Chris Smith <space.dandy(a)icloud.com>
Hi,
At around 00:15Z we started receiving alerts regarding some servers
on host "elephant". Looking at the machine's console it was
reporting errors with its SAS controller, and was generally
unresponsive to anything requiring block IO, so I had no choice but
to power cycle it.
On boot I couldn't find any issue with its SAS controller, and it
was able to find all its storage devices and seemingly boot
normally. The last few customer VPSes have finished booting as I
type this.
I will keep an eye on things for the next few hours and let you know
about further actions. Please accept my apologies for the
disruption.
This is unrelated to the problems with "elephant" last month which
were tracked down to a kernel bug.
Cheers,
Andy
--
https://bitfolk.com/ -- No-nonsense VPS hosting
Hi,
A few days ago someone asked if we would match a 5% discount that
another hosting company offered to developers of significant open
source projects. After thinking about it for a bit I decided we
would.
So, if you are a registered Debian/Ubuntu Developer, Fedora
maintainer, BSD committer, etc., please feel free to email
support(a)bitfolk.com and ask if you qualify. When you do, please
provide some sort of link to your work so we can verify it.
More information here:
https://tools.bitfolk.com/wiki/Developer_discount
Cheers,
Andy
--
https://bitfolk.com/ -- No-nonsense VPS hosting
Hello,
This email doesn't discuss anything that you need to change (or in
fact CAN change, at the moment), it's mostly a status update and
fishing for feedback so can be safely ignored.
As you may know, all BitFolk VPSes currently run under Xen PV
(paravirtual) mode. We would like to switch to PVH mode instead.
Here are some notes about our investigations of PVH mode:
https://tools.bitfolk.com/wiki/PVH
On the one hand we can get started right away: BitFolk's hypervisors
support it, the newer stable releases of Debian and Ubuntu support
it, the mainline kernel supports it, the rescue VM will work, etc
etc. All new customers can be run in PVH mode as long as they don't
choose an older release.
On the other hand, it's not quite that simple, as there are still
a huge number of existing customers whose kernels won't support it.
I think it probably makes sense to immediately switch the Rescue VM
to PVH mode, and make all installers also use it where the chosen
install is known to support it. i.e. make it so that if you go and
install Debian buster or testing or Ubuntu 20.04 right now, it
silently switches you to PVH mode, because it's known to work.
At the same time, we can add a Xen Shell command to toggle you
between PV and PVH so you can try it out. If it doesn't work you
could switch back, try again later at your leisure, etc. We don't
know what Linux distribution you are running and can't tell without
snooping on your console, so we can't make a good enough guess on
your behalf.
I don't really want to make a big thing of this because it's too
complicated, so I'm thinking of hiding it away unless you need it.
New customers / installs shouldn't have to think about it. It's only
a concern for those of you with older, possibly 32-bit installs. For
that reason I don't think I want to expose any of these details on
the web panel or allow switching of the guest type there.
On the subject of 32-bit PV support the deadlines are fairly laid
back as — assuming you have no need of running a 5.9+ kernel —
32-bit PV booting could possibly¹ continue to work until 2023, which
is the security support EOL for the last version of Xen that will
support it.
We're currently using Xen 4.10 and 4.12, and with the release of
Debian 11 (bullseye) I will probably be looking to do a rolling
upgrade to that with Xen 4.14.
Any questions or thoughts on this?
Cheers,
Andy
¹ It is conceivable that some new CPU side channel attack lands and
it's found that there is no way to stop a PV guest from snooping
on the memory of the hypervisor or other PV guests. In that case
there will be a sudden scramble to switch everyone.
--
https://bitfolk.com/ -- No-nonsense VPS hosting
Hello,
Perhaps of interest to those of you still on 32-bit installs¹,
Debian now has a "crossgrader" package that has just had its first
version uploaded to the "unstable" suite:
https://salsa.debian.org/crossgrading-team/debian-crossgrading/
I have used this manual process:
https://wiki.debian.org/CrossGrading
many times to go from i386 to amd64, but it is unsupported and scary
and depending on the exact packages you have installed can be very
tricky. It will ask you to approve actions that may destroy your
system.
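For a flavour of what the manual route involves, the opening moves
per that wiki page look roughly like this (a hedged sketch: "run"
only prints each command, and you should not attempt any of it
without backups and console access ready):

```shell
#!/bin/sh
# First steps of a manual i386 -> amd64 crossgrade, per the Debian
# CrossGrading wiki page. Printed rather than executed; remove the
# echo to run for real, as root, after taking backups.
run() { echo "+ $*"; }

run dpkg --add-architecture amd64   # teach dpkg the new architecture
run apt update                      # fetch amd64 package lists
run apt install linux-image-amd64   # a kernel that can run both
```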
I haven't yet had a chance to try "crossgrader" myself, but I will
try it next time I have to do this. I would be interested in
reading about your experiences of using it.
I gather that it would be wise to fully upgrade to the latest stable
release (buster) on i386 before trying to crossgrade to amd64,
either by this method or manually.
For those keen to switch to 64-bit, many of the benefits can be
obtained without most of the risk by changing only your kernel to
the amd64 architecture. After following the wiki article above to
the end of the "Install a kernel that supports both architectures in
userland" section, you would:
1. Connect to Xen Shell and use the "console" command if not already
in it.
2. Halt your VPS.
3. Use the Xen Shell "arch" command to switch to the x86-64
bootloader.
4. Use the Xen Shell "boot" command to boot it.
5. When your own grub menu appears, select the amd64 kernel that is
listed, not the i686 one.
Provided everything seems to work okay you can then remove the i686
kernel packages, and the system will keep the -amd64 kernel packages
up to date. Your VPS is then a 64-bit one running a 64-bit kernel
but with an almost entirely 32-bit userland.
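That cleanup could be sketched like this (illustrative only; the
sample "dpkg -l" lines and package names are made up):

```shell
#!/bin/sh
# Pick the still-installed i686 kernel packages out of "dpkg -l"
# output, ready for "apt purge" once the amd64 kernel is known-good.
list_i686_kernels() { awk '/^ii +linux-image-[^ ]*686/ {print $2}'; }

# Illustrative input; on a real system use: dpkg -l | list_i686_kernels
printf '%s\n' \
  'ii  linux-image-4.19.0-13-686    4.19.160-2  i386   Linux kernel' \
  'ii  linux-image-4.19.0-13-amd64  4.19.160-2  amd64  Linux kernel' |
list_i686_kernels   # prints only the -686 package
```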
Changing over the 32-bit userland by either the crossgrader package
or manually is where most of the complexity lies.
For context as to why you might want to switch away from 32-bit:
- The next major Xen release will remove 32-bit PV support. We'll
switch to PVH mode before then to allow remaining 32-bit guests to
still run.
- At the recent Xen Developer summit it was stated that "anyone
still running 32-bit PV is setting fire to 30% of their CPU".
- All manner of 32-bit-specific fixes, including security, are being
delayed and overlooked in the upstream Linux kernel, so switching
away from running an i686 kernel would be a good idea.
Cheers,
Andy
¹ 41.1% of customer VMs, according to our database.
--
https://bitfolk.com/ -- No-nonsense VPS hosting