Hi,
If any of you are actively using an IPv6 /56 that we've routed to you,
and you don't overly mind rebooting your VM to test something (maybe a
couple of times), could you get in touch off-list please?
Our records say there are only 19 of you, so it's a bit of a long shot.
I'd rather not try this with someone who doesn't normally make use of a
/56, as you may not immediately spot the difference between it working
and not. Also, I've already tested as many setups as I can think of, so
left to my own devices I'd only be trying things I've already tested!
I can add a little bit of free service for the inconvenience.
(I'm trying to set Reply-To on this, but Mailman will probably override
it, so be careful not to reply on-list if that wasn't your intention.)
Thanks!
Andy
--
https://bitfolk.com/ -- No-nonsense VPS hosting
Hi,
I do hope that none of you would be taken in by this sort of thing, but
just to warn you: there have been a few phishing emails directed at
various BitFolk addresses pretending to be from us, for example this
one today, which was directed at the "users" mailing list (and was
caught by spam filters):
https://ibb.co/MZh7p7K
All email that comes from BitFolk should pass SPF, DKIM and DMARC
checks. If you notice any that doesn't then please let us know.
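If you'd like to look those records up yourself, here's a minimal
sketch using the third-party dnspython library. (DKIM can't be checked
this way without a selector, which comes from each individual message's
DKIM-Signature header.)

```python
# Minimal sketch: look up a domain's SPF and DMARC policy records.
# Requires the third-party dnspython package (pip install dnspython).
import dns.resolver

def txt_records(name: str) -> list[str]:
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    return [b"".join(rdata.strings).decode() for rdata in answers]

domain = "bitfolk.com"
spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
dmarc = [r for r in txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]
# DKIM keys live at <selector>._domainkey.<domain>; the selector varies
# per message, so inspect the DKIM-Signature header to find it.
print("SPF:  ", spf or "none found")
print("DMARC:", dmarc or "none found")
```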
And of course, we wouldn't send out emails asking for passwords.
Thanks,
Andy
--
https://bitfolk.com/ -- No-nonsense VPS hosting
Hi,
## TL;DR:
Migrations between servers coming in the near future for the purposes of
a hardware refresh, with an upgrade to VM specs, a bit like in 2015–2016.
## Hardware refresh
In the coming weeks we hope to get started with a long overdue hardware
refresh. This will include some sort of upgrade to the base VM
specification, the exact details of which haven't yet been decided.
The last time we did this was in 2015–2016 and I expect things to go in
much the same way, though it should be slightly less complex this time
as back then we also made some changes to the VM disk layout. Here's
more info on how that went:
https://tools.bitfolk.com/wiki/Hardware_refresh,_2015-2016
Three of the existing servers have some life left in them, but they do
not currently have 25GE network interfaces, so it's going to be a
priority to clear customers off those first so they can be taken out
of service and upgraded. Those servers are "elephant", "limoncello"
and "talisker".
I anticipate sending a direct email to the main tech contact for each
VM on a candidate server, asking if they would like to have their VM
migrated to new hardware and get the upgraded specs at the same time.
Past experience has shown that the majority of people won't be
interested in the upgrade and would rather not have to do anything, so
this communication is going to have to say something like, "if we don't
hear back from you to schedule this within 15 days, we will schedule it
ourselves for a date no sooner than 30 days from now".
Since eventually all servers require at least an OS upgrade even if
they are not being decommissioned, it is, I'm afraid, inevitable that
all customer services will have to be migrated between servers. This is
going to take many months. We will try to be as flexible as possible
with people who need that.
For those who have never experienced their VM being migrated between
servers, it is not a big deal. We shut it down and boot it a few
seconds later on its new server. At your option¹ we can also do a live
migration which works most of the time² and results in no actual
shutdown. Nothing about the VM changes, except that its specs will be
upgraded.
## IPv6 changes
We also need to change how we do IPv6, and there's no reason why we
can't start doing that at the same time, so unless things get
overwhelming you might see something soon about that also.
Basically we need to start assigning customer VMs IPv6 address space out
of BitFolk's 2a0a:1100::/29 rather than the /48 of Jump's that we've been
using since 2005.
I expect to reserve a /48 for each VM, but as the majority of people
use either zero or one IPv6 address, only one IPv6 address from the new
assignment will initially be routed to your VM. For those wanting more,
we will add a feature to the Panel and/or Xen Shell to expand the
routing to a /64 or the full /48 (and maybe something in between as
well). The goal is for that to not require a support ticket.
This sort of thing is desirable to prevent neighbour table exhaustion
attacks, where someone cycles through many addresses in a routed /64
(or larger), forcing the router to attempt neighbour discovery, and
hold a neighbour table entry, for each one until the table fills up.
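To put some numbers on those prefix sizes, here's a small sketch using
Python's standard ipaddress module (2001:db8::/64 is the documentation
prefix, standing in for any routed /64):

```python
# Illustrating the scale of the prefixes mentioned above
# (standard library only).
import ipaddress

block = ipaddress.ip_network("2a0a:1100::/29")  # BitFolk's allocation

print(2 ** (48 - block.prefixlen), "/48 reservations fit in the /29")  # 524288
print(2 ** (64 - 48), "/64s fit in each /48")                          # 65536

# Why routing a whole /64 by default is risky: a single /64 holds 2**64
# addresses, so an attacker cycling through them can force a router to
# hold far more neighbour entries than its table has room for.
print(ipaddress.ip_network("2001:db8::/64").num_addresses)
```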
I do not expect to take away existing VMs' legacy assignments out of
the 2001:ba8:1f1::/48 provided by Jump, but after we begin rolling this
out, new VPS orders and new self-installs will not get the legacy IPv6
space. At some point in the fairly distant future a retirement date
will be announced for that range.
## Feedback?
So that's what's coming up. If you have any feedback or questions we're
keen to hear them.
There is no need to volunteer now to be first for upgrades or anything,
as you will be contacted and that will be your chance to do so!
Thanks,
Andy
¹ https://panel.bitfolk.com/account/config/#prefs
² There have been some rare problems with this in the past which
resulted in disk corruption, but in the case of migration between
servers we always have a copy of your disk on the source server, so
it's pretty safe. All of BitFolk's own VMs live migrate.
--
https://bitfolk.com/ -- No-nonsense VPS hosting
Hi,
Between about 20:25Z and 20:50Z today, host "Jack" lost all
networking. All of the VMs on it became unreachable.
It seems to have been some sort of kernel driver bug in the
Ethernet module, as it was "stuck" not passing traffic but the
interface still showed as up.
The hosts have bonded network interfaces to protect against switch
failure, but as the interface stayed up, the link was not considered
failed. Also, they are in active-backup mode and the currently active
interface was the one that was stuck, so all traffic was trying to
go that way.
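For the curious, the kernel's view of a bond is visible under /proc.
Here's a minimal sketch of reading it, assuming a bond device named
bond0 (an assumption, not necessarily our naming):

```python
# Read the bonding driver's status file and report the active slave
# and each slave's MII status (standard library only).
from pathlib import Path

def bond_status(dev: str = "bond0") -> dict:
    info = {"active_slave": None, "slaves": {}}
    current = None  # no slave seen yet; skips the bond-level MII line
    for line in Path(f"/proc/net/bonding/{dev}").read_text().splitlines():
        key, _, value = line.partition(":")
        value = value.strip()
        if key == "Currently Active Slave":
            info["active_slave"] = value
        elif key == "Slave Interface":
            current = value
        elif key == "MII Status" and current:
            # This is the status that stayed "up" for us even though
            # the active NIC had wedged and was passing no traffic.
            info["slaves"][current] = value
    return info

print(bond_status())
```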
Networking was restored by setting the link down and up again.
Traffic started to flow, BGP sessions re-established, and all was
fine again.
We could look into some sort of link keepalive method on the bonded
interfaces, as opposed to just relying on link state, but we have
already decided to move away from bonded networking in favour of
separate BGP sessions on each interface. That is how the next new
servers will be deployed; they will not have network bonding. We
have not yet tackled moving existing servers to this setup.
If we had been in that situation without bonding, I think we would
have fared better here: there would have been a short blip while one
BGP session went down, but the other would have remained, and we'd
have been left with some alerting and me scratching my head wondering
why an interface that is up doesn't pass traffic.
I will do some more investigation of this failure mode, but given
that doing away with bonding is the direction we are already going,
I don't think I want to alter how bonding is done on what will soon
be a legacy setup.
Thanks,
Andy
--
https://bitfolk.com/ -- No-nonsense VPS hosting
Hi,
This should have limited effect on anyone here: if you are still
running 32-bit Debian, it is long out of support, so you won't be
getting upgrades anyway.
No more 32-bit x86 kernels in Debian:
https://lists.debian.org/debian-kernel/2024/09/msg00138.html
What it does mean is that there are even fewer people who are going
to find and fix any bug in 32-bit x86 Linux.
In practical terms for Debian it means that the next release, Debian
13 (trixie), will be the first one not to have an i686 kernel or
installer, but as I say, for BitFolk purposes amd64 has been
required since Debian 11.
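If you're not sure what architecture your VM is running, here's a
quick sketch using only Python's standard library:

```python
# Report the kernel architecture, as relevant to the i686 removal.
import platform

machine = platform.machine()  # e.g. "x86_64" on amd64, "i686" on 32-bit
if machine in ("i386", "i486", "i586", "i686"):
    print(f"{machine}: 32-bit x86, so no kernel from Debian 13 onwards")
else:
    print(f"{machine}: not affected by the i686 kernel removal")
```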
Thanks,
Andy
--
https://bitfolk.com/ -- No-nonsense VPS hosting