Hi,
If you do not use or care about IPv6 with regard to your BitFolk VM(s)
you can stop reading this now.
As of this month we have started assigning IPv6 /48 netblocks to new
customers out of BitFolk's own allocation, rather than continuing to
give out /64s from our colo provider's allocation. Yesterday evening
we also assigned /48s to all existing customer VMs.
New installs (including those done yourself) will get set up with your
/48 from the start but existing VMs do need a few changes to make use of
this new address space. If you know what you are doing you can just look
at:
https://panel.bitfolk.com/dns/
to find your /48 assignment and start configuring addresses and routes
from within that. They should work.
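As a sketch only (2001:db8:1234::/48 is a documentation placeholder;
substitute the /48 the panel shows for you, and the interface name and
gateway here are assumptions, so keep whatever your VM currently uses),
adding an address out of your /48 with ifupdown might look like:

```text
# /etc/network/interfaces sketch. 2001:db8:1234::/48 is a placeholder
# prefix; substitute your own /48 from the panel. The interface name
# and gateway are assumptions, keep what your VM already uses.
iface enX0 inet6 static
    address 2001:db8:1234::1/64
    gateway fe80::1
```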
If that doesn't work or if you need more guidance here is an article
aimed at existing customers:
https://tools.bitfolk.com/wiki/New_/48_assignments,_October_2024
If you still have any questions not covered by the Troubleshooting or
Frequently Asked Questions sections then please do ask, by reply email
or support ticket or on Telegram or IRC.
Thanks,
Andy
--
https://bitfolk.com/ -- No-nonsense VPS hosting
Hi,
I do hope that none of you would be taken in by this sort of thing,
but just to warn you: there have been a few phishing emails directed
at various BitFolk addresses pretending to be from us, for example
this one today, which was directed at the "users" mailing list (and
was caught by spam filters):
https://ibb.co/MZh7p7K
All emails that come from BitFolk should have SPF, DKIM and DMARC. If
you notice any that don't then please let us know.
And of course, we wouldn't send out emails asking for passwords.
Thanks,
Andy
--
https://bitfolk.com/ -- No-nonsense VPS hosting
Hi,
Between about 20:25Z and ~20:50Z today host "Jack" lost all
networking. All of the VMs on it became unreachable.
It seems to have been some sort of kernel driver bug in the
Ethernet module as it was "stuck" not passing traffic but the
interface still showed as up.
The hosts have bonded network interfaces to protect against switch
failure, but as the interface stayed up the link was never considered
failed. The bonds are also in active-backup mode, and the
currently-active interface was the one that was stuck, so all traffic
was still trying to go that way.
Networking was restored by setting the link down and up again.
Traffic started to flow, BGP sessions re-established and all was
fine.
We could look into some sort of link keepalive method on the bonded
interfaces as opposed to just relying on link state, but we have
already decided to move away from bonded networking in favour of
separate BGP sessions on each interface. That is how the next new
servers will be deployed; they will not have network bonding. We
have not yet tackled moving existing servers to this setup.
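For reference, the sort of link keepalive mentioned above could be
ARP monitoring on the bond, which probes a target address instead of
trusting carrier state. A hypothetical ifupdown sketch (interface
names and addresses are placeholders, not our actual configuration):

```text
# Hypothetical /etc/network/interfaces sketch. Interface names and
# addresses are placeholders, not BitFolk's real configuration.
auto bond0
iface bond0 inet static
    address 192.0.2.10/24
    bond-slaves eth0 eth1
    bond-mode active-backup
    # Probe a target IP every 1000ms instead of relying on carrier,
    # so an interface that is "up" but not passing traffic fails over
    bond-arp-interval 1000
    bond-arp-ip-target 192.0.2.1
```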
If we had been in the situation without bonding I think we would
have fared better here: there would have been a short blip while one
BGP session went down, but the other would remain and we'd be left
with some alerting and me scratching my head wondering why an
interface that is up doesn't pass traffic.
I will do some more investigation of this failure mode but in light
of doing away with bonding being the direction we are already going,
I don't think I want to alter how bonding is done on what will soon
be a legacy setup.
Thanks,
Andy
--
https://bitfolk.com/ -- No-nonsense VPS hosting
Hi,
An unauthenticated remote root exploit (CVE-2024-6387) has been
discovered in OpenSSH's sshd, including in the versions shipped by
Debian stable and newer, and by most other up-to-date Linux
distributions.
https://security-tracker.debian.org/tracker/CVE-2024-6387
Please make sure you have applied the necessary upgrades.
If for some reason you are unable to apply an upgrade, the issue can
be mitigated by setting LoginGraceTime to 0 in /etc/ssh/sshd_config.
This will make it easier for people to tie up all connection slots,
denying access to legitimate connections, but does avoid the remote
root exploit.
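For clarity, the mitigation is a one-line change (check the syntax
with "sshd -t" and then reload sshd for it to take effect):

```text
# /etc/ssh/sshd_config
# 0 disables the login grace timer whose signal handler the exploit
# races, at the cost of making connection-slot exhaustion easier.
LoginGraceTime 0
```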
Thanks,
Andy
--
https://bitfolk.com/ -- No-nonsense VPS hosting
Hi,
As you may be aware, the next LTS release of Ubuntu is supposed to
be ready in a couple of days.
I've tested a do-release-upgrade from a basic 22.04 cloud image
(what you get when you install 22.04 at BitFolk) and it seemed to go
fine. As usual with this sort of thing though, all the complexity is
in the packages you have installed, so that is no promise that it
would be plain sailing for you.
We will try to get a Xen Shell installer option added for 24.04 as
soon after release as we can, but in the meantime just installing
22.04 and then typing "sudo do-release-upgrade -d" should get you
there.
I *think* you need the "-d" because do-release-upgrade normally
refuses to offer a new LTS release until its first point release.
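If I understand the behaviour correctly, it comes from the release
upgrader's prompt setting, which on an LTS install defaults to:

```text
# /etc/update-manager/release-upgrades (default on LTS installs)
# With Prompt=lts, a new LTS is not offered until its first point
# release exists (24.04.1 in this case); "-d" overrides that.
Prompt=lts
```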
Thanks,
Andy
Ubuntu 24.04 LTS debtest1.vps.bitfolk.space hvc0
debtest1 login: ubuntu
Password:
Welcome to Ubuntu 24.04 LTS (GNU/Linux 6.8.0-31-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/pro
System information as of Tue Apr 23 13:31:00 UTC 2024
System load: 0.07 Memory usage: 6% Processes: 131
Usage of /: 14.6% of 19.20GB Swap usage: 0% Users logged in: 0
Expanded Security Maintenance for Applications is not enabled.
0 updates can be applied immediately.
Enable ESM Apps to receive additional future security updates.
See https://ubuntu.com/esm or run: sudo pro status
ubuntu@debtest1:~$ uname -a
Linux debtest1.vps.bitfolk.space 6.8.0-31-generic #31-Ubuntu SMP PREEMPT_DYNAMIC Sat
Apr 20 00:40:06 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
--
https://bitfolk.com/ -- No-nonsense VPS hosting
Hi,
Since BitFolk moved datacentre from Telehouse to IP House, and since
some customers had previously asked about a renewable energy
statement, I've just updated it.
TL;DR: it's 59% now and they aim for it to be 100% by February 2025.
More detail:
https://tools.bitfolk.com/wiki/Renewable_energy_statement
Thanks,
Andy
--
https://bitfolk.com/ -- No-nonsense VPS hosting
Hi,
Between approximately 23:07Z and 23:14Z today, due to an error in
some work our colo provider was undertaking, two servers suffered
a total network outage and our remaining servers a partial outage.
Apologies for the disruption. It is not expected to recur.
The two worst affected servers were "clockwork" and "macallan".
Our servers have a pair of public network interfaces and are
connected to two separate switches. The error took out one of the
switches but left our ports there enabled, so traffic sent towards
that switch was blackholed.
At the moment we use network bonding in active-backup mode. This
didn't fail over because the link state didn't go down, so the two
servers that had the misconfigured switch as their "active"
interface experienced the longer outage.
We are in the middle of transitioning away from the bonded setup to
one where each interface runs its own BGP session with our colo
provider, with BGP providing the redundancy. We are already doing the
BGP part but have yet
to disable the bonding and split the interfaces back out. When that
is complete — which I would hope to have done in a timescale of
weeks, not months — this failure mode won't exist.
Apologies again,
Andy
--
https://bitfolk.com/ -- No-nonsense VPS hosting
Dear customers,
== TL;DR
- Our colo provider is moving out of Telehouse to another datacentre close by,
and we are going to be moving with them.
- It will mean one short (2h or less) outage for each of your VMs as our
servers are physically moved on multiple dates (to be decided) in
December and January.
- Nothing about your service will change. IPs, specs, prices will all remain
the same. Our colo provider will retain the same transit providers and will
still peer at LINX.
== Background
Since we started in 2006 BitFolk has been hosted with the same colo provider in
the same datacentre: Telehouse London. Telehouse have decided to not renew our
colo provider's contract with them and so they will be moving almost all of
their infrastructure to another datacentre: the nearby IP House.
https://www.ip-house.co.uk/
Given that we have much less than a single rack of infrastructure ourselves,
our options here are to either move with our current colo provider or find
another colo provider. Staying exactly where we are is not an available option.
In light of the good relationship we have had with our colo provider since
2006, we have decided to move with them. This must take place before the middle
of January 2024.
== Planning
At this early stage only broad decisions have been made. The main reason I'm
writing at this time is to give you as much notice as possible of the
relatively short outage you will experience in December or January. As soon as
more detailed plans are made I will communicate them to you.
As soon as the infrastructure is available at IP House we plan to move some of
our servers there - ones with no customer services on them. We will schedule
the physical moves of our servers that have customer VMs on them across
multiple dates in December and January. As IP House is only about 500 metres
from Telehouse, we don't expect an outage of more than about 2 hours for each
server that is moved.
If you cannot tolerate an outage like that, please open a support ticket by
emailing support(a)bitfolk.com as soon as possible. We will do our best to
schedule a live migration of your service from hardware in Telehouse to
hardware in IP House, resulting in only a few seconds of unreachability. You
can contact us about that now, or you can wait until the exact date of your
move is communicated to you, but there are a limited number of customers we can
do this for. So please only ask for this if it's really important to you. If it
is, please ask for it as soon as you can to avoid disappointment.
== Answers To Anticipated Questions
=== Will anything about my service change?
No. It will be on the same hardware, with the same IP addresses and
same specification for the same price.
We're aware that we're well overdue for a hardware refresh and we hope to be
tackling that as soon as the move is done. That will result in a higher
specification for the same price.
=== Will the network connectivity change?
No. Our colo provider will retain the same mix of transit providers that it
currently does, and it will still peer at LINX.
=== When exactly will outages affect me?
We have not yet planned exactly when we will move our servers. As soon as we do
we'll contact all affected customers. We need to co-ordinate work with our colo
provider who are also busy planning the movement of all of their infrastructure
and other customers, and installing new infrastructure at IP House.
We expect there to be several dates with one or more servers moving on each
date. All customers on those servers will experience a short outage, but in
total we would expect only the one outage per VM.
=== Will I be able to change the date/time of my outage?
No. As the whole server that your VM is on will be moving at the specified
time, your options will be to either go with that or seek to have your service
live migrated ahead of that date. Please contact support if you want one or
more of your VMs to be live migrated.
If you have multiple VMs on different servers it is possible that they will be
affected at the same time, i.e. if the two servers that your VMs are on are
both to be relocated in the same maintenance window. If that is undesirable,
again one or all of your VMs will need to be live migrated to servers already
present in IP House.
=== What is live migration?
It's where we snapshot your running VM and its storage, ship the data across to
a new host and resume execution again. It typically results in only a few
seconds of unreachability, and established TCP connections will probably survive.
More information: https://tools.bitfolk.com/wiki/Suspend_and_restore
== Thanks
Thanks for your custom. Despite the upheaval I am looking forward to
a new chapter in a new datacentre!
Andy Smith
Director
BitFolk Ltd
--
https://bitfolk.com/ -- No-nonsense VPS hosting
Hi,
A bug sneaked into the upstream Linux kernel and was included in the
latest Debian stable kernel release. As the Debian 12.3 point release
happened yesterday, if you upgrade to that kernel and boot into it
you will be exposed to an ext4 data corruption bug.
So do not install linux-image-6.1.0-14-amd64 version 6.1.64-1. Wait
for 6.1.66-1 which contains the fix.
https://micronews.debian.org/2023/1702150551.html
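If you want to be sure the broken version cannot be installed by
accident, one option is an APT pin (the filename is just a
suggestion):

```text
# /etc/apt/preferences.d/no-ext4-corruption (hypothetical filename)
# A negative Pin-Priority prevents this exact version from ever
# being installed.
Package: linux-image-6.1.0-14-amd64
Pin: version 6.1.64-1
Pin-Priority: -1
```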
Thanks,
Andy
--
https://bitfolk.com/ -- No-nonsense VPS hosting