Dear customers,
== TL;DR
- Our colo provider is moving out of Telehouse to another datacentre close by,
and we are going to be moving with them.
- It will mean one short (2h or less) outage for each of your VMs as our
servers are physically moved on multiple dates (to be decided) in
December and January.
- Nothing about your service will change. IPs, specs, prices will all remain
the same. Our colo provider will retain the same transit providers and will
still peer at LINX.
== Background
Since we started in 2006 BitFolk has been hosted with the same colo provider in
the same datacentre: Telehouse London. Telehouse have decided not to renew our
colo provider's contract with them, and so they will be moving almost all of
their infrastructure to another datacentre: the nearby IP House.
https://www.ip-house.co.uk/
Given that we have much less than a single rack of infrastructure ourselves,
our options here are to either move with our current colo provider or find
another colo provider. Staying exactly where we are is not an available option.
In light of the good relationship we have had with our colo provider since
2006, we have decided to move with them. This must take place before the middle
of January 2024.
== Planning
At this early stage only broad decisions have been made. The main reason I'm
writing at this time is to give you as much notice as possible of the
relatively short outage you will experience in December or January. As soon as
more detailed plans are made I will communicate them to you.
As soon as the infrastructure is available at IP House we plan to move some of
our servers there - ones with no customer services on them. We will schedule
the physical moves of our servers that have customers' VMs on them across
multiple dates in December and January. As IP House is only about 500 metres
from Telehouse, we don't expect an outage of more than about 2 hours for each
server that is moved.
If you cannot tolerate an outage like that, please open a support ticket by
emailing support(a)bitfolk.com as soon as possible. We will do our best to
schedule a live migration of your service from hardware in Telehouse to
hardware in IP House, resulting in only a few seconds of unreachability. You
can contact us about that now, or you can wait until the exact date of your
move is communicated to you, but there are a limited number of customers we can
do this for. So please only ask for this if it's really important to you. If it
is, please ask for it as soon as you can to avoid disappointment.
== Answers To Anticipated Questions
=== Will anything about my service change?
No. It will be on the same hardware, with the same IP addresses and
same specification for the same price.
We're aware that we're well overdue for a hardware refresh and we hope to be
tackling that as soon as the move is done. That will result in a higher
specification for the same price.
=== Will the network connectivity change?
No. Our colo provider will retain the same mix of transit providers that it
currently does, and it will still peer at LINX.
=== When exactly will outages affect me?
We have not yet planned exactly when we will move our servers. As soon as we do
we'll contact all affected customers. We need to co-ordinate work with our colo
provider who are also busy planning the movement of all of their infrastructure
and other customers, and installing new infrastructure at IP House.
We expect there to be several dates with one or more servers moving on each
date. All customers on those servers will experience a short outage, but in
total we would expect only one outage per VM.
=== Will I be able to change the date/time of my outage?
No. As the whole server that your VM is on will be moving at the specified
time, your options will be to either go with that or seek to have your service
live migrated ahead of that date. Please contact support if you want one or
more of your VMs to be live migrated.
If you have multiple VMs on different servers, it is possible that they will
be affected at the same time, i.e. if the two servers that your VMs are on
are both to be relocated in the same maintenance window. If that is
undesirable, one or more of your VMs will need to be live migrated to
servers already present in IP House.
=== What is live migration?
It's where we snapshot your running VM and its storage, ship the data across
to a new host, and resume execution there. It typically results in only a few
seconds of unreachability, and TCP connections will usually stay alive.
More information: https://tools.bitfolk.com/wiki/Suspend_and_restore
== Thanks
Thanks for your custom. Despite the upheaval I am looking forward to
a new chapter in a new datacentre!
Andy Smith
Director
BitFolk Ltd
--
https://bitfolk.com/ -- No-nonsense VPS hosting
Hello all,
Would any of you know if the following scenario is "doable"?
We run an old Exchange 2010 infrastructure at my work, and there is no way
they are going to spring for newer: getting them to go from 2003 to 2010
was an ordeal...
Could I set up an Ubuntu Postfix "relay" server between Exchange and the
Internet, that also permits one particular mailbox to be accessible from a
Dovecot install on the same server (as well as relaying the mail for that
mailbox to Exchange)?
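Something along these lines is what I had in mind (a sketch only; all the
hostnames and addresses are invented, and TLS/anti-spam settings are left
out):

```
# /etc/postfix/main.cf (fragment); hostnames here are invented
relay_domains = example.com
transport_maps = hash:/etc/postfix/transport
recipient_bcc_maps = hash:/etc/postfix/recipient_bcc

# /etc/postfix/transport: everything for the domain goes on to Exchange
example.com    smtp:[exchange.internal.example.com]

# /etc/postfix/recipient_bcc: also deliver a copy of the one special
# mailbox to a local user that Dovecot serves
special@example.com    special-local@localhost
```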
Yes/no and pointers most welcomed.
Kind regards
Murray Crane
Hello,
I was reading about this incident of alleged lawful intercept used
on Hetzner and Linode in Germany in order to successfully MitM
TLS-encrypted traffic for a period of months:
https://notes.valdikss.org.ru/jabber.ru-mitm/
The link at the bottom, with some ideas on detection and mitigation, is
also worth a read:
https://www.devever.net/~hl/xmpp-incident
I am still left wondering why the attacker did not use a block
device and/or memory snapshot of the Linode VM in order to extract
the real TLS key material and avoid having to issue new ones, which
appeared in CT logs.
At the moment my best guess is that perhaps the filesystem was
protected by LUKS and the skills to extract key material from a
memory dump, while existing, were in short supply. Meanwhile, the
procedure to MitM network traffic through their own hardware on
Hetzner and Linode is probably very well documented and tested, so
could likely be done straight away, and it was perhaps considered
expedient to just risk the new certs being noticed.
DNSSEC+CAA start to seem like very good ideas.
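For example, a CAA record can restrict which CA may issue for a domain, and
RFC 8657's accounturi parameter can even pin issuance to one specific ACME
account (a hypothetical zone fragment; the domain and account URI are
placeholders):

```
; hypothetical zone fragment: only the named CA, and only via the
; named ACME account, may issue certificates for this domain
example.com.  IN CAA 0 issue "letsencrypt.org; accounturi=https://acme-v02.api.letsencrypt.org/acme/acct/12345"
example.com.  IN CAA 0 iodef "mailto:hostmaster@example.com"
```

Combined with DNSSEC, an attacker in the network path can't simply strip or
forge those records during the CA's validation lookup.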
Thanks,
Andy
--
https://bitfolk.com/ -- No-nonsense VPS hosting
Hi,
After a recent update of the cloud-init package on Ubuntu 22.04
something appears to be going wrong and cloud-init is being run at
every boot - WHICH WILL LEAD TO YOUR NETWORKING CONFIG BEING
DELETED.
In addition, the password of the "ubuntu" user is locked and the SSH
host keys are regenerated.
I do not yet know whether this is the result of some misuse of
cloud-init on our part, or some bug in cloud-init. The outcome is so
bad that I have to warn you about this as soon as possible, before
I've fully understood the issue.
It is safe — and at this stage recommended — to simply remove the
cloud-init package which serves no purpose at BitFolk after first
boot.
$ sudo apt remove cloud-init
If it is too late for you and you have already rebooted, and are now
wondering why your VM has no network and is trying to DHCP for one,
here's how to fix things.
1. Connect to your Xen Shell with
ssh accountname(a)accountname.console.bitfolk.com
More info: https://tools.bitfolk.com/wiki/Xen_Shell
2. Work out how you're going to get root access from a console login
prompt. If you have a user other than the initial "ubuntu" one,
you'll use that. If you don't, you'll need to reset the password for
"ubuntu" as it has now been locked.
If you have a login already, use the "console" command and log in to
your VM at its console.
If you need to reset the "ubuntu" password:
a) Make sure the VM is shut down:
xen shell> shutdown
b) Follow these instructions, substituting "ubuntu" for "root":
https://tools.bitfolk.com/wiki/Resetting_root_password
then "boot" your VM again and log in as "ubuntu".
3. At this point you're logged in as "ubuntu" on a VM with no
network.
Put /etc/netplan/50-cloud-init.yaml back to how it was. Here is
an example file:
https://tools.bitfolk.com/wiki/Ubuntu#Migrate_to_netplan
The "gateway" statements are deprecated but will still work.
Make sure to "chmod go= /etc/netplan/50-cloud-init.yaml" so it
has correct permissions, then:
$ sudo netplan generate
$ sudo netplan apply
Your networking should now work again.
4. Remove cloud-init:
$ sudo apt remove cloud-init
It will now be safe to reboot in future.
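If you prefer to reconstruct the file by hand, a minimal static config along
the lines of the wiki example might look like this (every address below is a
placeholder from the documentation ranges; substitute the IPs and gateways
actually assigned to your VM):

```yaml
# /etc/netplan/50-cloud-init.yaml: hypothetical example. All addresses
# are placeholders; use your own assigned IPs and gateways. This uses
# "routes" rather than the deprecated "gateway4"/"gateway6" statements.
network:
  version: 2
  ethernets:
    eth0:
      addresses:
        - 192.0.2.10/24
        - 2001:db8::10/64
      routes:
        - to: default
          via: 192.0.2.1
        - to: default
          via: 2001:db8::1
      nameservers:
        addresses:
          - 192.0.2.53
```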
As I say I am still looking into where the problem lies here and the
best way to fix it.
The example netplan config linked above has some deprecated statements
in it which I will also fix (if it no longer contains "gateway4" etc.
then I had already done so by the time you read this), but it does
still work.
Thanks,
Andy
--
https://bitfolk.com/ -- No-nonsense VPS hosting
Hello,
I'm thinking about running a small web & email server at home. We have a fixed IP address (IPv4 only; we don't have an IPv6 address).
I'm wondering if anyone knows of any issues we might come across. For instance, will services like Spamhaus treat it as a residential IP and blacklist it for that reason?
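I can at least check whether our address is already listed. A DNSBL lookup
reverses the IPv4 octets before querying; a sketch (203.0.113.5 below is a
documentation placeholder standing in for our real IP):

```shell
# Build a Spamhaus ZEN query name (ZEN includes the PBL list of
# residential/dynamic ranges). DNSBLs are queried with the IPv4
# octets reversed. 203.0.113.5 is a placeholder; use your real IP.
reverse_ip() {
    local IFS=.
    set -- $1
    echo "$4.$3.$2.$1"
}
query="$(reverse_ip 203.0.113.5).zen.spamhaus.org"
echo "$query"   # prints 5.113.0.203.zen.spamhaus.org
# dig +short "$query"  # no answer = not listed; 127.0.0.10/11 = PBL
```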
Robin
Hi,
This email is only of interest to users of Ubuntu 22.04 and beyond.
I'm just dealing with a support ticket where an Ubuntu 22.04 VM was
rebooted and lost its networking configuration.
For Ubuntu 22.04, initial networking (and other) configuration is
baked into a "seed image" which is mounted as /dev/xvdz at first
boot. cloud-init then uses that information to create a netplan
config file in /etc/netplan/. This VM was found to no longer have
that config, but instead only have some sort of default config that
tried to use DHCP.
I don't yet know how this happened. I would like to.
If this happens to you, you will need to log in by the Xen Shell
console and configure your networking again. Here is an example of a
netplan config for BitFolk:
https://tools.bitfolk.com/wiki/Ubuntu#Migrate_to_netplan
After generating and applying that (with your correct details) I
would expect things to be fine.
If this does happen to you I would very much like to know and also
if you have any insight into how your working configuration got
deleted.
I've only sent this to the users list (not announce@) at this stage
because I have no idea what caused it or if it's likely to happen
again to anyone else.
Thanks,
Andy
--
https://bitfolk.com/ -- No-nonsense VPS hosting
Hi,
Apologies for exposing my ignorance in public like this, but can somebody tell me how I'd know if my Debian Bookworm system has been patched to ensure it's no longer vulnerable to the "Looney Tunables" privilege escalation (https://www.debian.org/security/2023/dsa-5514)?
The fix is apparently in the most recent glibc source package. I don't seem to have that glibc package installed (and it's a source package, not a binary?), but I read that stock installs of Debian (and most linuxes) are vulnerable. Which actual binary packages need to be updated to fix the vulnerability in the dynamic loader, and how does this relate to the source package?
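From what I can piece together, the dynamic loader ships in the libc6 binary
package, which is built from the glibc source package, so I'd check something
like this (the fixed version string is the one DSA-5514 lists for bookworm;
corrections welcome):

```shell
# ld.so lives in the libc6 binary package, built from the glibc
# source package. DSA-5514 gives 2.36-9+deb12u3 as the fixed
# bookworm version for CVE-2023-4911 ("Looney Tunables").
installed="$(dpkg-query -W -f='${Version}' libc6 2>/dev/null || echo none)"
fixed="2.36-9+deb12u3"
if [ "$installed" = none ]; then
    echo "libc6 not found (not a Debian system?)"
elif dpkg --compare-versions "$installed" ge "$fixed"; then
    echo "libc6 $installed: patched against CVE-2023-4911"
else
    echo "libc6 $installed: still vulnerable, upgrade needed"
fi
```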
Cheers,
jmi
--
Jamie MacIsaac
jamie(a)macisa.ac