Hi,
If you are running a memcached server please make sure that it
either doesn't listen on UDP or else that it is properly firewalled.
Publicly available memcached servers can provide a 50,000x traffic
amplification:
<https://blog.cloudflare.com/memcrashed-major-amplification-attacks-from-por…>
As there is no authentication in the memcached protocol, having it
publicly available is generally a misconfiguration anyway.
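For example, on a typical Debian/Ubuntu install the listening behaviour can be checked and UDP disabled like this (paths and flags below are the common defaults; adjust for your distribution):

```shell
# Check whether memcached is listening on UDP (port 11211):
ss -ulpn | grep 11211

# Option 1: disable UDP entirely. In /etc/memcached.conf add:
#   -U 0
# and make sure it binds only to localhost:
#   -l 127.0.0.1
# then restart the service:
sudo systemctl restart memcached

# Option 2: firewall UDP 11211 off from everything but localhost:
sudo iptables -A INPUT -p udp --dport 11211 ! -s 127.0.0.1 -j DROP
```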
We will start scanning for and nagging about this soon.
Cheers,
Andy
--
https://bitfolk.com/ -- No-nonsense VPS hosting
_______________________________________________
announce mailing list
announce(a)lists.bitfolk.com
https://lists.bitfolk.com/mailman/listinfo/announce
Hi,
I haven't yet had a chance to check this out personally, but
apparently the latest CentOS 7 kernel package doesn't boot under Xen PV:
https://bugs.centos.org/view.php?id=13763
This may be highly relevant to you because an update was just pushed
out for the KPTI feature (to help mitigate Spectre/Meltdown etc in
Linux).
As mentioned in that bug report, there are patches to fix this but
they haven't yet been applied to the main CentOS kernel package.
In the mean time you can use the kernel package from the CentOSPlus
repository which does have this fix and the KPTI one.
https://wiki.centos.org/AdditionalResources/Repositories/CentOSPlus
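A sketch of how that looks in practice (kernel-plus is the CentOSPlus kernel package; check the wiki page above before relying on this):

```shell
# Install the CentOSPlus kernel alongside the stock one:
sudo yum --enablerepo=centosplus install kernel-plus

# Regenerate the bootloader config so the new kernel is offered,
# then reboot into it:
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```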
All of this was researched today by a customer who was having the
problem, and the CentOSPlus kernel resolved it for them.
Cheers,
Andy
--
https://bitfolk.com/ -- No-nonsense VPS hosting
Hi,
Notifications have just been sent out letting you know of the
hour-long maintenance window during which the host that your VPS is
on will be rebooted for security patches.
If you have not received the notification, please check your spam
folders etc., and if you still have no luck please contact
support(a)bitfolk.com.
Apologies for the short notice of this, but now that I feel I have a
reasonable plan covering a large part of the problem space, it is
best to get this done as soon as possible.
I've deliberately left most of the technical detail out of the
reboot notification. The technical details are overwhelming. If you
aren't a particularly technical person then my advice would be:
- Make sure you are running a security-supported release of your
chosen Linux distribution and that you keep it up to date. Between
them and us you should eventually get to safety, just bear in mind
that this is an evolving situation and not many Linux vendors are
willing to push out the latest fixes without extensive testing.
For those who really want them, here are some more technical details.
The newly-deployed hypervisor will:
- support Page Table Isolation, similar to the Linux kernel's KPTI,
to protect against Meltdown.
This feature will protect BitFolk's hypervisor from Meltdown
attacks from all customers.
At the moment all BitFolk VPSes are paravirtual (PV) guests. For
64-bit VPSes, this Xen-level PTI also protects against Meltdown
attacks from within their own kernel or user space. Thus, although
your kernel will report that KPTI is disabled, you will be
protected by Xen's PTI.
It is thought that 32-bit Xen PV guests could still use Meltdown
on themselves; protecting against this requires use of the KPTI
feature inside the Linux kernel. As far as I am aware 32-bit KPTI
is lagging behind the 64-bit version so those with 32-bit VPSes
may wish to consider switching to a 64-bit kernel or upgrading to
a new VPS, if they can't wait.
- be compiled with gcc's new retpoline feature, which replaces
indirect branches with sequences that speculative execution cannot
be tricked into hijacking, therefore protecting you against
variant 2 of Spectre.
This is a complete protection for BitFolk's hypervisor against
Spectre variant 2 attacks from guest kernels. It will not protect
guests from attacks from inside their own VPS. For this you will
need to make sure that your own kernel is compiled with retpoline
support, with a compiler that understands the feature.
I am not aware of the situation in other distributions but as I
type this, Debian has already pushed out a version of gcc that has
the retpoline feature to its stable ("stretch") and oldstable
("jessie") releases. A binary kernel package built with it is not
yet available except in unstable, though.
- have working PVH mode support.
This is another way to run Xen virtual machines. It's
faster/simpler than the PV mode that we currently use, it's also
more secure, and it doesn't require use of qemu like HVM does.
IIRC qemu is about 1.2 million lines of code, many times larger
than Xen itself, and I've always been uncomfortable about it.
Converting you all to PVH mode would provide the best protection
against Meltdown and it would actually be more performant than PV
mode, but sadly it requires some fixes in the guest kernel and in the
bootloader that have only just gone in (like, late 4.14 kernel,
early 4.15). We can't convert people to that mode until Linux
distributions are shipping with new enough kernels, but it will be
useful to have it available for early testing.
- include a couple of other unrelated security patches which will
come out of embargo later.
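One way to see what your own kernel believes about its exposure is the sysfs interface added around Linux 4.15; on older kernels the directory simply won't exist, and remember that under Xen PV the kernel's Meltdown verdict doesn't account for Xen's own PTI. A small sketch of reading it:

```python
import glob
import os

def cpu_vulnerability_report(base="/sys/devices/system/cpu/vulnerabilities"):
    """Return {vulnerability: status} as reported by the kernel.

    Returns an empty dict when the sysfs directory is absent
    (pre-4.15 kernels, or non-Linux systems).
    """
    report = {}
    for path in glob.glob(os.path.join(base, "*")):
        with open(path) as f:
            report[os.path.basename(path)] = f.read().strip()
    return report

if __name__ == "__main__":
    for name, status in sorted(cpu_vulnerability_report().items()):
        print(f"{name}: {status}")
```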
What's yet to come:
- Any sort of mitigation for variant 1 of Spectre.
People are still working on it, both in the Linux kernel, in Xen,
and in other software. It's possible that fixes may only come in
the Linux kernel rather than in Xen.
- Updated Intel microcode.
Intel released some updated microcode which features new CPU
instructions to help avoid these problems, and/or reduce the
performance impact of the techniques used. Shortly after release,
amid many reports of system instability, they withdrew the update
and are not currently recommending its use except for development
purposes.
So, we're still waiting for a stable release of that, and at the
moment it's looking like decent fixes can be done in software so
the urgency of a reboot just for this microcode update is low and
I am inclined to roll it in with the next maintenance. That could
change though.
After the maintenance, what you need to think about:
- If you're 32-bit you need to make a decision about Meltdown,
whether you will wait for a 32-bit kernel fix or look at going to
64-bit by some means.
- The retpoline-compiled hypervisor only protects BitFolk from you. To
protect your own VPS against Spectre variant 2 attacks coming from
within itself (like if it was tricked into running something
malicious) you need a kernel that is compiled with retpoline
support.
These are pretty new, but they are out there. Debian pushed out a
version of gcc with retpoline support to its stable ("stretch")
and oldstable ("jessie") releases recently, but as I write this
the only way to get a binary kernel package that was compiled with
it is to use linux-image-amd64 from the unstable repository.
Presumably that is going to filter through to stable etc in due
course. Until then, you could use the package from unstable, or
use the new gcc package to rebuild a kernel package…
- If you are compiling C/C++ software, do you need to be doing it
with a retpoline-aware compiler?
- Look out for Spectre variant 1 fixes - they may be in your
applications and/or kernel too. Although we can expect more
hypervisor changes, after this (and the microcode) I expect the
bulk of it to be in the kernels and applications.
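For the retpoline points above, a quick way to check a running system (file locations are the usual ones on Debian and CentOS; adjust as needed):

```shell
# Was the running kernel built with retpoline support?
grep CONFIG_RETPOLINE /boot/config-"$(uname -r)"

# Does the installed gcc understand the retpoline flag?
gcc -mindirect-branch=thunk -E -x c /dev/null > /dev/null \
    && echo "gcc is retpoline-capable"
```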
Solutions that are not being pursued:
- Xen have some other mitigation options. They involve running PV
guests inside either a PVH or HVM container. I've investigated
these and they're just too complicated: they remove some useful
functionality, and they still have performance implications about
the same as XPTI's.
Longer term I'd like to be moving guests to PVH mode, and perhaps
optionally HVM. That can't happen in a production capacity until a
person can install a stable release of their favourite Linux and
not have to know what PVH mode is, though.
Regarding <https://github.com/speed47/spectre-meltdown-checker>:
- It will always report that you are running under Xen PV and are
vulnerable to Meltdown. It doesn't do any actual proof of concept
exploit, it just detects PV mode and gives up. Once the new
hypervisor has been deployed 64-bit guests will be protected by
its PTI feature. As mentioned, 32-bit guests will still need to
get PTI from their kernel.
- Its reporting of Spectre variant 2 is accurate, so once you're
running a retpoline-compiled kernel it will detect that.
- Its reporting of microcode and new CPU instructions is, as far as I am
aware, accurate. It is my understanding that once there is new
microcode, guests will see it and be able to use these
instructions. This could change though.
That's all I can think of right now. I appreciate this is a lot to
take in. If you have any questions please ask on or off the list.
Once I get a sense of what is unclear I can perhaps make a wiki
page that helps make things clearer.
Cheers,
Andy
--
https://bitfolk.com/ -- No-nonsense VPS hosting
Hi,
If you do not use BitFolk's Entropy service and have no interest in
doing so, then this email will be of little interest to you and can
be safely ignored.
If you haven't heard about the Entropy service before, please see:
https://tools.bitfolk.com/wiki/Entropy
If you *do* use the Entropy service though, I'm interested to know
what software you have that actually uses /dev/random (and not
/dev/urandom).
Some background to this question:
To provide the Entropy service we use hardware entropy generators,
currently exclusively a pair of EntropyKeys manufactured by a UK
company called Simtec Electronics Ltd.
Despite the fact that these were extremely popular little devices
(compared to other fairly niche gadgets), Simtec always had a
supply problem, and then Simtec imploded as a company. As far as I
know these are now impossible to obtain, the IP is lost forever, etc.
Although I have one spare EntropyKey ready to put in service should
one of the two in service ever die (I've not experienced that yet),
that left me slightly worried as to what I'd do if I needed to get
more.
Then I saw the OneRNG kickstarter, and decided to pledge. So now I
have 5 of (the internal USB version of) these:
http://onerng.info/
I've not yet gone any further than verifying that they keep the
entropy pool full on the machine they're plugged into, but that's
good enough for now. Could be a decade before one of my existing
EntropyKeys dies.
I have since heard that this device proved far more popular than its
manufacturer expected (sense a theme?) and they're now extremely
difficult to get hold of because they need to get a new batch made
in China. I've had multiple people contacting me on the basis of a
tweet I did about getting these, asking me to sell them mine (which
I would, but they didn't want internal USB).
The point I'm trying to make here is that the world of hardware
random number generators is not one with reliable supply lines,
unless you want to spend a fortune on some black box.
So when I came across:
http://www.2uo.de/myths-about-urandom/
I was sad that the nerdery that is the Entropy service may be
misguided, but also happy with the possibility that I might never
have to source a hardware RNG again.
Let's just take the argument posited by the article, that all
(Linux) software should just learn to love /dev/urandom¹, as true.
If you don't agree with this claim, you are disagreeing with some
pretty big names in crypto. The Hacker News commentary on the
article may also prove of interest:
https://news.ycombinator.com/item?id=10149019
At the very least, I feel the Entropy article on the BitFolk Wiki
needs an update in light of this. To justify the service's
existence, if nothing else.
Going further, the question becomes, well, what software is there in
existence that forces use of /dev/random with no configuration that
would allow otherwise? Because even if we agree that all software
*should* be using urandom, if some popular software *refuses* to
without recompile, then we're still going to have to provide an
Entropy service, because doing so is easier than running
non-packaged software.
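For code you control, the fix is usually trivial, because the non-blocking interface is already there. In Python, for instance, os.urandom() reads the kernel CSPRNG (the same source as /dev/urandom) and never blocks:

```python
import os

# Draw 32 bytes (e.g. a 256-bit key) from the kernel CSPRNG.
# Unlike reading /dev/random, this never blocks waiting for the
# entropy pool estimate to refill.
key = os.urandom(32)
print(len(key))  # 32
```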
So Entropy service users, what have you got that uses /dev/random?
Cheers,
Andy
¹ A more correct summary of it is probably, "urandom is fine all the
time except for on initial boot when a small amount of entropy
from outside the CSPRNG is desirable."
On shutdown all fairly modern Linuxes save the current entropy
pool to the filesystem and load it up from there on boot, so it's
only essential on first boot.
--
http://bitfolk.com/ -- No-nonsense VPS hosting
Dear All,
First of all, I'd like to mention that I do not think my problem
has anything to do with BitFolk's infrastructure, since access from
my mobile phone (mobile internet access) seems to work fine.
Connected to the internet through init7.net, I cannot access
https://bitfolk.com using Chrome or Firefox, and I cannot 'ssh
me(a)me.vps.bitfolk.com'. 'wget http://hobley.ch' (a web server of
mine at BitFolk) and 'curl -I http://hobley.ch' only respond if I
try several times. I can reach bitfolk.com and hobley.ch using a
Tor browser.
Inputs on what could cause this are welcome.
Regards,
Sam
Hello,
As you're probably aware, it turns out that pretty much every CPU
made in the last 10 years is broken, and while this affects almost
all computers, this is going to have a particularly nasty effect on
virtualisation providers such as BitFolk.
The Xen project last night released the first version of their
advisory which is XSA-254:
https://xenbits.xen.org/xsa/advisory-254.html
This is with no embargo, because the original embargo had to be
abandoned by the discoverers of the bugs.
As you can see, unfortunately the Xen project have no resolutions
for any of this available as yet.
There are three different issues here:
1. SP1/Spectre (CVE-2017-5753)
2. SP2/Spectre (CVE-2017-5715)
3. SP3/Meltdown (CVE-2017-5754)
There isn't any known resolution for (1) yet.
Xen are working on mitigations for (2).
It's possible to avoid (3) by going to HVM mode, but that is a huge
change that brings other problems with it. It can also be avoided by
running in PVH mode, but very few guest kernels will be new enough
to support that. Xen are hoping to come up with a way to run
PV-inside-PVH but they're not ready with that yet.
There will likely be other strategies to fix or mitigate these
issues in the coming days.
So I'm afraid there currently is no concrete plan because there is
very little information available yet. All I can tell you is that
there will be a need for short-notice reboots to apply relevant
fixes. I will post again when there is any useful information.
Cheers,
Andy
--
https://bitfolk.com/ -- No-nonsense VPS hosting
Hi,
Around 08:06Z today we received an alert regarding host
snaps.bitfolk.com. I found it completely unresponsive over the
network, but was still able to connect to its console.
Despite it believing its network interfaces were up and had link, it
was passing no traffic to the colo switches.
I spent about 30 minutes trying to diagnose this and not getting
anywhere, so decided to try rebooting it. As I had console access I
was able to cleanly shut down all VPSes on snaps first.
The shutdown and boot went without incident and things seemed fine
on boot. By about 08:40Z all VPSes that should be running had been
started, and by now Nagios is clear of alerts¹.
I am aware that snaps had an unexplained outage a few months ago, on
28 September. This time the symptoms are not the same, other than
that the problem is unexplained and clears after a reboot.
Clearly there is something wrong there though and it's going to
happen again, so over the next few days we will be moving customers
off of snaps. We will co-ordinate this directly with customers
involved.
Apologies for the disruption,
Andy
BitFolk Ltd
¹ Except for one customer web server which is waiting for a TLS
passphrase to be supplied before it will start.
--
https://bitfolk.com/ -- No-nonsense VPS hosting
Greetings,
Is it just me, or has BitFolk's IPv6 connectivity been a bit
unreliable these last few months?
These are the five-minute-or-longer IPv6 downtimes Pingdom has
reported for both of my BitFolk nodes. Times are in UTC, rounded to
the nearest full minute.
2017-12-19: 07:38 to 07:48
2017-12-12: 17:27 to 17:49
2017-11-15: 18:29 to 18:36
2017-10-22: 09:59 to 10:21
2017-09-07: 00:23 to 00:45
2017-09-06: 15:17 to 15:39
// Andreas
Hi,
I've lowered the cost of 10GB of additional data transfer by half,
so the changes are:
----------+-------+-------
          | Old   | New
----------+-------+-------
Monthly   | £0.50 | £0.25
Quarterly | £1.40 | £0.70
Yearly    | £5.00 | £2.50
----------+-------+-------
If you do not already pay for additional monthly data transfer then
the rest of this email probably won't be of interest.
Those paying by Direct Debit will just be charged less. Those paying
by PayPal have already had their subscription details altered and
PayPal should have told you about that.
Those paying by standing order will need to take care to adjust
their regular payment, otherwise they will be paying too much and
the excess will build up as credit on the account. You can see the
cost of your current spec at:
https://panel.bitfolk.com/account/config/
Cheers,
Andy
--
https://bitfolk.com/ -- No-nonsense VPS hosting