Hi,
I've added an Ubuntu 18.04 LTS installer to our Xen Shell, so it's
now available for self-install. More info about self-install:
<https://tools.bitfolk.com/wiki/Using_the_self-serve_net_installer>
So, the command is "install ubuntu_bionic". If you don't see it,
make sure you are running version v1.48bitfolk46 of the Xen Shell:
the Xen Shell process stays running between connections, so if you
connected to it before you may still be on an older version and will
need to exit and reconnect.
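A fresh session might look something like this (the "xen shell>"
prompt and the "version" command are from memory and may differ;
"install ubuntu_bionic" is the new part):

    xen shell> version
    v1.48bitfolk46
    xen shell> install ubuntu_bionic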
Please note:
- Obviously this is still pre-release for 18.04. So far I have only
tested it as far as installing it, booting it and connecting to it
over SSH. I would be interested to hear how you get on if you use
it.
- If you are already running Ubuntu you could just
do-release-upgrade into this as normal (see the sketch after this
list).
- As ever, if you'd like to perform a self-install but need to keep
your existing VPS running for a while, we can offer a new account
free for 2 weeks for you to perform your migration:
<https://tools.bitfolk.com/wiki/Migrating_to_a_new_VPS>
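On the do-release-upgrade route, note that until 18.04 is actually
released the upgrader treats it as a development release, so the
invocation would be something like this (a sketch, run from an
existing Ubuntu VPS):

    # An LTS-to-LTS upgrade also needs Prompt=lts in
    # /etc/update-manager/release-upgrades.
    # -d opts in to the development (pre-release) version,
    # which 18.04 still is.
    sudo do-release-upgrade -d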
Cheers,
Andy
--
https://bitfolk.com/ -- No-nonsense VPS hosting
Hi,
The level of SSH scanning is getting ridiculous.
Here are some stats on the number of Fail2Ban bans across all Xen
Shell hosts in the last 7 days:
# each ∎ represents a count of 46. total 4653
59.63.166.104 [ 2037] ∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎ (43.78%)
58.218.198.142 [ 998] ∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎ (21.45%)
59.63.166.105 [ 641] ∎∎∎∎∎∎∎∎∎∎∎∎∎ (13.78%)
58.218.198.146 [ 352] ∎∎∎∎∎∎∎ (7.57%)
58.218.198.161 [ 272] ∎∎∎∎∎ (5.85%)
59.63.188.36 [ 145] ∎∎∎ (3.12%)
192.99.138.37 [ 61] ∎ (1.31%)
103.99.0.188 [ 40] (0.86%)
218.65.30.40 [ 15] (0.32%)
202.104.147.26 [ 13] (0.28%)
42.7.26.15 [ 8] (0.17%)
163.172.229.252 [ 8] (0.17%)
42.7.26.91 [ 8] (0.17%)
198.98.57.188 [ 8] (0.17%)
58.242.83.26 [ 8] (0.17%)
58.242.83.27 [ 8] (0.17%)
182.100.67.82 [ 6] (0.13%)
217.99.228.158 [ 5] (0.11%)
218.65.30.25 [ 4] (0.09%)
117.50.14.83 [ 4] (0.09%)
46.148.21.32 [ 4] (0.09%)
178.62.213.66 [ 3] (0.06%)
116.99.255.111 [ 3] (0.06%)
165.124.176.146 [ 1] (0.02%)
101.226.196.136 [ 1] (0.02%)
First three octets only:
# each ∎ represents a count of 61. total 4653
59.63.166.0/24 [ 2678] ∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎ (57.55%)
58.218.198.0/24 [ 1622] ∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎ (34.86%)
59.63.188.0/24 [ 145] ∎∎ (3.12%)
192.99.138.0/24 [ 61] ∎ (1.31%)
103.99.0.0/24 [ 40] (0.86%)
218.65.30.0/24 [ 19] (0.41%)
42.7.26.0/24 [ 16] (0.34%)
58.242.83.0/24 [ 16] (0.34%)
202.104.147.0/24 [ 13] (0.28%)
163.172.229.0/24 [ 8] (0.17%)
198.98.57.0/24 [ 8] (0.17%)
182.100.67.0/24 [ 6] (0.13%)
217.99.228.0/24 [ 5] (0.11%)
46.148.21.0/24 [ 4] (0.09%)
117.50.14.0/24 [ 4] (0.09%)
116.99.255.0/24 [ 3] (0.06%)
178.62.213.0/24 [ 3] (0.06%)
165.124.176.0/24 [ 1] (0.02%)
101.226.196.0/24 [ 1] (0.02%)
That is with Fail2Ban adding a 10-minute ban after 10 login
failures, so each ban already represents at least 10 failed logins,
i.e. ~47,000 attempts even with banning. If there were no bans at
all this would be hundreds of thousands of login attempts instead of
4,653 bans.
Yes, I can send an abuse report to Chinanet's "Jiangxi telecom
network operation support department". Yes, I can just firewall it
off. But that relies on periodically auditing the log files.
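(The manual "firewall it off" step is only a one-liner per netblock,
e.g. for the worst offender above, assuming a plain iptables setup
on the Xen Shell hosts:

    # Drop SSH traffic from the noisiest /24 in the stats above.
    iptables -I INPUT -s 59.63.166.0/24 -p tcp --dport 22 -j DROP

...but someone still has to spot the netblock in the logs first.)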
There is already an SSH daemon listening on port 922 that is not
subject to Fail2Ban. I would rather not have SSH on port 22 at all,
but in the past I have been told that would not be acceptable
because some people are sometimes on networks from which they can't
connect to port 922. If dropping port 22 would be fine with you then
there's no need to comment, but it would be interesting to hear from
anyone who would still find it a problem.
What are the feelings about setting port 22 Xen Shell access to
require SSH public key auth (while leaving 922 to allow password
authentication as well)?
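Concretely, that would amount to something like the following in
sshd_config on the Xen Shell hosts (a sketch, not deployed config;
"Match LocalPort" is standard OpenSSH):

    Port 22
    Port 922
    # Keys only by default, i.e. on port 22:
    PasswordAuthentication no
    # ...but keep allowing passwords on port 922:
    Match LocalPort 922
        PasswordAuthentication yes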
Do those of you who've added SSH keys want an option to *require*
SSH keys even on port 922?
At the very least the Fail2Ban ban time is going to have to go up
from 10 minutes to, let's say, 6 hours.
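In config terms that is just a bantime bump in the relevant jail,
something like this in a jail.local (a sketch; the jail name and
accepted time syntax vary a little between Fail2Ban versions):

    [sshd]
    enabled  = yes
    maxretry = 10
    # Previously 600 (10 minutes); 21600 seconds = 6 hours.
    # Newer Fail2Ban also accepts "6h".
    bantime  = 21600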
Cheers,
Andy
--
https://bitfolk.com/ -- No-nonsense VPS hosting
Hi,
Around 04:00Z I received alerts that host "snaps" had unexpectedly
rebooted. Upon investigating I found it had indeed reset itself, for
reasons unknown, at about 03:51Z. It wasn't a full power cycle nor a
graceful shutdown; it just reset itself with no useful log output.
Whilst all VPSes did seem to boot up okay, unfortunately it soon
became clear that "snaps" had booted into an earlier version of the
hypervisor - one without the recent Spectre/Meltdown (and
other) security fixes that were deployed last week.
At this point customer VPSes on "snaps" were operating normally
again, but things could not be left in that insecure state, so after
some time spent investigating, between 06:17Z and 06:37Z I did a
clean shutdown and booted into the correct version of the
hypervisor.
I have since established why the incorrect boot entry was
automatically chosen¹ and have fixed that problem. I have not
worked out what caused "snaps" to reset itself. We have been having
some stability issues with "snaps" over the last 6 months and I
think we are going to have to decommission it.
I will come up with a plan and contact customers on "snaps" directly
later today, but in the meantime, if your VPS is on "snaps" and you
would like it moved to another server as a priority, please contact
support@bitfolk.com and we'll get that done. It will involve
shutting your VPS down and booting it a few seconds later on the
target server. None of the details of your VPS will change. Please
indicate what sort of time of day would be best for that to happen.
Apologies for the disruption this will have caused you.
Cheers,
Andy
¹ The newer hypervisor package ships an override to make sure that
the server boots into the hypervisor by default at the next boot.
This is meant to make things easier for people, but all it did here
was override my intentionally-set default boot option with one that
wasn't suitable. This was not noticed in testing because the test
machines had no other versions of the hypervisor present.
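For anyone wanting to guard against the same thing: on a Debian
style Xen host the override in question lives under
/etc/default/grub.d/, and can be disabled while pinning the entry
you actually want. A sketch, with an illustrative menu entry title
rather than an exact record of what I did on "snaps":

    # Tell the Xen package's GRUB snippet not to force its own
    # default (Debian's Xen packages honour this variable):
    echo 'XEN_OVERRIDE_GRUB_DEFAULT=0' >> /etc/default/grub

    # Pin the intended entry by name rather than by position:
    sed -i 's/^GRUB_DEFAULT=.*/GRUB_DEFAULT=saved/' /etc/default/grub
    grub-set-default 'Debian GNU/Linux, with Xen hypervisor'
    update-grub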
--
https://bitfolk.com/ -- No-nonsense VPS hosting