Hi there,
I'm trying to troubleshoot an issue on my home network, using my Bitfolk VPS. I'm pretty sure the issue is with my ISP's network, but to be sure: is there anything on Bitfolk's network that would be filtering incoming UDP packets to port 500?
That's my question, but for the sake of clarity, this is the issue I'm actually trying to solve.
I can't get WiFi calling to work on my home network. It used to work, but around the time I got a new router from my ISP (Hyperoptic), it stopped working. I am pretty ignorant about how WiFi calling actually works, but it seems to need to send UDP packets to port 500 (IKE) to establish an IPsec tunnel into the telco network.
I used netcat to try sending packets to my Bitfolk host, and netcat on said Bitfolk host to receive them. It seems I can send and receive on ports 499 and 501, but not on port 500.
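For what it's worth, the same check can be scripted rather than done by hand with netcat. This is a sketch using Python's standard socket module; the probe payload and helper name are my own, and bear in mind that a missing UDP reply can mean filtering, loss, or simply no listener:

```python
import socket

def udp_echo_test(host: str, port: int, timeout: float = 2.0) -> bool:
    """Send a UDP probe to host:port and report whether any reply arrives.

    Assumes something on the far end (e.g. a netcat listener echoing
    input back) will answer; silence proves nothing by itself, since
    UDP offers no delivery guarantee.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(b"probe", (host, port))
        try:
            s.recvfrom(1024)
            return True
        except OSError:  # timeout, or ICMP port-unreachable
            return False
```

Run it against ports 499, 500 and 501 in turn; if only 500 fails while a listener is running on all three, that points at filtering somewhere on the path.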
My conclusion is that my ISP is somehow filtering out 500/UDP, but I need to know that it definitely isn't something at the Bitfolk end before I start wading through Hyperoptic's support tiers.
I am aware that Hyperoptic use Carrier Grade NAT, but I pay extra for a static IPv4 so that *shouldn't* be an issue.
Also, here's the Layer Four Traceroute for one of the EE WiFi calling gateways:
sudo lft -z -u -d 500 109.249.190.48
Tracing ......**********
TTL LFT trace to 109.249.190.48:500-516/udp
1 _gateway (192.168.0.1) 0.5ms
2 141.xxx.xxx.xxx.bcube.co.uk (141.xxx.xxx.xxx) 8.0ms # (redacted, my IP)
3 172.16.23.244 2.3ms
4 172.16.16.77 2.0ms
5 172.17.12.16 1.9ms
6 172.17.10.148 7.0ms
** [500-516/udp no reply from target] Use -VV to see packets.
If anyone can assure me that it should be possible to receive UDP packets on port 500 at Bitfolk, that would be great. I'd also be happy to hear any other insights into why WiFi Calling doesn't work for me.
Thanks,
--
Misha Gale
PGP Public Key: 0x1986B8E1 https://mishagale.co.uk/pubkey.asc
Hello,
If you are using the CBL DNSBL in your mail filtering setup (I was)
or for any other purpose, please note that it has shut down:
https://www.abuseat.org/cutover.html
Cheers,
Andy
--
https://bitfolk.com/ -- No-nonsense VPS hosting
Hello,
Most (all?) versions of sudo have a bug whereby a local unprivileged user
can get root access:
https://www.openwall.com/lists/oss-security/2021/01/26/3
Updates are already out for most distributions that are still
receiving security updates. If yours isn't, you might want to
remove sudo (and think about an upgrade).
This is CVE-2021-3156.
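For a rough self-check, the affected ranges named in the advisory (legacy 1.8.2 to 1.8.31p2, stable 1.9.0 to 1.9.5p1, fixed in 1.9.5p2) can be compared against the version reported by `sudo --version`. A sketch with made-up helper names; note that distributions often backport the fix without bumping the upstream version, so this is only indicative:

```python
import re

def parse_sudo_version(v: str):
    """Turn a version string like '1.9.5p1' into a comparable tuple."""
    m = re.fullmatch(r"(\d+)\.(\d+)\.(\d+)(?:p(\d+))?", v)
    if not m:
        raise ValueError(f"unrecognised sudo version: {v!r}")
    major, minor, patch, p = m.groups()
    return (int(major), int(minor), int(patch), int(p or 0))

def vulnerable_to_cve_2021_3156(v: str) -> bool:
    """True if v falls inside the ranges named in the advisory."""
    t = parse_sudo_version(v)
    legacy = parse_sudo_version("1.8.2") <= t <= parse_sudo_version("1.8.31p2")
    modern = parse_sudo_version("1.9.0") <= t <= parse_sudo_version("1.9.5p1")
    return legacy or modern
```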
Cheers,
Andy
--
https://bitfolk.com/ -- No-nonsense VPS hosting
Hi all,
I have had a VPS with Bitfolk for the past twelve years and before
that was able to use HantsLUG to host my genealogy-related website.
Things have moved on and my wife and I are no longer doing family
history research; my wife's co-researcher, for whom I was keeping my
website online, recently died.
So I no longer need a VPS, and as it is due for renewal shortly I
decided it was time to call it a day.
I am grateful to Andy for providing the service and to all those Hants
Luggers who helped me over the years. I have never regretted moving my
loyalty from RedHat to Debian and even managed to persuade my wife to
dump Win7 last year and let me install Debian Bullseye on her
computer.
I will be 90 years old next year but haven't let that hold me back.
Until lockdown closed the gyms I was doing resistance training 3 times
a week, and I still work out at home to programs my Personal Trainer
sends me every two weeks, making full use of the small amount of gym
kit I have.
I am also planning ahead and intend having a flying lesson in a
gyrocopter for my 90th birthday and doing a skydive on my 100th.
Wishing you all a prosperous and covid-free 2021.
John Lewis
Hello,
With the news that RHEL will be free for up to 16 servers:
https://arstechnica.com/gadgets/2021/01/centos-is-gone-but-rhel-is-now-free…
is anyone willing to spend some time trying to install it in a
chroot and then getting it to boot under Xen PVH mode?
Assuming it is possible to access the RPM files, I assume the
process will be very similar to the current CentOS 8 process:
https://tools.bitfolk.com/wiki/Installing_CentOS_8
which can be summarised as:
1. Prepare a chroot
2. Install CentOS base system into it
3. Enable the EPEL repository to switch it to the kernel-ml package so
that it works under Xen.
I will provide the VM account to do it, and some amount of account
credit once it is done and documented in the wiki.
I have no idea what hoops one must jump through to get a Red Hat
developer account nor if it is possible to download the RPMs like
that once you do. If I did I'd do this myself!
Please contact me off-list if you're interested in helping out.
Cheers,
Andy
--
https://bitfolk.com/ -- No-nonsense VPS hosting
Hello,
I don't think there's many¹ of you running Fedora, but as of
kernel-core-5.9.8-100.fc32 they switched their kernel compression
method from gzip to zstd.
Similarly to Ubuntu — which switched to lz4 from 19.10 onwards —
this leaves it not bootable in Xen PV mode as the PV boot loader
doesn't understand zstd (or lz4) compression.
This may not be obvious to you, as it happened in the middle of a
release and I don't think the change of compression method was
announced anywhere. Nor would such an announcement necessarily
prepare you for the sudden boot failure in any case.
The simplest way forward is to switch to PVH mode:
https://tools.bitfolk.com/wiki/PVH#Fedora
If for some reason you don't want to switch to PVH mode², you will
need to get a kernel that is not compressed with zstd. Possibly
there are other kernels available for Fedora, or you could use
extract-vmlinux to decompress the packaged one.
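If you want to check what your current kernel image is compressed with before rebooting, the standard magic numbers for these formats can be scanned for. A heuristic sketch (the helper name is made up); feed it the contents of your /boot/vmlinuz file:

```python
# Heuristic: find the earliest known compression magic inside a kernel image.
MAGICS = {
    b"\x1f\x8b\x08": "gzip",
    b"\x28\xb5\x2f\xfd": "zstd",
    b"\x02\x21\x4c\x18": "lz4",   # legacy frame, used by kernel builds
    b"\x04\x22\x4d\x18": "lz4",   # standard frame
    b"\xfd7zXZ\x00": "xz",
}

def detect_compression(image: bytes) -> str:
    """Return the compression whose magic appears earliest, or 'unknown'."""
    hits = [(image.find(m), name) for m, name in MAGICS.items()]
    hits = [h for h in hits if h[0] != -1]
    return min(hits)[1] if hits else "unknown"
```

A result of "zstd" or "lz4" means the PV boot loader won't cope, while "gzip" should still boot in PV mode.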
Cheers,
Andy
¹ We don't have an installer for it, but it can be installed from
the Rescue VM, and at least two of you did that.
² I don't know of any reason not to use PVH mode.
--
https://bitfolk.com/ -- No-nonsense VPS hosting
Hi,
A question for those who use the backup service¹:
Currently we mark the backup run as successful if rsync exits with a
success value.
There are only two exceptions: exit code 2 and exit code 24. Both of
those relate to files which rsync thought existed but ended up not
existing when it came to actually transfer them. I consider those
transient issues related to backing up a filesystem that is in use,
and not a reason to consider the whole backup run as failed.
So what about files that our rsync process cannot read? At the
moment that produces an exit code of 23 and is considered a failed
run, even though everything else got transferred. This eventually
causes a "backup age" monitoring alert because the last successful
backup run was too long ago, despite everything else actually being
backed up.
If we consider exit code 23 as okay, then a backup run that failed
to transfer one or more files due to permissions is still considered
a success and the alert goes away. But you might never find out what
happened, because you don't get to see the logs; you would have to
check every file in your backups to be sure they're there.
If we continue to consider exit code 23 as a failure of the whole
run, then you will have to either allow our rsync to read the files
concerned or put up with perpetual alerts - which you could silence,
but then they would never tell you about other problems.
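As a sketch, the two policies under discussion differ only in whether 23 joins the set of tolerated exit codes (the exit-code meanings are as described above; the helper name and flag are made up):

```python
# rsync exit codes treated as transient/harmless under the current policy:
# 0 = success; 2 and 24 relate to files that vanished mid-run.
TRANSIENT_OK = {0, 2, 24}

def backup_run_ok(rsync_exit_code: int, tolerate_unreadable: bool = False) -> bool:
    """Decide whether a backup run counts as successful.

    tolerate_unreadable=True models the proposed change: exit code 23
    (some files could not be transferred, e.g. permission denied) also
    counts as success, silencing the backup-age alert at the cost of
    possibly never noticing the unreadable files.
    """
    if rsync_exit_code in TRANSIENT_OK:
        return True
    return tolerate_unreadable and rsync_exit_code == 23
```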
What should we do?
Note that most of you allow our rsync to run as root so it can
generally read everything and you'll never experience this. But in
theory you could if you found some way to deny root permission to
read something.
I would ask for opinions only from those who make use of the backup
service, as root or not.
Cheers,
Andy
¹ https://tools.bitfolk.com/wiki/Backups
--
https://bitfolk.com/ -- No-nonsense VPS hosting
Hi,
As of about 0000Z we started receiving alerts of packet loss and
began investigation. It was found to be an internal issue between hosts
"clockwork" and "limoncello" only. That is, everything on both hosts
was reachable from outside our network and also from inside as long
as it wasn't between those two hosts.
As there is a monitoring node on "limoncello", a number of alerts
were sent out regarding customer services on "clockwork" that it
considered to be down, but they weren't actually down - unless you
happened to be hosted on "limoncello", anyway, and vice versa.
I tracked the issue to one of the two bonded switch ports for
"clockwork"; bringing that interface down and up again appears to
have cleared it. That happened at about 0045Z.
If the problem reoccurs we can down the interface and have it run on
one interface until the port or switch can be changed. If the
problem is actually in the NIC of the server itself things will be
more tricky, but we'll cross that bridge if we come to it.
Cheers,
Andy
--
https://bitfolk.com/ -- No-nonsense VPS hosting
_______________________________________________
announce mailing list
announce(a)lists.bitfolk.com
https://lists.bitfolk.com/mailman/listinfo/announce
Hi,
I’m trying and failing to get overlay network traffic working between Docker containers on different VPS hosts. The issue seems to be that neither host is sending VXLAN data on port 4789. I’ve been getting help on Docker’s IRC channel, and the suggestion there is that issues like this have been seen before with interaction between Docker’s overlay networking and other virtualisation technologies (VMware, for example) that also use VXLAN. Does anyone know if there are issues using swarm/overlay networking between Xen VPS hosts?
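One way to confirm whether anything leaving a host on UDP 4789 really is VXLAN is to capture the UDP payloads (e.g. with tcpdump) and check them against the 8-byte VXLAN header from RFC 7348. A minimal parser sketch, with a made-up function name:

```python
import struct

def parse_vxlan(payload: bytes):
    """Parse the 8-byte VXLAN header from a UDP/4789 payload.

    Returns the VNI (VXLAN Network Identifier), or None if the
    payload does not look like VXLAN (the 'I' flag bit not set).
    """
    if len(payload) < 8:
        return None
    flags, vni_raw = struct.unpack("!B3x3sx", payload[:8])
    if not (flags & 0x08):  # 'I' bit must be set for a valid VNI
        return None
    return int.from_bytes(vni_raw, "big")
```

If nothing at all shows up on 4789 on either host, the problem is upstream of the wire (the kernel or Docker not emitting), rather than filtering between the hosts.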
Regards,
Chris
—
Chris Smith <space.dandy(a)icloud.com>