Hello,
When connecting via SSH to a BitFolk host for the first time (e.g.
your Xen Shell), you may wonder how to verify that the SSH host key
fingerprint is genuine.
To assist you with this, a list of the fingerprints has been
securely published and also uploaded into the OpenPGP web of trust.
Please see:
https://tools.bitfolk.com/wiki/Verifying_BitFolk%27s_SSH_fingerprints
for more details.
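As a rough sketch of the checking process (the key file name below
is illustrative): on first connect, ssh prints the host key's
fingerprint and asks whether to continue; compare that string
against the published list before answering yes. If you have also
fetched a copy of the host's public key, you can print its
fingerprint yourself:

$ ssh-keygen -lf bitfolk_host_key.pub

and check that it matches too.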
Cheers,
Andy
--
http://bitfolk.com/ -- No-nonsense VPS hosting
Hi folks,
I've had to remove support for Gentoo because I don't know how to
compile a kernel with Xen support in Gentoo; I'm not a Gentoo user
and am not willing/able to put the time in at the moment to learn.
Hardly anyone wants Gentoo[1], so it's hard to justify the effort.
This doesn't mean that you can't install Gentoo. There are still
some customers at BitFolk using Gentoo, and I find it hard to
believe that you can't compile a Xen kernel "the Gentoo way"; I
just don't personally know how. I'm not willing to call Gentoo
supported when the only way I know to get it going is to copy in a
kernel and initrd from elsewhere.
If you know better and are willing to write a wiki article[2]
giving step-by-step instructions on how to compile a Xen kernel
"the Gentoo way" then firstly, some other customers would be very
happy, because they don't seem to know either; and secondly, I'd
appreciate it a lot too, because I'd feel comfortable offering
Gentoo as a supported option again.
Cheers,
Andy
[1] http://strugglers.net/~andy/bitfolk/distros.html
[2] https://tools.bitfolk.com/wiki/Compiling_a_Xen_kernel_under_Gentoo
for example? Whatever title you think is best.
--
http://bitfolk.com/ -- No-nonsense VPS hosting
Hello,
Recently it was pointed out that the Cacti bandwidth graphs that
BitFolk provides do not contain a figure for the total data
transferred in/out.
It turns out that the graph template was poorly chosen, since a
template with this information does already exist.
Here's an example of the style of bandwidth graph almost all of you
have:
http://tools.bitfolk.com/cacti/graph_910.html
Here's an example of one with total transferred figures:
http://tools.bitfolk.com/cacti/graph_2472.html
Unfortunately it is not possible to simply switch existing graphs
to the new template.
So, if it seems like something you'd like to have, please send an
email to support@bitfolk.com asking for new bandwidth graphs.
You can of course always work out roughly what the totals would be
from the average rate figures:
$ units '10.03kilobit/sec' 'megabytes/day'
* 108.324
/ 0.0092315646
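(That is: 10.03 kilobit/s is about 1.25 kilobytes/s, and 1.25
kilobytes/s over the 86400 seconds in a day comes to roughly 108.3
megabytes. The "*" line is the factor to multiply by; the "/" line
is its reciprocal.)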
Cheers,
Andy
--
http://bitfolk.com/ -- No-nonsense VPS hosting
Hello,
A while back someone pointed out to me that BitFolk's Ubuntu VPSes
don't run update-grub after a kernel upgrade or install. This can
lead to some annoyance if you reboot expecting a new kernel, or
worse, if you have removed the kernel you were using before.
We don't tend to keep test VPSes around for very long, so I didn't
notice this myself.
The reason for it is that /etc/kernel-img.conf does not contain:
postinst_hook = update-grub
postrm_hook = update-grub
So if you want update-grub to be run after upgrade/install/removal,
you should add the above to the file. The default install images
have now had this set.
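As a sketch of the fix on an affected VPS (run as root; the final
update-grub regenerates the menu once by hand, in case a new kernel
is already installed):

$ echo 'postinst_hook = update-grub' >> /etc/kernel-img.conf
$ echo 'postrm_hook = update-grub' >> /etc/kernel-img.conf
$ update-grub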
Cheers,
Andy
--
http://bitfolk.com/ -- No-nonsense VPS hosting
Hi,
You may recall that there was recently a period of poor network
performance because the customer DNS resolver on 212.13.194.71 was
overloaded:
http://lists.bitfolk.com/lurker/message/20110102.221800.b90128dc.en.html
In that thread I promised to provision a new dedicated resolver to
avoid a recurrence of the issue.
Instead I took the opportunity to provision several new resolver
hosts in a cluster, with failover for the service IPs.
All customers should change their resolvers from:
212.13.194.71
212.13.194.96
to:
85.119.80.232
85.119.80.233
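On most Linux VPSes this just means editing /etc/resolv.conf (a
sketch, assuming nothing like resolvconf is managing that file for
you):

nameserver 85.119.80.232
nameserver 85.119.80.233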
There's some maintenance coming up in February (details in a
separate email, shortly) which will take 212.13.194.71 offline for
several hours. It's therefore important that you change to using the
new resolvers before this time, otherwise you will experience severe
network performance problems.
If you have any questions please direct them to the users list or
support@bitfolk.com.
Cheers,
Andy
--
http://bitfolk.com/ -- No-nonsense VPS hosting
Hi,
Short version:
faustino had a kernel panic that seems related to filesystem
corruption; it was power cycled, fscked and checked, and VPSes were
started again.
Long version:
At approximately 0250Z this morning I was alerted that an
infrastructure VPS on host faustino was not responding. On
investigation it had crashed with a kernel error on the host machine
itself. Attempting to restart the VPS caused more kernel errors and
eventually a lock up, so it was necessary to power cycle the host.
After the host booted I carried out a little more investigation
before starting VPSes again. It seems that the host encountered a
filesystem error in its /var filesystem, to which the xend process
was writing; this in turn crashed one of the VPSes (our
infrastructure one). On boot, the /var device had undergone some
fsck repair.
I forced an fsck of all filesystems on the host (i.e. ours, not
yours) and they all came back clean. I then started the
infrastructure VPS which had crashed before, and it started up
without issue.
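For reference, a check like this can be forced on a Debian-style
system in a couple of ways (illustrative commands, not necessarily
exactly what was run here):

$ touch /forcefsck   # check every filesystem at the next boot
$ fsck -f /dev/sda5  # or check one unmounted filesystem right away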
I then, at ~0306Z, issued the command to start up all customer
VPSes; as of ~0321Z this has just finished. System load will be
heavy for a while as every VPS will be doing its own fsck.
I hope that the root cause of this issue was filesystem corruption
and that it is now behind us. It could be a few other things,
though: the RAM was replaced on 26th February and could be at
fault, or it could be a problem with the RAID controller.
There's no real evidence of any of that yet, so I'm going to have
to just keep an eye on things. However, if further problems present
themselves, we do have a spare server almost identical to faustino
into which we can swap the disks, or we can replace other parts if
a clear culprit is suggested.
Please accept my apologies for the disruption.
Cheers,
Andy
--
http://bitfolk.com/ -- No-nonsense VPS hosting
You were both right: I deleted files and voila, emails were working
again, and mysteriously the ureadahead entry disappeared when I ran
df as well.
Now down to a much healthier 18% of disk usage:
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/xvda       10321208 1697448   8099472  18% /
none              293304     120    293184   1% /dev
none              297888       0    297888   0% /dev/shm
none              297888      72    297816   1% /var/run
none              297888       0    297888   0% /var/lock
none              297888       0    297888   0% /lib/init/rw
thanks very much for your help
Andrew
On Fri, Mar 04, 2011 at 01:50:53PM +0000, Andy Bennett wrote:
> Your root partition is full and you don't appear to have a separate one
> for the spools.
>
> You may find that other things like mailboxes, logs and database
> files have all been unexpectedly truncated. Free up some space,
> check everything carefully and be prepared to restore things from
> backups where daemons have gotten into a fix before writing out
> their in-memory data.
And maybe also consider asking for a disk space Nagios alert (this
will require either allowing checks by SSH, or running NRPE or
SNMPd).
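For example, the underlying check would be something like the stock
nagios-plugins check_disk (the thresholds here are illustrative):

$ /usr/lib/nagios/plugins/check_disk -w 15% -c 5% -p /

which warns when / drops below 15% free and goes critical below 5%.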
Cheers,
Andy