Hello,
This weekend I'm off to OggCamp - http://oggcamp.org/
- if you're there too then do say hello! :)
Whilst there will be support cover and I should be available in
emergencies, this is also awkward timing because the next Ubuntu LTS
release is due tomorrow.
Firstly, if you have a support issue that needs handling before next
week, please put it in now.
Secondly, if you are considering upgrading an Ubuntu VPS to 10.04 I
would recommend waiting until after the weekend. I haven't tried it
myself yet, though I hope to. I think it should work, but there may
be some gotchas, and a lot of people breaking their VPSes at once
will stretch things this weekend. Plus, if it does break, all I'll
likely be able to do is put you back on 8.04 or Debian.
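When you do come to upgrade, a sketch of the usual route (assuming
Ubuntu's standard tooling; do check the release notes for your setup
first):

```shell
# Sketch of the standard Ubuntu release-upgrade procedure. Run at your
# own risk, and only after taking a snapshot.
sudo apt-get update
sudo apt-get install update-manager-core
# LTS-to-LTS upgrades are only offered automatically once the first
# point release (10.04.1) is out, so before then the -d flag is
# needed to be offered the new release:
sudo do-release-upgrade -d
```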
Don't forget that I can snapshot your filesystem before you do a
major upgrade so if it all goes wrong we can roll it back to before
you started fairly easily. But you have to ask first.
Cheers,
Andy
--
http://bitfolk.com/ -- No-nonsense VPS hosting
Hello,
Telehouse have finished renovating TFM1 and now want the last few
rack tenants to move their racks at some point in the next 8 weeks
so they can complete the remaining work.
Our colo supplier is coordinating with Telehouse as to when this move will
take place, because the rack power will need to be switched off
while it happens. I don't know the exact date/time yet other than
"in the next 8 weeks" but I thought I would give you a heads up.
The only server in TFM1 currently is faustino.
I'll follow up again when I have more details. If the outage will be
short then I'm pushing for a week's notice.
Cheers,
Andy
--
http://bitfolk.com/ -- No-nonsense VPS hosting
PS You may have seen me complain about the state of TFM1 during the
renovations. I visited yesterday and it's much nicer now, and I
would/will put machines back in there again.
Hello,
If you don't make use of the BitFolk backup service then you might
want to skip this.
Those who have backups set up have dedicated some of their disk
space to them. The files they ask to have backed up are backed up,
and multiple levels of snapshots are simulated using hardlinks.
Access is via read-only NFSv3.
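For the curious, hardlink-based snapshot rotation (the approach used
by tools like rsnapshot) can be sketched with plain coreutils; the
paths here are made up for the demo and are not BitFolk's actual
layout:

```shell
# Illustrative hardlink snapshot rotation; all paths are hypothetical.
mkdir -p demo/data demo/snapshots
echo "important stuff" > demo/data/file.txt

# First snapshot: a full copy.
cp -a demo/data demo/snapshots/hourly.0

# Later: rotate the old snapshot along, then take a hardlink copy as
# the new most-recent snapshot.
mv demo/snapshots/hourly.0 demo/snapshots/hourly.1
cp -al demo/snapshots/hourly.1 demo/snapshots/hourly.0

# Unchanged files share an inode, so each extra snapshot level costs
# almost no disk space.
stat -c '%i %h' demo/snapshots/hourly.0/file.txt \
                demo/snapshots/hourly.1/file.txt
```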
I provide only read-only NFS access because:
a) I don't want people to be able to corrupt their backups; and
b) I don't want people using it for general-purpose file storage.
Unfortunately people do from time to time end up with things backed
up that they didn't want backed up; a very large set of files that
were only temporarily needed, for example. Once content has been
backed up it can be difficult to get rid of, because the entire
point of having multiple levels of snapshots is that deleting the
data won't remove it from all of the historical snapshots. In many
cases we are talking about 6 months of storage here.
The problem comes when the amount of stuff backed up exceeds the
amount of disk space set aside for backups, and the customer wants
things removed from the backups in order to bring them back under
quota. This is directly at odds with my desire for them not to have
write access to their backups.
I also have a stronger desire to not have to poke about in people's
data, though. [1]
Unless anyone can think of a cleverer compromise, how about this:
I'll delete entire snapshots for you on request.
If you back up MASSIVE_FILE and then a day later delete it,
its presence in snapshots might be like this:
    /hourly.0   not present
    /hourly.1   present
    /hourly.2   present
      ...
    /daily.0    present
    /daily.1    not present
If I deleted every snapshot between hourly.1 and daily.0
inclusive then hourly.0 would become a delta to daily.1,
neither of which would include MASSIVE_FILE, thus greatly
reducing disk space usage.
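The space accounting behind this can be demonstrated with hardlinks
directly; again, the layout below is purely illustrative:

```shell
# A 10MiB file hardlinked into three hypothetical snapshot levels.
mkdir -p snaps/hourly.1
dd if=/dev/zero of=snaps/hourly.1/MASSIVE_FILE bs=1M count=10 2>/dev/null
cp -al snaps/hourly.1 snaps/hourly.2
cp -al snaps/hourly.1 snaps/daily.0

du -sk snaps   # roughly 10240K: three directory entries, one inode

# Deleting only some of the snapshots frees nothing...
rm -r snaps/hourly.1 snaps/hourly.2
du -sk snaps   # still roughly 10240K: daily.0 holds the last link

# ...the space only comes back once the final link is gone.
rm -r snaps/daily.0
du -sk snaps   # back to almost nothing
```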
This has the advantage that I don't have to poke about in your
files, since a whole snapshot can be treated as an opaque blob of
data for my purposes. It could also be automated reasonably easily.
The obvious downside is that it's a pretty blunt tool; if the
customer keeps MASSIVE_FILE in their backups for a long time then
potentially all their backups will need to be nuked.
Thoughts?
Cheers,
Andy
[1] "hi support, I have accidentally backed up 42GiB of extreme
stoat porn onto your backup server, please can you go in and
delete anything that looks like that so my backups can work
again, thanks."
--
http://bitfolk.com/ -- No-nonsense VPS hosting
Q. How many mathematicians does it take to change a light bulb?
A. Only one - who gives it to six Californians, thereby reducing the problem
to an earlier joke.
I don't live in the UK and I might want to stream BBC web streams
over ssh.
It was illegal before the law (te-hee), but now, according to the law, "ISPs
that fail to apply technical measures against subscribers can be fined up to
£250,000".
So, what now? Will BitFolk start to monitor traffic, and when certain
clients try to access BBC web streams they will get a very angry letter
saying that they have been naughty?
Or am I exaggerating?
Hi,
At approximately 0530Z, kahlua rebooted itself unexpectedly. All
VPSes on kahlua have now been restarted. If you're still seeing
problems and are unable to resolve them yourself via console then
please contact support. I am still investigating and will follow up
with more info.
It was only just under 2 months ago that kahlua locked up and had to
be power cycled. This wasn't quite the same, but it's possible there
is some hardware problem here. If anything like this recurs I shall
be swapping the disks into a spare server, possibly at short notice.
Please accept my apologies for the disruption.
Cheers,
Andy
--
http://bitfolk.com/ -- No-nonsense VPS hosting
> The optimum programming team size is 1.
Has Jurassic Park taught us nothing?
-- pfilandr
Hello,
As you may know, the ext3 filesystem is by default set to require an
fsck at boot time based on both the elapsed time since the last fsck
and the number of mounts since the last fsck.
The time-based fsck can be painful, because when one of BitFolk's
servers is rebooted a high proportion of the VPSes on it won't have
been rebooted inside this time period (typically 6 months).
Therefore almost every VPS will fsck its filesystems at the same
time, causing massive IO load and a slow boot for everyone.
Having now realised this, I'm considering disabling the time-based
fsck by default.
Would it bother you to discover your VPS had been provided with
time-based fsck disabled?
Would it bother you if you one day discovered that time-based fsck
had been disabled without your knowledge because you weren't on this
mailing list?
In case you're interested, you can see the current settings like
this:
$ sudo tune2fs -l /dev/xvda | grep -i 'check\|mount count'
Mount count: 2
Maximum mount count: 34
Last checked: Sat Oct 17 09:10:33 2009
Check interval: 15552000 (6 months)
Next check after: Thu Apr 15 09:10:33 2010
And you can disable time-based fsck like this:
$ sudo tune2fs -i 0 /dev/xvda
tune2fs 1.41.3 (12-Oct-2008)
Setting interval between checks to 0 seconds
$ sudo tune2fs -l /dev/xvda | grep -i 'check\|mount count'
Mount count: 2
Maximum mount count: 34
Last checked: Sat Oct 17 09:10:33 2009
Check interval: 0 (<none>)
(replace /dev/xvda with whatever your partitions are -- see
/proc/partitions)
Cheers,
Andy
--
http://bitfolk.com/ -- No-nonsense VPS hosting
You dont have to be illiterate to use the Internet, but it help's.
-- Mike Bristow
I am just wondering if anybody has tried 10.04 on their VPS yet? Or if Andy
has any comments about the upgrade. I am aware that there were concerns
about the kernel version being used on the servers, which could mean
upgrading wouldn't be possible.
Thanks,
Andrew.