I certainly don’t need anything as complex as all that. I’m not
dealing with anything mission-critical as such, and it doesn’t need
to appear seamless to the user either; I just need it to be
automated.
The more I think about this, the more I realise that all I need is an
automated way to detect a problem and reboot the server. Ideally this
would happen from an external VPS or other system that can poke Xen
(in case the server cannot restart itself), and ideally it would tie
in to the existing BitFolk monitoring services (if possible) to avoid
reinventing the wheel.
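
Roughly what I have in mind, as a sketch only: the health URL and the
xen-reboot-myvps command below are placeholders I’ve made up, not
anything BitFolk actually provides.

#!/usr/bin/env python3
"""Watch a server from an external VPS; force a reboot if it goes quiet.

Sketch only: the health URL and the reboot command are assumptions,
not anything BitFolk actually provides.
"""
import subprocess
import time
import urllib.request

HEALTH_URL = "http://203.0.113.10/ping"  # placeholder address
FAILS_BEFORE_REBOOT = 3                   # don't reboot on a single blip
CHECK_INTERVAL = 60                       # seconds between checks


def server_ok():
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=10) as resp:
            return resp.status == 200
    except OSError:
        return False


def main():
    failures = 0
    while True:
        failures = 0 if server_ok() else failures + 1
        if failures >= FAILS_BEFORE_REBOOT:
            # Hypothetical wrapper that pokes Xen from outside,
            # e.g. via the provider's console shell.
            subprocess.run(["xen-reboot-myvps"], check=False)
            failures = 0
        time.sleep(CHECK_INTERVAL)


if __name__ == "__main__":
    main()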
Regards,
Chris
—
Chris Smith <space.dandy(a)icloud.com>
On 22 Feb 2019, at 02:02, Anthony Newman via users <users(a)lists.bitfolk.com> wrote:
On 2019-02-21 14:41, admins wrote:
In a previous life we ran a pair of HA load balancers,
serial-connected heartbeat and with STONITH, in a primary/backup
config, as a front end to a whole bunch of ISP-type services.
These then pointed at the services (multiples thereof). The
published IPs on the load balancers were virtual or floating and
moved between the two.
It gave us a lot of flexibility to adjust the load-balancing
parameters, sometimes pressing the same into service for fail-over
of a back-end service, or dropping back-end services out of the
list for maintenance or reloads. When you do this, though, it kills
any TCP sessions and it is up to the client to re-establish them;
the load balancers just point them at a different service when they
reconnect. The state is lost, though, and each reconnect is a new
application session. For web servers or Squid proxies this does not
matter much.
<snip other stuff>
It's 2019 and I wouldn't wish the product formerly known as "Linux
HA" on my worst enemy. Maybe that's overstating it a teeny bit, but
it's not that far from the truth.
Heartbeat/Pacemaker/Corosync/STONITH are horrible to configure and
use, and are likely to lead to lower availability when used
inexpertly, which is very easy to achieve.
keepalived/LVS is a simpler and superior way to manage services
which require (or can manage with just) TCP load balancing and/or
failover, and it can even share TCP connection state so connections
aren't interrupted when handing over between redundant load
balancers. It's just the Linux kernel plus a bit of user space, so
it's fast and robust. It has been around since the days of Linux
2.4, but for some reason it seems less well known than the linux-ha
abominations. It can even do MAC-based forwarding to hosts (on the
same subnet or via a tunnel), so you can handle high-bandwidth
traffic flows without carrying the majority of the server-to-client
traffic through the load balancer.
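
Its built-in TCP_CHECK covers most health checking, and when a real
server needs a smarter probe, MISC_CHECK can run an external script
and act on its exit status. A minimal sketch of such a check (the
/status endpoint is an assumption about the backend application, not
something keepalived mandates):

#!/usr/bin/env python3
"""Real-server health check for keepalived's MISC_CHECK.

Exit 0 = healthy, non-zero = drop the server from the LVS pool.
The /status URL is an assumed detail of the backend application.
"""
import sys
import urllib.request


def main():
    # Backend address is passed as an argument from keepalived.conf.
    url = "http://%s/status" % sys.argv[1]
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return 0 if resp.status == 200 else 1
    except OSError:
        return 1


if __name__ == "__main__":
    sys.exit(main())

You'd reference it from a real_server block with something like
misc_path "/usr/local/bin/check_backend.py 192.0.2.11".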
At a pinch it can also run scripts on state change, but at that
point the OP needs to understand exactly what they're doing to
achieve resilient service, because, again, it's not straightforward
to make what you want happen automatically without also triggering
bad things you didn't expect or plan for (see "automatic database
failover").
For HTTP it often doesn't matter though, as people have already
said. haproxy is inaptly named, other than the fact that individual
instances tend to be very reliable, but it sits well for HTTP
load balancing on top of LVS if plain TCP connections are not
enough. People seem to love stacking haproxy/nginx/Apache/etc.
reverse HTTP proxies for some reason.
Ant
_______________________________________________
users mailing list
users(a)lists.bitfolk.com
https://lists.bitfolk.com/mailman/listinfo/users