Hi Steven,
On Tue, Nov 10, 2015 at 05:04:07PM +0000, Steven Walker wrote:
> I ran install ubuntu_trusty, went through the installation
> procedure, formatted the drives, no encryption, used only the basic
> server and opensshd options, watched it shut down and then ran the
> boot command. This is what it gave me:
>
> xen-shell> status
> Guest: Shutdown
> xen-shell> boot
> Booting instance: simpee
> Parsing config from /etc/xen/simpee.conf
> libxl: error: libxl_bootloader.c:628:bootloader_finished: bootloader
> failed - consult logfile /var/log/xen/bootloader.113.log
I'm sorry to see you're having problems.
Here's an asciicast I just made of me installing Ubuntu 14.04 i686
on the same host that you're on:
https://asciinema.org/a/ckiwp5sm75kbll77wj3r23myt
(7m33s of me tippy tappy typing in real time, so sorry if it's a bit
tedious)
This should replicate exactly how a customer would do a
self-install, right down to connecting to Xen Shell from my own
desktop and accepting almost all the defaults. The only slight
difference is that this account already had something installed, so
I had to delete some partitions in the manual partitioner first. Oh,
and it's called debtest1 because I normally use it to test
installing Debian, but that is purely cosmetic.
As you can see, that works.
I suspect what has happened to you is that somehow grub-pc (GRUB
2.x) has become installed. Only grub-legacy is supported at the
moment¹, but the self-installer is not meant to leave you with
grub-pc, so it's still a bug. Did your install deviate significantly
from what I did?
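If you want to check, something like this from the rescue VM should
show it (just a sketch; it assumes your root filesystem is the
xvda5 I can see in your layout):

    $ sudo mount -o ro /dev/xvda5 /mnt
    $ sudo chroot /mnt dpkg -l 'grub*'   # "ii  grub-pc" => GRUB 2.x got in
    $ ls /mnt/boot/grub/                 # menu.lst => grub-legacy;
                                         # grub.cfg => grub-pc
    $ sudo umount /mnt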
The other thing it might be is that I see you have a quite
interesting partition layout. xvda1 is an extended partition and
the only thing inside it is a single Linux partition, xvda5. It is
possible that this is confusing pygrub. How did you
carry out the partitioning stage?
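From the rescue VM, fdisk shows what I mean (illustrative output,
trimmed to the interesting columns):

    $ sudo fdisk -l /dev/xvda

    Device      Boot  Id  System
    /dev/xvda1         5  Extended
    /dev/xvda5        83  Linux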
In general if your installer insists on using partitions² then I
recommend just creating one primary (DOS) partition that takes up
the whole of each disk, i.e. so you end up with only xvda1 and
xvdb1. The reason for this is to make it easier to grow those disks
later on, should you order more storage. It's pretty trivial to grow
a disk that has only one partition (or no partitions). When there
are multiple partitions it's too complex, so we leave it to the
customer to do.
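With a single partition the whole job is something like this (a
sketch; it assumes an ext3/ext4 filesystem and that you have
growpart from the cloud-utils package available):

    $ sudo growpart /dev/xvda 1      # push the partition out to the
                                     # new end of the disk
    $ sudo resize2fs /dev/xvda1      # grow the filesystem into it;
                                     # ext3/ext4 can do this online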
Obviously if this is the cause of your problems then it's still
something I'd rather fix in the installer than just document as "not
supported", for user interface reasons.
Does your broken install still exist, for me to examine?
> Just to be sure I repeated the install twice with the same result.
> The first installation did boot up with a similar error message but
> I could not log in after booting. It might be because the new IP
> address 85.119.83.139 has not been updated in the DNS records.
> Maybe I missed something in the installation options?
Depends what "could not log in" means. :)
> Then I tried to work out what was going on using the Bitfolk Rescue
> Environment. This gave me a 'Rescue' login prompt. It is not clear
> which user name and password it needs, but I tried both the Bitfolk
> and Ubuntu users and could not log in.
Here is some more information about the rescue VM:
https://tools.bitfolk.com/wiki/Rescue
You should have seen a block of text printed before the login prompt
which tells you what the (randomly-generated) credentials are.
There are examples of what it looks like on that page, the first one
being under the "persistent storage" heading.
The last paragraph above the login prompt should say:
    Your user account is called 'user' and its password has been
    randomly-generated (see above). Be careful what you do with it
    as networking is now active and sshd is running. The 'user'
    account has full sudo access.

    rescue login:
Did yours not look like that?
If it did, then clearly this is still too confusing. Do you have any
suggestions on how to make this clearer?
In any case, if you are now looking at a "rescue login:" prompt and
have somehow lost the credentials off the screen, you can just
reboot the rescue VM to get a new set of credentials:
Type the usual GNU Screen command of ctrl+a then c to create a new
window, which will be sat at the Xen Shell, and then use the
"destroy" command to kill off the rescue VM. You can then type
"rescue" again to boot a new one.
Some more information about the Xen Shell:
https://tools.bitfolk.com/wiki/Xen_Shell
Cheers,
Andy
¹ Support for GRUB 2.x is coming soon, but we need to complete the
hardware refresh first.
² On virtualised hardware the concept of disk partitions is a bit
pointless; as disks can be added and removed very easily, you may
as well just put filesystems directly on the block device. Need a
new partition? Add it as a new block device. More complicated than
that? Use LVM.
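  For instance (a sketch; the device and volume names are
  illustrative):

      $ sudo mkfs.ext4 /dev/xvdc        # filesystem straight onto the
      $ sudo mount /dev/xvdc /srv       # new block device, no partitions

      $ sudo pvcreate /dev/xvdc         # or give the device to LVM and
      $ sudo vgcreate vg0 /dev/xvdc     # carve it up flexibly instead
      $ sudo lvcreate -n srv -L 10G vg0
      $ sudo mkfs.ext4 /dev/vg0/srv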
That's all very well but unfortunately some of the Linux
installers refuse to install directly onto disk devices, and
insist upon creating a partition table and at least one partition
on each disk. The Debian/Ubuntu installer is one of these, so you
have to do it.
--
http://bitfolk.com/ -- No-nonsense VPS hosting