Modifying virtual machine images

What do you do when you have a virtual machine image and, for example, you need to change the contents of some files such as the ssh config? Modified images can be uploaded to glance – repeating the same steps after starting several VMs of the same type can easily be avoided this way.

There are a few tools that can be used for this purpose (virt-customize, virt-edit, guestfish and virt-resize from the libguestfs suite) and they are extremely powerful; most importantly, they usually modify the image in place:

Example – install wireshark & nmap in an RHEL minimal-install image (we assume there is Internet access from the machine running the virt-customize command). What happens below is basically running a VM locally, modifying it and saving the changes back to the image.
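A sketch of the command (the image file name is just an example):

virt-customize -a rhel-minimal.qcow2 --install wireshark,nmap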

Example – modify the ssh server config. The following edits a file in the image's filesystem and saves the changes back to the image.
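For example, with virt-edit (the image name and the edit expression are only illustrative; without -e, virt-edit opens the file in an interactive editor):

virt-edit -a rhel-minimal.qcow2 /etc/ssh/sshd_config -e 's/^PasswordAuthentication no/PasswordAuthentication yes/'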

Guestfish gives access to the filesystem – it is more powerful than virt-edit in the sense that it allows browsing through the filesystems rather than only modifying a file whose path you already know. You can also create new files and add content to them. An example sequence of steps is as follows (a consolidated session sketch follows the list):

1) OPTIONAL: export LIBGUESTFS_BACKEND=direct

2) guestfish -a <qcow2/img>

3) run

4) list-filesystems

5) mount <root_filesystem> /

6) modify files

7) umount /

8) exit
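A consolidated session might look like this (the image name, root LV and edited file are only examples; the actual filesystem names come from the list-filesystems output):

export LIBGUESTFS_BACKEND=direct   # optional, see step 1
guestfish -a image.qcow2
><fs> run
><fs> list-filesystems
><fs> mount /dev/rootvg/rootlv /
><fs> edit /etc/ssh/sshd_config    # or e.g. write /etc/motd "hello"
><fs> umount /
><fs> exit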

Increase LV size of a qcow2 image

When you get a qcow2 image of a given size, it cannot simply be changed on the fly while a VM is running, or by just giving the VM a bigger flavor in OpenStack. The situation gets even more complex when the image has LVs inside, but fortunately, by using guestfish and virt-resize, the image can be adjusted to one's needs. Below are the steps that I used to perform such modifications:

Default image: image-name-250G.qcow2
Resized: image-name-750G.qcow2

1. Check which device to resize (this image has an LVM PV created on /dev/sda2):
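One way to inspect the layout is with virt-filesystems:

virt-filesystems --all --long -h -a image-name-250G.qcow2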

2. Resize image (from 250 to 750GB) – resizing is NOT performed in place:
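virt-resize cannot write into the source image, so first create a new, larger target image (file names as above):

qemu-img create -f qcow2 -o preallocation=metadata image-name-750G.qcow2 750G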

3. Resize disk and specific device (in this case it is /dev/sda2) and LVM PV:
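A sketch using the file names above (virt-resize expands the /dev/sda2 partition while copying and also grows the LVM PV it contains):

virt-resize --expand /dev/sda2 image-name-250G.qcow2 image-name-750G.qcow2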

4. Go into guestfish and use the free space in the VG to create an additional LV, an additional filesystem and a mount point (DISABLE the 64bit flag on EXT4, required by this image because it uses an outdated e2fsck that does not support the 64bit option):
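A sketch of such a session (the VG/LV names, the LV size and the mount point are only examples; here the 64bit flag is kept off via the features option of guestfish's mkfs):

guestfish -a image-name-750G.qcow2
><fs> run
><fs> vgs                                   # find the volume group name
><fs> lvcreate data VolGroup 512000         # new LV, size in MB, taken from the free space
><fs> mkfs ext4 /dev/VolGroup/data features:^64bit
><fs> mount /dev/VolGroup/root /            # mount the image's root LV
><fs> mkdir /data                           # create the mount point (add it to /etc/fstab as needed)
><fs> umount /
><fs> exit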

5. Upload image to glance.
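For example (the image name is an assumption; on newer clients 'openstack image create' works similarly):

glance image-create --name image-name-750G --disk-format qcow2 --container-format bare --file image-name-750G.qcow2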

Heat templates with cinder volumes

While trying to launch several VMs using heat templates on RHEL OpenStack Liberty, only 2 out of 8 launched. The rest failed because cinder volumes could not be created.

The heat stack status was CREATE_FAILED and the cinder volume status was error.

The errors were logged in /var/log/cinder/volume.log.
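The status and the log can be checked with, for example:

heat stack-list
cinder list
tail -f /var/log/cinder/volume.log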

For a brief moment there was also a message on the console related to an haproxy failure.

While checking the services I noticed that both haproxy and swift-proxy were not running, so I decided to restart them:

systemctl restart haproxy.service

systemctl start openstack-swift-proxy.service

After that I could again create cinder volumes via heat templates. So be careful and create VMs from templates with cinder volumes one by one.

Accessing cloud images

Both Fedora and Ubuntu cloud images can be accessed over SSH using the public key generated during boot-up.

The key can be obtained from the console log:

  • Horizon -> Instances -> <fedora_ubuntu_instance> -> Console -> View Full Console Log
  • sudo vim /var/lib/nova/instances/<instance_id>/console.log

SSH can be done from the proper IP namespace on the compute nodes:
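For example (the namespace and instance IP are placeholders; the default user is 'fedora' or 'ubuntu', depending on the image):

ip netns list
sudo ip netns exec qdhcp-<network_id> ssh fedora@<instance_ip>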

Or from any other host, if a public (floating) IP is associated with the instance.

Nova installation on OpenSUSE 13.1

Firewalld
Firewall rules can be modified from the GUI:
  • Assign internal interfaces to the “Internal zone”
  • Add the SSH service to the “External zone”
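If you prefer the command line, roughly the same can be done with firewall-cmd (the interface name is an example):

firewall-cmd --permanent --zone=internal --change-interface=eth1
firewall-cmd --permanent --zone=external --add-service=ssh
firewall-cmd --reload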
RabbitMQ

In order to allow access to the rabbitmq server from remote (and local) hosts, add the following to /etc/rabbitmq/rabbitmq.config before starting rabbitmq-server:
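A commonly used configuration to that effect (an assumption here; it clears loopback_users so the default guest account can also log in from remote hosts):

[
 {rabbit, [
   {tcp_listeners, [5672]},
   {loopback_users, []}
 ]}
].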

Instance creation fails with “Cannot find suitable CPU” on Debian 7

By default KVM does not work (even though vmx/svm is enabled on the CPUs): instance creation fails with "Cannot find suitable CPU".

Solution:

  • In /etc/libvirt/qemu.conf:
  • remove file /var/cache/libvirt/qemu/capabilities and restart host (libvirtd restart does not suffice)
  • After restart check capabilities with:
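For example (after the restart the kvm domain type should appear in the output):

virsh capabilities | grep -i kvm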

VNC and firewalld on Centos 7 compute node

By default the firewalld service is enabled and may cause trouble when accessing instances running on a Centos compute node via VNC.

Solution:

By default zone=public is used, so it is enough to just add the 5900/5901 ports and/or the vnc-server service to this zone:
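A sketch of the firewall-cmd calls (widen the port range if more instances run on the node):

firewall-cmd --permanent --zone=public --add-port=5900-5901/tcp
firewall-cmd --permanent --zone=public --add-service=vnc-server
firewall-cmd --reload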

Result: the ports/services should now show up in the public zone, which can be checked with:
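firewall-cmd --zone=public --list-all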

Nova installation on Centos 7

RabbitMQ

After starting the rabbitmq-server service it is not possible to change the password using:
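Presumably the usual command from the install guides (the password value is a placeholder):

rabbitmqctl change_password guest <RABBIT_PASS>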

 Error:

Solution:
In the file /etc/systemd/system/multi-user.target.wants/rabbitmq-server.service change the command being executed

to the same command with sudo added in front of it.

And restart rabbitmq-server:
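Something along these lines (the daemon-reload is needed so that systemd picks up the edited unit file):

systemctl daemon-reload
systemctl restart rabbitmq-server.service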

 

Nova incompatibility

The nova packages downloaded with yum contain a nova version which is incompatible with the one running on the controller node (Ubuntu):

Controller (Ubuntu 14.04):

Compute (Centos 7):
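The installed versions on the two nodes can be compared with something like:

dpkg -l | grep nova-common          # on the Ubuntu controller
rpm -qa | grep openstack-nova       # on the CentOS compute node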

Problem is described here:
https://ask.openstack.org/en/question/57296/juno-centos-7-buildabortexception-build-of-instance-aborted-failed-to-allocate-the-networks-not-rescheduling/?answer=57883#post-id-57883

Solution:
Install packages from EPEL repo manually (only 3 days older):

https://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-7/

Packages to install:

I had to forcefully install them using rpm -ivh --force, and then I restarted the openstack-nova-compute service.
After this the instances spawned correctly, yet I could not reach them (iptables rules were missing, so I disabled iptables).
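Roughly (the exact package file names are the ones downloaded from the repo above):

rpm -ivh --force <downloaded-nova-packages>.rpm
systemctl restart openstack-nova-compute.service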

DHCP addresses were distributed to the instances correctly and I could reach the Internet from them via my Ubuntu AIO node, which also acts as the gateway to the Internet.