Juniper vMX – MPLS over lt interface

I was wondering whether I can run MPLS over OVS/VXLAN tenant networks in OpenStack (VLAN tagging, for example, won’t work in such a case – not by default at least; I didn’t have an option to test with the vlan_transparent setting at the network level). I wanted to quickly check it out between a pair of vMXes with static LSPs, remembering that I can run logical-systems or routing-instances interconnected with a tunnel interface (lt). That works just fine, but only with logical-systems.

LS-vmx01 lt-0/0/0.1<–>lt-0/0/0.2 vmx01 ge-0/0/2.0<–>ge-0/0/2.0 vmx02 lt-0/0/0.2<–>lt-0/0/0.1 LS-vmx02

vmx01 cfg:
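Roughly, the relevant part of the vmx01 config looks like this (a sketch only – addresses, LSP names and static labels are example values, tunnel-services is assumed to be already enabled on the FPC so that lt-0/0/0 exists, and only the LS-vmx01 -> LS-vmx02 direction is shown):

# lt unit 2 stays in the main instance, peered with unit 1 inside LS-vmx01
set interfaces lt-0/0/0 unit 2 encapsulation ethernet
set interfaces lt-0/0/0 unit 2 peer-unit 1
set interfaces lt-0/0/0 unit 2 family inet address 10.0.1.2/30
set interfaces lt-0/0/0 unit 2 family mpls
# core-facing link towards vmx02
set interfaces ge-0/0/2 unit 0 family inet address 10.12.0.1/30
set interfaces ge-0/0/2 unit 0 family mpls
set protocols mpls interface lt-0/0/0.2
set protocols mpls interface ge-0/0/2.0
# static transit LSP: swap the label coming from LS-vmx01 and send it towards vmx02
set protocols mpls static-label-switched-path to-vmx02 transit 1000001 next-hop 10.12.0.2
set protocols mpls static-label-switched-path to-vmx02 transit 1000001 swap 1000002
# lt unit 1 lives inside the logical-system LS-vmx01
set logical-systems LS-vmx01 interfaces lt-0/0/0 unit 1 encapsulation ethernet
set logical-systems LS-vmx01 interfaces lt-0/0/0 unit 1 peer-unit 2
set logical-systems LS-vmx01 interfaces lt-0/0/0 unit 1 family inet address 10.0.1.1/30
set logical-systems LS-vmx01 interfaces lt-0/0/0 unit 1 family mpls
set logical-systems LS-vmx01 protocols mpls interface lt-0/0/0.1
# static ingress LSP from LS-vmx01 towards LS-vmx02
set logical-systems LS-vmx01 protocols mpls static-label-switched-path to-LS-vmx02 ingress to 10.0.2.1
set logical-systems LS-vmx01 protocols mpls static-label-switched-path to-LS-vmx02 ingress next-hop 10.0.1.2
set logical-systems LS-vmx01 protocols mpls static-label-switched-path to-LS-vmx02 ingress push 1000001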

vmx02 cfg:
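And the mirror on vmx02 (again a sketch with example values; the main instance pops the static label and hands the traffic over the lt interface to LS-vmx02):

set interfaces lt-0/0/0 unit 2 encapsulation ethernet
set interfaces lt-0/0/0 unit 2 peer-unit 1
set interfaces lt-0/0/0 unit 2 family inet address 10.0.2.2/30
set interfaces lt-0/0/0 unit 2 family mpls
set interfaces ge-0/0/2 unit 0 family inet address 10.12.0.2/30
set interfaces ge-0/0/2 unit 0 family mpls
set protocols mpls interface lt-0/0/0.2
set protocols mpls interface ge-0/0/2.0
# static transit LSP: pop the incoming label and forward to LS-vmx02 over the lt interface
set protocols mpls static-label-switched-path from-vmx01 transit 1000002 next-hop 10.0.2.1
set protocols mpls static-label-switched-path from-vmx01 transit 1000002 pop
set logical-systems LS-vmx02 interfaces lt-0/0/0 unit 1 encapsulation ethernet
set logical-systems LS-vmx02 interfaces lt-0/0/0 unit 1 peer-unit 2
set logical-systems LS-vmx02 interfaces lt-0/0/0 unit 1 family inet address 10.0.2.1/30
set logical-systems LS-vmx02 interfaces lt-0/0/0 unit 1 family mpls
set logical-systems LS-vmx02 protocols mpls interface lt-0/0/0.1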

The main thing here was to set family mpls on the lt-0/0/0 units and to add the lt-0/0/0.2 interface to protocols mpls.

Only after that did POP entries appear in the routing and forwarding tables:
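The entries can be checked with the usual commands (output not shown here):

show route table mpls.0
show route forwarding-table family mpls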

Modifying virtual machine images

What to do when you have a virtual machine image and, for example, you need to change the contents of some files, like the ssh config? Modified images can be uploaded to Glance – this way you avoid repeating the same steps after launching several VMs of the same type.

There are a few tools that can be used for that purpose and they are extremely powerful (most importantly, they usually run in place): virt-customize, virt-edit and guestfish.

Example – install wireshark & nmap in an RHEL minimal-install image (we assume there is Internet access from the machine running the virt-customize command) – this basically runs a VM locally, modifies it and saves the changes back to the image.
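A sketch of the command (the image name is just an example):

# installs the packages with the guest's package manager, so network access is needed
virt-customize -a rhel-minimal.qcow2 --install wireshark,nmap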

Example – modify the ssh server config. The following edits a file in the filesystem and saves the changes back to the image.
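For instance, something along these lines (image name and the Perl-style expression are examples – adjust to the actual change you need):

virt-edit -a rhel-minimal.qcow2 /etc/ssh/sshd_config -e 's/^#?PasswordAuthentication.*/PasswordAuthentication yes/'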

Guestfish gives access to the filesystem – it is more powerful than virt-edit in the sense that it allows browsing through the filesystems rather than modifying a file whose path you already know. You can also create new files and add contents to them. An example sequence of steps is as follows (a sample session is sketched after the list):

1) OPTIONAL: export LIBGUESTFS_BACKEND=direct

2) guestfish -a <qcow2/img>

3) run

4) list-filesystems

5) mount <root_filesystem> /

6) modify files

7) umount /

8) exit
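Put together, a session could look roughly like this (device and file names are examples only):

export LIBGUESTFS_BACKEND=direct
guestfish -a rhel-minimal.qcow2
><fs> run
><fs> list-filesystems
><fs> mount /dev/sda1 /
><fs> edit /etc/ssh/sshd_config
><fs> write /etc/motd "modified with guestfish"
><fs> umount /
><fs> exit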

Increase LV size of a qcow2 image

When you get a qcow2 image of a given size, it can’t simply be changed on the fly while the VM is running, or by just giving more space to the VM flavor in OpenStack. The situation gets even more complex when the image has LVs inside, but fortunately, with guestfish and virt-resize, the image can be adapted to one’s needs. Below are the steps I used to perform such modifications:

Default image: image-name-250G.qcow2
Resized: image-name-750G.qcow2

1. Check which device to resize (this image has an LVM PV on /dev/sda2):
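virt-filesystems from libguestfs lists the devices, partitions and LVs inside the image, e.g.:

virt-filesystems --long -h --all -a image-name-250G.qcow2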

2. Resize the image (from 250 to 750GB) – resizing is NOT performed in place, so a new target image is needed:
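One way to prepare the larger target image (a sketch):

qemu-img create -f qcow2 image-name-750G.qcow2 750G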

3. Resize the disk, the specific device (in this case /dev/sda2) and the LVM PV:
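virt-resize copies the data into the new image and can expand the chosen partition together with the PV it contains, roughly:

virt-resize --expand /dev/sda2 image-name-250G.qcow2 image-name-750G.qcow2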

4. Go to guestfish and use the free space in the VG to create an additional LV, filesystem and mount point (DISABLE the 64bit flag on EXT4 – required by this image, as it uses an outdated e2fsck that doesn’t support the 64bit option):
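A rough guestfish sequence for this step (VG, LV and mount-point names are examples – take the real ones from step 1; older libguestfs versions expose the features: argument as mkfs-opts instead of mkfs):

guestfish -a image-name-750G.qcow2
><fs> run
><fs> lvcreate-free lv_data VolGroup 100
><fs> mkfs ext4 /dev/VolGroup/lv_data features:^64bit
><fs> mount /dev/VolGroup/lv_root /
><fs> mkdir /data
><fs> edit /etc/fstab
><fs> umount /
><fs> exit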

5. Upload image to glance.
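For example (the image name is just a label):

openstack image create --disk-format qcow2 --container-format bare --file image-name-750G.qcow2 image-name-750G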

Heat templates with cinder volumes

While trying to launch several VMs using Heat templates on RHEL OpenStack Liberty, only 2 out of 8 launched. The rest failed because Cinder volumes could not be created.

The Heat stack status was CREATE_FAILED and the Cinder volume status was error.
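A quick way to see this with the Liberty-era clients:

heat stack-list
cinder list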

/var/log/cinder/volume.log

For a brief moment there was also a log message on the console related to an haproxy failure.

While checking the services I noticed that both haproxy and swift-proxy were not running, so I just decided to restart them:

systemctl restart haproxy.service

systemctl start openstack-swift-proxy.service

After that I could again create Cinder volumes via Heat templates. So be careful – create VMs from templates with Cinder volumes one by one.

Accessing cloud images

Both Fedora and Ubuntu cloud images can be accessed using the ssh public key generated during boot-up.

The key can be obtained from the console log:

  • Horizon -> Instances -> <fedora_ubuntu_instance> -> Console -> View Full Console Log
  • sudo vim /var/lib/nova/instances/<instance_id>/console.log

SSH can be done from the proper IP namespace on the compute nodes:
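For example (the namespace name and addresses are placeholders and depend on the deployment; the default user is fedora or ubuntu depending on the image):

sudo ip netns list
sudo ip netns exec qdhcp-<network_id> ssh fedora@<instance_ip>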

Or from any other host, if a public (floating) IP is associated with the instance.

Nova installation on OpenSUSE 13.1

Firewalld
  • Firewall rules can be modified from the GUI:
  • Assign internal interfaces to “Internal zone”
  • Add SSH service to “External zone”
RabbitMQ

In order to allow access to the rabbitmq server from remote (and local) hosts, add the following to /etc/rabbitmq/rabbitmq.config before starting rabbitmq-server:
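One possibility, depending on the RabbitMQ version, is to bind the listener explicitly and allow the default guest user from non-loopback addresses (a sketch – adjust to your setup):

[
  {rabbit, [
    {tcp_listeners, [{"0.0.0.0", 5672}]},
    {loopback_users, []}
  ]}
].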

Instance creation fails with “Cannot find suitable CPU” on Debian 7

KVM by default does not work (even though vmx/svm is enabled for the CPUs) – instance creation fails with “Cannot find suitable CPU”.

Solution:

  • In /etc/libvirt/qemu.conf:
  • Remove the file /var/cache/libvirt/qemu/capabilities and restart the host (restarting libvirtd alone does not suffice)
  • After the restart, check capabilities with:
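For example:

virsh capabilities

The <cpu> section of the output should now list the host CPU model and features.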

VNC and firewalld on Centos 7 compute node

By default the firewalld service is enabled and may cause trouble when accessing instances running on a CentOS compute node via VNC.

Solution:

By default zone=public is used, so it is enough to just add ports 5900/5901 and/or the vnc-server service to this zone:
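For example (rules added permanently and then reloaded):

firewall-cmd --permanent --zone=public --add-port=5900-5901/tcp
firewall-cmd --permanent --zone=public --add-service=vnc-server
firewall-cmd --reload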

Result: