Terraform or Pulumi for IaC

Terraform by HashiCorp is a pioneer in the area of IaC tooling. It allows you to define infrastructure in a declarative manner using its own language (HCL – HashiCorp Configuration Language). That means:

  • it takes the definition of the infrastructure (a simple example could be a couple of VMs running in AWS or OpenStack) from a config file (can be a single .tf file) written in HCL
  • it connects to the IaaS using an appropriate provider – a plugin/extension that allows it to talk to AWS/OpenStack etc.
  • it compares the current state of the infrastructure against what is found in the definition file (.tf)
  • it tries to meet the desired state found in the definition file (in the example above this means deploying the VMs – see the sketch below)
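A minimal sketch of such a definition file, assuming the AWS provider (the region, AMI ID and names are illustrative):

    # main.tf – two identical VMs in AWS
    provider "aws" {
      region = "eu-west-1"
    }

    resource "aws_instance" "web" {
      count         = 2                         # two VMs of the same type
      ami           = "ami-0123456789abcdef0"   # placeholder AMI ID
      instance_type = "t2.micro"

      tags = {
        Name = "web-${count.index}"
      }
    }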

This approach gives you a deterministic state of your infrastructure, and deployment is performed by calling the API of the infrastructure provider rather than manually or with the methods already offered by the provider (e.g. HEAT templates for OpenStack; CloudFormation, the Python client or the awscli tool for AWS). If a VM is deleted for any reason on the IaaS side, re-running terraform apply (the command that checks the current state of the infrastructure, compares it against the definition file and tries to meet the desired state from the file) will simply redeploy the missing VM.
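The workflow itself boils down to a few commands, run in the directory holding the .tf files:

    terraform init    # download the providers referenced in the config
    terraform plan    # show what would change to meet the desired state
    terraform apply   # actually create/modify/delete the resources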

It also gives you a common language for defining the infrastructure on different infrastructure providers (bearing in mind that each of them uses different basic building blocks, e.g. the aws_subnet resource vs the openstack_networking_subnet_v2 resource).

Terraform can do more than just create resources on public/private cloud providers: it can deploy to Kubernetes (more of a PaaS than an IaaS) or interact with REST APIs, and people/companies create their own custom providers.

However, anyone with some development background will see that HCL is limited – even writing a simple loop, while possible, is done somewhat indirectly (I admit I did not have time to check 0.12, which was supposed to add some typical programming constructs like a for loop). The immediate feeling is that it should be done differently – you want to write code and have proper programming libraries that wrap the API calls to the different providers. This is where Pulumi comes in – its tenets are the same:

  • it is declarative
  • it compares the current state against the definition (written in the programming language of your choice – at the time of writing JS/TS/Python/Go are supported)
  • it uses programming-language libraries to connect to the IaaS (e.g. pulumi_gcp in Python) which extend the core Pulumi framework
  • it tries to meet the desired state (pulumi up vs terraform apply) – see the sketch below
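A minimal sketch in Python, assuming a recent pulumi_gcp (the resource name, zone, machine type and image are illustrative):

    import pulumi
    import pulumi_gcp as gcp

    # A single VM instance, defined directly in code
    instance = gcp.compute.Instance(
        "web-vm",
        machine_type="f1-micro",
        zone="europe-west1-b",
        boot_disk={"initialize_params": {"image": "debian-cloud/debian-9"}},
        network_interfaces=[{"network": "default"}],
    )

    # Exported values show up in the `pulumi up` output
    pulumi.export("instance_name", instance.name)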

What is new is that it’s SaaS by default and requires setting up an account on the Pulumi website. The Pulumi service keeps the state on Amazon S3, but there is also an option to keep it locally (different backends). If the state is kept with Pulumi, you can inspect your created infrastructure resources via their website (actually this is very cool – you can even click through to the consoles of the created VMs).

Rather than having a separate language to define the infrastructure, you just import the appropriate libraries and start defining the infrastructure directly in code. There is no better introduction to Pulumi than Joe Duffy’s blog post: http://joeduffyblog.com/2018/06/18/hello-pulumi/.

So which one should you choose? I would say it depends on the people who are going to use it:

  • Developers and people with development skills should opt for Pulumi, so as not to be constrained by HCL. That also means the learning curve is steeper for Pulumi.
  • People without such skills will feel more comfortable with HCL and might not feel constrained by it – they should go with Terraform. I just felt I had to create a lot of boilerplate with it.
  • Terraform is more mature (it is the older tool), while Pulumi is just kicking off – but Pulumi’s people are quite helpful and available on Slack.

I have posted a few examples with both tools in my GitLab repos:

https://gitlab.com/stackblog/terraform

https://gitlab.com/stackblog/pulumi

Juniper vMX – MPLS over lt interface

I was wondering if I could run MPLS over OVS VXLAN tenant networks in OpenStack (VLAN tagging, for example, won’t work in such a case – not by default at least; I didn’t have a chance to test with the vlan_transparent setting at the network level). I wanted to quickly check it out between a pair of vMXes with static LSPs, remembering that I can run logical-systems or routing-instances interconnected with a tunnel interface (lt). That works just fine, but only with logical-systems.

LS-vmx01 lt-0/0/0.1<–>lt-0/0/0.2 vmx01 ge-0/0/2.0<–>ge-0/0/2.0 vmx02 lt-0/0/0.2<–>lt-0/0/0.1 LS-vmx02

vmx01 cfg:
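Something along these lines (addresses, labels, the LSP names and the VG of the topology are illustrative – just a sketch of the relevant parts):

    interfaces {
        lt-0/0/0 {
            unit 2 {                            # main-instance side of the tunnel
                encapsulation ethernet;
                peer-unit 1;
                family inet { address 10.255.0.2/30; }
                family mpls;
            }
        }
        ge-0/0/2 {
            unit 0 {
                family inet { address 10.0.12.1/30; }
                family mpls;
            }
        }
    }
    protocols {
        mpls {
            interface lt-0/0/0.2;
            interface ge-0/0/2.0;
            static-label-switched-path transit-1000001 {
                transit 1000001 {
                    next-hop 10.0.12.2;
                    swap 1000001;
                }
            }
        }
    }
    logical-systems {
        LS-vmx01 {
            interfaces {
                lt-0/0/0 {
                    unit 1 {                    # LS side of the tunnel
                        encapsulation ethernet;
                        peer-unit 2;
                        family inet { address 10.255.0.1/30; }
                        family mpls;
                    }
                }
            }
            protocols {
                mpls {
                    interface lt-0/0/0.1;
                    static-label-switched-path to-LS-vmx02 {
                        ingress {
                            to 10.255.1.1;      # LS-vmx02 lt address
                            next-hop 10.255.0.2;
                            push 1000001;
                        }
                    }
                }
            }
        }
    }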

vmx02 cfg:
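Mirrored on the other side, with the main instance popping the label towards the logical system (again only a sketch):

    protocols {
        mpls {
            interface lt-0/0/0.2;
            interface ge-0/0/2.0;
            static-label-switched-path transit-1000001 {
                transit 1000001 {
                    next-hop 10.255.1.1;        # towards lt-0/0/0.1 in LS-vmx02
                    pop;
                }
            }
        }
    }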

The main thing here was to set family mpls on the lt-0/0/0 units and to add the lt-0/0/0.2 interface to protocols mpls.

Only after that did POP entries appear in the routing and forwarding tables:
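The commands used to verify this (the actual output depends on the labels used):

    show route table mpls.0
    show route forwarding-table family mpls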

vSphere ESXi access without vSphere client

Today I stumbled on a quite irritating problem. If you are an OS X or Linux user and your ESXi setup doesn’t have vCenter (which comes at a price), then you are left with the following options:

  1. Run a Windows VM and install the vSphere client there (it requires Windows and .NET).
  2. Lose your time trying to run it using Wine (there are reports that with a specific Wine version and vSphere client version it can work – I didn’t manage) or CrossOver (if on a Mac).
  3. Install VMware Fusion (if on a Mac) – it is a paid solution, but on the other hand you might already have it; it is capable of connecting to vCenter and ESXi hosts.
  4. Install the ESXi Embedded Host Client UI: https://labs.vmware.com/flings/esxi-embedded-host-client

The following assumes there is internet connectivity from your ESXi host (if there is not, that’s not a big deal – just upload the vib package over SCP), so connect to your ESXi host over SSH and then:
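The install boils down to two esxcli calls (the VIB URL below is the one documented by the fling at the time and may have changed since):

    # allow the host to fetch the package over HTTP(S)
    esxcli network firewall ruleset set -e true -r httpClient
    # install the host client UI package
    esxcli software vib install -v http://download3.vmware.com/software/vmw-tools/esxui/esxui-signed-latest.vib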

Now you can access your ESXi host over the UI: https://<esxi_host_ip>/ui/

ConfD – making your network elements programmable

Tail-f, originally a Swedish company, since bought by Cisco Systems, created a management agent software framework for network elements and applications – ConfD. If employed as part of a piece of software, ConfD makes programmability an inherent part of it. This post will use a very tangible example of its power: while Linux natively doesn’t offer NETCONF to manage interfaces, with the use of ConfD and a special program built on top of it, such programmability becomes possible.

ConfD uses YANG, which most up-to-speed network engineers should know – a data modeling language for NETCONF (if we treat NETCONF as the new SNMP, then YANG is to NETCONF what SMI – a “special version” of ASN.1 – was to SNMP). A better description can be found here: http://www.tail-f.com/what-is-yang/

One way to depict what ConfD does: let’s say you create your own program that acts as a router and you need a CLI for e.g. show ip route, and additionally you would like to be able to check the routes over NETCONF – that is then done automatically for you. ConfD runs as a process next to your routing process and manages its configuration, offering a CLI and NETCONF (also RESTCONF/SNMP/Web API if the Premium version is used). Of course this means that if you want to use ConfD, it has to be part of implementing your software – the whole configuration, the CLIs etc.

There are two versions of ConfD available – the free (Basic) one and the more capable, paid Premium one – a comparison can be found here: http://www.tail-f.com/confd-basic

More on how ConfD works and its composition (for example, there is a database – CDB – that keeps the config) can be found on the Tail-f site.

Let’s get down to business – the demonstration of how it can be used is based on:

  • ConfD Basic 6.3
  • YDK 0.7.1 (there will be a separate post on YDK – what it is, installation and usage)

The very reason for using ConfD 6.3 is that the newer versions, i.e. 6.4 and 6.6, use YANG 1.1 for modelling, and the example program I wanted to test with YDK on the newer ConfD versions uses YANG 1.1, while YDK has only partial support for it (full support is for YANG 1.0 – RFC 6020).

In order to install ConfD (a combined command sketch follows the list):

  1. Download the 6.3 and 6.4 versions appropriate for your OS from https://developer.cisco.com/site/confD/downloads/
  2. Install it: 

  3. Source the confdrc file (so as to have access to the confd CLI tools):

  4. Run one of the intro examples:

  5. Access the confd CLI by running the confd_cli command – unless you started with make cli, which takes you directly to the CLI.
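Something along these lines, assuming the Linux installer and ~/confd-6.3 as the target directory (paths are illustrative):

    # step 2: the download is a self-extracting installer
    sh confd-basic-6.3.linux.x86_64.installer.bin ~/confd-6.3
    # step 3: puts confd, confd_cli and friends on the PATH
    source ~/confd-6.3/confdrc
    # step 4: build and start one of the intro examples
    cd ~/confd-6.3/examples.confd/intro/1-2-3-start-query-model
    make all start
    # step 5: attach to the CLI (or run 'make cli' instead of 'make start')
    confd_cli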

 

Run the example:

The example I would like to use is called linuxcfg, but it doesn’t work out of the box. With both versions 6.3 and 6.4 installed, the Makefile in the ipmibs directory of the 6.3 examples folder must be overwritten with the one from 6.4:
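Assuming both trees live under the home directory (paths are illustrative):

    cp ~/confd-6.4/examples.confd/linuxcfg/ipmibs/Makefile \
       ~/confd-6.3/examples.confd/linuxcfg/ipmibs/Makefile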

When this is done you can compile and run the example:
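Again assuming ~/confd-6.3:

    cd ~/confd-6.3/examples.confd/linuxcfg
    make all start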

ip a from the Linux level:

show ip ipAddressEntry from confd level:

OK, so maybe the output is not ideal – IP addresses in hex, for example.

You can also use pipes and regexes to filter only what you need; you can save output, export to CSV, JSON, curly braces (like the Juniper CLI) etc.
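For example (a hypothetical session – the available pipe targets depend on the CLI style in use):

    show ip ipAddressEntry | include eth0
    show ip ipAddressEntry | display json
    show ip ipAddressEntry | save /tmp/addresses.txt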

From now on you are on your own… just kidding – in the next post I will show how to set up YDK and query linuxcfg over NETCONF with the YANG models provided with this example. For now we have a running linuxcfg with ConfD that exposes NETCONF. More to follow…

Modifying virtual machine images

What do you do when you have a virtual machine image and, for example, you need to change some file contents, like the ssh config? Modified images can be uploaded to Glance – repeating the same steps after spinning up several VMs of the same type can easily be avoided this way.

There are a few tools that can be used for that purpose and they are extremely powerful (most importantly, they usually operate on the image in place): virt-customize, virt-edit and guestfish, all from the libguestfs family.

Example – install wireshark & nmap in a RHEL minimal-install image (we assume there is Internet access from the machine running the virt-customize command) – what happens below is basically running a VM locally, modifying it and saving the changes back to the image.
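A sketch (the image file name is illustrative):

    # boots the image in a throwaway appliance VM and installs the packages
    virt-customize -a rhel7-minimal.qcow2 --install wireshark,nmap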

Example – modify the ssh server config. The following edits a file in the filesystem and saves the changes back to the image.
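Again a sketch – virt-edit can apply a non-interactive Perl expression to a file whose path you know:

    # allow root login in the baked-in sshd config
    virt-edit -a rhel7-minimal.qcow2 /etc/ssh/sshd_config -e 's/^#?PermitRootLogin.*/PermitRootLogin yes/'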

Guestfish gives access to the filesystem – it is more powerful than virt-edit in the sense that it allows browsing through the filesystems rather than just modifying a file whose path you already know. You can also create new files and add content to them. An example sequence of steps is as follows (a sample session follows the list):

1) OPTIONAL: export LIBGUESTFS_BACKEND=direct

2) guestfish -a <qcow2/img>

3) run

4) list-filesystems

5) mount <root_filesystem> /

6) modify files

7) umount /

8) exit
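A sample session putting the steps together (image name, filesystem and file contents are illustrative):

    $ export LIBGUESTFS_BACKEND=direct
    $ guestfish -a rhel7-minimal.qcow2
    ><fs> run
    ><fs> list-filesystems
    /dev/sda1: xfs
    ><fs> mount /dev/sda1 /
    ><fs> write /etc/motd "provisioned via guestfish\n"
    ><fs> umount /
    ><fs> exit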

Increase LV size of a qcow2 image

When you get a qcow2 image of a given size, it can’t simply be changed on the fly while the VM is running or by just giving more space to a VM flavor in OpenStack. The situation gets even more complex when the image has LVs inside, but fortunately, by using guestfish and virt-resize, the image can be adapted to one’s needs. Below are the steps I used to perform such a modification (a combined command sketch follows the list):

Default image: image-name-250G.qcow2
Resized: image-name-750G.qcow2

1. Check which device to resize (this image has an LVM PV on /dev/sda2):

2. Resize the image (from 250 to 750GB) – resizing is NOT performed in place:

3. Resize the disk, the specific device (in this case /dev/sda2) and the LVM PV:

4. Go into guestfish and use the free space in the VG to create an additional LV, filesystem and mount point (DISABLE the 64bit flag on EXT4 – required by this image, as it uses an outdated e2fsck that doesn’t support the 64bit option):

5. Upload the image to Glance.
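The whole sequence could look like this (device, VG and LV names are illustrative; note guestfish’s lvcreate takes the size in MB):

    # 1) find the partition holding the PV
    virt-filesystems --long --all -a image-name-250G.qcow2
    # 2) create the bigger target image
    qemu-img create -f qcow2 image-name-750G.qcow2 750G
    # 3) copy the image across, expanding /dev/sda2 (the PV inside grows with it)
    virt-resize --expand /dev/sda2 image-name-250G.qcow2 image-name-750G.qcow2
    # 4) create an extra LV and filesystem in the freed-up VG space
    guestfish -a image-name-750G.qcow2
    ><fs> run
    ><fs> lvcreate extra vg0 512000
    ><fs> mkfs ext4 /dev/vg0/extra features:^64bit
    ><fs> exit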

Automatically provisioning baremetal/VM servers

During my endeavours I had a situation where I had to provision 10 servers (install all of them manually and configure the same things on all of them, the same files etc).

There is a nice alternative to that called Stacki, from StackIQ (bought by Teradata last year). What it offers is a specialized PXE server that is used to boot and provision baremetal/VM servers (CentOS/RedHat/Ubuntu).

Its architecture looks as follows (Stacki server == Frontend, server to be provisioned == Backend):

First you prepare a CSV file with the list of hosts and their MACs, and as a next step you add Puppet, which will be used to provision the servers after booting.
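A hypothetical host CSV (the exact column set is described in the Stacki docs – the one below is only indicative):

    NAME,APPLIANCE,RACK,RANK,IP,MAC,INTERFACE,NETWORK
    backend-0-0,backend,0,0,10.1.1.10,52:54:00:aa:bb:01,eth0,private
    backend-0-1,backend,0,1,10.1.1.11,52:54:00:aa:bb:02,eth0,private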

More can be found here: 

https://github.com/Teradata/stacki

The Frontend machine can be a VM – it actually worked pretty nicely; I tested it by provisioning other VMs.

Kill a hanging machine in VMware ESXi

It might not be possible to kill a “running” (in reality hanging) VM from the vCenter GUI or using the vSphere Client.

In such a situation you need to log in to the ESXi host via SSH, list all VMs and kill the appropriate one:
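The two esxcli commands needed (the World ID comes from the list output):

    # list running VMs together with their World IDs
    esxcli vm process list
    # kill the hanging one; --type accepts soft, hard or force
    esxcli vm process kill --type=force --world-id=<world_id>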

Connecting over SSH and running a command over NETCONF

When a HW/virtualized/containerized network element offers a NETCONF interface for management, it is extremely beneficial to use it for repetitive tasks (upgrades, sanity checking, route table checking etc).

NETCONF can be used over different transports: SSH (the mandatory one, RFC 6242), TLS, and – historically – SOAP and BEEP.

If SSH is used, it must be made sure that the NETCONF SSH subsystem is enabled in the SSH config on the device.
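On a plain Linux box with an external NETCONF agent this is a single sshd_config line (the subsystem binary path below is illustrative – ConfD, for instance, can also terminate NETCONF SSH on its own built-in server instead):

    Subsystem netconf /usr/libexec/netconf-subsystem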

If a YANG model is available then Tail-f offers a NETCONF client and Java class generator at:

https://github.com/tail-f-systems/JNC

Alternatively, as in the example below, Python can be used to connect over SSH manually and, based on the YANG model (if one is used), instruct the device to perform a specific task.

There is a Python library for that purpose called ncclient (NETCONF client):

https://github.com/ncclient/ncclient

First we need to do the proper imports in our client script, after installing the library:
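A sketch – the manager module handles the session, the RPC base class lets us build a custom operation later on, and new_ele builds XML elements:

    from ncclient import manager
    from ncclient.operations import RPC
    from ncclient.xml_ import new_ele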

Let’s define the method used for connecting to the device:
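Something along these lines (the flags depend on your target device; they are relaxed here for lab use):

    def connect(host, port, user, password):
        # host-key checking disabled for lab use only
        return manager.connect(host=host,
                               port=port,
                               username=user,
                               password=password,
                               hostkey_verify=False,
                               look_for_keys=False,
                               allow_agent=False)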

Create a class that inherits from the RPC class of the ncclient library and define a method that will compose an XML NETCONF message based on the YANG model:
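A sketch of the pattern – the operation name and namespace below are hypothetical and would come from your YANG model:

    class GetRoutes(RPC):
        def request(self):
            # builds <get-routes xmlns="..."/> and sends it inside an <rpc> envelope
            node = new_ele("get-routes", attrs={"xmlns": "http://example.com/routes"})
            return self._request(node)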

Connect to the device and perform the requested action (probably not the safest way – connecting with a clear-text password):
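Wiring it together (host, port and credentials are illustrative; instantiating the RPC by hand reaches into ncclient internals, which is the common workaround for custom RPCs):

    m = connect("192.0.2.1", 2022, "admin", "admin")
    reply = GetRoutes(m._session, m._device_handler).request()
    print(reply.xml)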

The request in NETCONF-formatted XML would look as follows:
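For the hypothetical operation above:

    <rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
      <get-routes xmlns="http://example.com/routes"/>
    </rpc>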

And a corresponding YANG model:
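Again hypothetical, just enough to define the RPC and its output:

    module routes {
      namespace "http://example.com/routes";
      prefix rt;

      rpc get-routes {
        output {
          list route {
            key "prefix";
            leaf prefix { type string; }
            leaf next-hop { type string; }
          }
        }
      }
    }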

Lastly, in XML:
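A hypothetical reply matching the model above:

    <rpc-reply message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
      <route xmlns="http://example.com/routes">
        <prefix>10.0.0.0/24</prefix>
        <next-hop>192.0.2.254</next-hop>
      </route>
    </rpc-reply>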