Why Insomnia and not Postman?

Well, the answer is straightforward – apples vs oranges. I simply prefer Insomnia, even though it is just a simplistic API client. But that is exactly what I normally need, without additional features, and Insomnia fulfils my daily needs.

  • Depending on the environment you run your API calls against, you can define variables and their values per environment, e.g. lab/prod (top-left corner). Templating relies on Nunjucks.

variable values for Dev environment

  • Allows referring to the results of other API calls, e.g. an Authorization Bearer token value. In the example below, Fetch users requires a Bearer token to be included in the GET request, hence it first needs to perform Authorization, obtain the token and insert it into the GET request:

The Authorize URL looks as follows (IMPORTANT: never embed usernames or passwords in config files – rely on solutions like HashiCorp's Vault, Azure Key Vault etc.):

Peeking at the results of the POST (Timeline) shows that the response from the server includes an authorization header containing the token value.


A subsequent method can use the value simply by manually copy-pasting the header value into the authorization header of the GET request:

A smarter solution is to define the header in the Header tab and use a function that will call the POST Authorize method, grab the resulting authorization header value (Bearer token) and insert it into a header in the GET request. In the value field, type CTRL+Space and look for the function Response – Header:

Double-click on the red field and the Edit tag window will pop up:

Now your requests will use the appropriate value for the authorization header.

IMPORTANT: in order to have the Edit tag pop-up available, you must leave Raw template syntax unticked, as otherwise you'll have to manually craft the value of the header in Nunjucks.
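For reference, the raw Nunjucks form of such a tag looks roughly like the snippet below. This is a hedged sketch: the request ID is a placeholder (Insomnia generates its own internal IDs), and the exact argument list may differ between Insomnia versions.

```
{% response 'header', 'req_xxxxxxxx', 'Authorization' %}
```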

Scanning docker container images

Looking for potential vulnerabilities in Docker images is crucial before shipping them to customers or putting them into production. Scanning images should be part of any CI pipeline so that shipped software is as secure as possible and security vulnerabilities and CVEs are detected early in the process.

Docker Engine comes with the docker scan command, which runs on the Snyk engine to detect CVEs and vulnerabilities. The scan results also show potential remediations (e.g. using a newer base image).

More details on: https://docs.docker.com/engine/scan/
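A quick example of running the scan against a locally built image (the image name is a placeholder):

```shell
# first run asks you to accept Snyk's licence terms
docker scan myapp:latest

# pass the Dockerfile too to get base-image upgrade advice
docker scan --file Dockerfile myapp:latest
```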

Another solution is trivy from Aqua Security, which serves a similar purpose: https://github.com/aquasecurity/trivy. Running it after installation is as simple as:
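For example (image name is illustrative; the --exit-code flag makes it CI-friendly by failing the build on findings):

```shell
# scan an image, report only HIGH/CRITICAL findings,
# and return a non-zero exit code if any are present
trivy image --severity HIGH,CRITICAL --exit-code 1 python:3.8-alpine
```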

openstack cli in a docker container

If you need to access your OpenStack cluster but there is no option to install packages on a jumpbox host that can access the cluster (lack of internet access or privileges), an alternative is to build a Docker image locally that includes the OpenStack CLI utility. The assumption is that the jumpbox host has Docker installed and the user can load Docker images and run containers.

Firstly, prepare a Dockerfile (based on the official Python Docker images in this example):
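A minimal sketch of such a Dockerfile (the base image version, build-arg name and stackrc handling are assumptions, not the exact original file):

```dockerfile
FROM python:3.8-slim

# install the OpenStack CLI into the image
RUN pip install --no-cache-dir python-openstackclient

# path to the stackrc file, passed at build time
ARG RCFILE=stackrc
COPY ${RCFILE} /root/stackrc

# source the cluster credentials in every interactive shell
RUN echo '. /root/stackrc' >> /root/.bashrc

ENTRYPOINT ["/bin/bash"]
```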

Build the Docker image and provide as an argument the path to a stackrc file that includes details on how to access the cluster (endpoint, passwords etc.).
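Assuming the Dockerfile accepts the rc file path as a build argument (the argument and image names here are illustrative), the build could look like:

```shell
docker build --build-arg RCFILE=./stackrc -t openstack-cli:latest .
```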

After the build is finished, save the Docker image to a tarball:

Copy the tarball to a destination machine which can access the OpenStack cluster and load the Docker image:
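The save/copy/load round-trip boils down to something like this (host and file names are placeholders):

```shell
# on the build machine: export the image to a tarball
docker save -o openstack-cli.tar openstack-cli:latest

# copy it to the jumpbox
scp openstack-cli.tar user@jumpbox:/tmp/

# on the jumpbox: load the image into the local docker daemon
docker load -i /tmp/openstack-cli.tar
```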

Run the container and try listing e.g. servers:
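For instance (image name as assumed above):

```shell
# start an interactive shell in the container
docker run --rm -it openstack-cli:latest

# inside the container, query the cluster
openstack server list
```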

AWS CloudFormer

Creating resources in AWS can be performed in multiple ways:

  • AWS SDK (Java/Python/JS)
  • IT automation/IaaC tools (Chef, Puppet, Terraform, Ansible, SaltStack, Pulumi)
  • REST API directly
  • CloudFormation templates

The first three underneath ultimately use the REST API, but using the REST API directly to create resources in AWS might be cumbersome due to the authentication that must be performed on each request.

  • CLI/SDK/REST API/Chef/Ansible are more of an imperative/procedural way to define infrastructure (each step must be explicitly defined so as to create the desired state)
  • Automation tools such as Terraform/SaltStack/Puppet/Pulumi are examples of the declarative way (a definition of the desired state is provided and it is up to the tool to create that state in AWS).

CloudFormation is declarative as well: it is up to AWS to create the desired state as defined in a CloudFormation template. A template is a JSON/YAML text file which can be accompanied by parameters to fill in template values (the template can contain default values for parameters).
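A minimal template illustrating a parameter with a default value, here creating a single S3 bucket:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal example - a single S3 bucket
Parameters:
  BucketName:
    Type: String
    Default: my-demo-bucket   # default value, can be overridden at stack creation
Resources:
  DemoBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Ref BucketName
```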

A template can be built directly by editing a file. However, what if resources were created in the AWS Console and one would like to save the current state and share it over Slack as a file, or check it in to git for versioning?

CloudFormer can be of help. While it is still in Beta (at least since the end of 2018), meaning it should not be used for critical/prod environments, it is ideal for demo/PoC/dev purposes:

EFK (Elasticsearch-Fluentd-Kibana) in kubeadm-dind-cluster

When you wish to have your logs sent to Elasticsearch and browse them with Kibana, you should add the EFK stack to your k8s cluster.

My first attempt to install EFK was with https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch

However, fluentd was failing to send logs to Elasticsearch.

Fortunately there is a blog post on the DigitalOcean website describing how to do it, and it happens to work on a dind cluster: https://www.digitalocean.com/community/tutorials/how-to-set-up-an-elasticsearch-fluentd-and-kibana-efk-logging-stack-on-kubernetes

All the config posted there is valid except for a few caveats related to how dind stores logs. To make the fluentd pods have access to the logs, add in fluentd.yaml under volumeMounts:

And in volumes:
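Both additions sketched together in YAML. The mount path is an assumption based on dind keeping its Docker data under /dind (so container logs land under /dind/docker/containers while the /var/log/containers symlinks point there); verify the actual path on your nodes:

```yaml
# in the fluentd DaemonSet container spec, under volumeMounts:
        - name: dindlog
          mountPath: /dind/docker/containers
          readOnly: true

# in the pod spec, under volumes:
      - name: dindlog
        hostPath:
          path: /dind/docker/containers
```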

Also, in elasticsearch-statefulset.yaml decrease the required storage space and match storageClassName to one that exists in the cluster, in volumeClaimTemplates:
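A sketch of the adjusted section (the claim name follows the DigitalOcean tutorial; treat it as an assumption):

```yaml
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: local-path   # must match an existing StorageClass
      resources:
        requests:
          storage: 1Gi               # lowered from the tutorial's default
```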

Here the local-path provisioner has been used and storage is set to a low value like 1Gi.

The local-path provisioner had been added to the cluster beforehand (https://github.com/rancher/local-path-provisioner).

Now EFK can be onboarded onto a cluster as follows:
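A sketch of the onboarding, using the manifest names from the DigitalOcean tutorial (adjust to however you saved them):

```shell
kubectl create namespace kube-logging
kubectl apply -f elasticsearch_svc.yaml
kubectl apply -f elasticsearch_statefulset.yaml
kubectl apply -f kibana.yaml
kubectl apply -f fluentd.yaml

# wait until everything is Running
kubectl get pods -n kube-logging
```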

Check if you can access Elasticsearch and Kibana:

In the kubectl port-forward command, provide the pod name relevant to your cluster. Then check access on the host running kubectl port-forward:
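For example (pod and deployment names follow the tutorial's manifests and may differ in your cluster):

```shell
# forward Elasticsearch and Kibana ports to the local host
kubectl port-forward es-cluster-0 9200:9200 -n kube-logging &
kubectl port-forward deployment/kibana 5601:5601 -n kube-logging &

# Elasticsearch should answer with its cluster info
curl http://localhost:9200

# Kibana UI: open http://localhost:5601 in a browser
```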

Now your system logs (not related to user pods, e.g. kube-apiserver) should be fed by fluentd to Elasticsearch, and Kibana can read them from the es backend. You can have workload pods' logs flowing into Elasticsearch as well, or build a dedicated EFK instance just for that purpose to segregate system vs workload logging.


kubeadm-dind-cluster or kind

Even though it was recently retired in favour of kind (https://kind.sigs.k8s.io/), kubeadm-dind-cluster (https://github.com/kubernetes-retired/kubeadm-dind-cluster) is still a great way to run a Kubernetes cluster locally on a laptop (in my case a MacBook). Its main limitation is that it supports v1.14.1, while kind supports v1.15.3 (though v1.16 was released yesterday). Both rely on the same principle: the nodes are neither bare-metal nodes nor VMs, but plain Docker containers. kubeadm is then used to deploy k8s on those Docker-based nodes. System pods as well as user pods run as containers nested inside the Docker container hosts (hence dind = docker-in-docker).

Yet with kubeadm-dind-cluster you can easily switch between CNI plugins, and it survives Docker for Desktop restarts and OS reboots.

Switching between CNI plugins is as easy as modifying the following variable inside the install script:
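Per the project's README, the variable is CNI_PLUGIN (settable in dind-cluster.sh or exported before running it); the list of supported values below is from memory, so double-check it against the README:

```shell
# bridge (default), flannel, calico, calico-kdd or weave
CNI_PLUGIN="${CNI_PLUGIN:-calico}"
```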

The main issue with kind is that an OS reboot (not necessarily a docker restart) causes the IPs originally assigned to the Docker containers acting as k8s nodes to change. Hence after creating a cluster, making some configuration changes, deploying pods etc., we are forced to redo the job. The root cause is that Docker doesn't maintain the originally assigned container IPs across docker restarts/OS reboots – at least it is not forced to (https://github.com/moby/moby/issues/2801). It is possible to assign static IPs to containers when starting them with docker run; unfortunately kind does not offer such an option (for now). The issue is tracked at: https://github.com/kubernetes-sigs/kind/issues/148

All in all, when you want to practise with (multi-node) k8s on your laptop, use kubeadm-dind-cluster. kind is the very near future, once it is made stable across docker restarts.

Terraform or Pulumi for IaC

Terraform by HashiCorp is a pioneer in the area of IaC tooling, allowing you to define infrastructure in a declarative manner using its own language (HCL – HashiCorp Configuration Language). That means it:

  • takes a definition of the infrastructure (a simplistic example could be a couple of VMs running in AWS or OpenStack) found in the config file (can be a single .tf file) written in HCL
  • connects to the IaaS using an appropriate provider – a plugin/extension that allows it to talk to AWS/OpenStack etc.
  • compares the current state of the infrastructure against what is found in the definition file (.tf)
  • tries to meet the desired state found in the definition file (in the example above, that means deploying the VMs)

This approach gives a deterministic state of your infrastructure, and deployment is performed by calling the API of the infrastructure provider rather than manually or via methods the providers already offer (e.g. HEAT templates or the python client for OpenStack; CloudFormation or the awscli tool for AWS). If a VM is deleted for any reason on the IaaS, re-running terraform apply (the command that checks the current state of the infra, compares it against the definition file and tries to meet the desired state) will simply deploy the missing VM.
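A minimal .tf sketch for the "couple of VMs in AWS" example (region, AMI ID and names are placeholders):

```hcl
provider "aws" {
  region = "eu-west-1"
}

resource "aws_instance" "vm" {
  count         = 2                        # the "couple of VMs"
  ami           = "ami-0123456789abcdef0"  # placeholder AMI ID
  instance_type = "t2.micro"

  tags = {
    Name = "demo-vm-${count.index}"
  }
}
```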

It also provides a common language for defining infrastructure on different infrastructure providers (bearing in mind that each of them uses different basic building blocks, e.g. the aws_subnet resource vs the openstack_networking_subnet_v2 resource).

Terraform allows for more than just creating resources on public/private cloud providers; it can deploy on Kubernetes (more of a PaaS than IaaS) or interact with a REST API. People and companies create their own custom providers.

However, anyone with a development background will see that HCL is limited – even writing a simple loop, while possible, is done somewhat indirectly (I admit I did not have time to check 0.12, which was supposed to add typical programming constructs like a for loop). You immediately feel it should be done differently: you want to write code and have appropriate programming libraries wrapping the API calls to the different providers. This is where Pulumi comes in – its tenets are the same:

  • it is declarative
  • it compares the current state against the definition (defined in the programming language of choice – at the moment of writing JS/TS/Python/Golang are supported)
  • uses programming language libraries to connect to IaaS (e.g. pulumi_gcp in Python) which extend core pulumi framework
  • tries to meet the desired state (pulumi up vs terraform apply)

What is new is that it is SaaS by default and requires setting up an account on the Pulumi website. The Pulumi service keeps state on Amazon S3, but there is also an option to keep it locally (different backends). If it is kept with Pulumi, you can inspect your created infrastructure resources via their website (this is actually very cool – you can even click through to the consoles of created VMs).

Rather than having a separate language to define the infrastructure, you just import the appropriate libraries and start defining the infrastructure directly in code. There is no better introduction to Pulumi than Joe Duffy's blog post: http://joeduffyblog.com/2018/06/18/hello-pulumi/.
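A tiny Pulumi program in Python sketching two demo VMs in AWS (resource names follow the pulumi_aws library; the AMI ID is a placeholder):

```python
import pulumi
import pulumi_aws as aws

# two demo VMs, defined in plain Python rather than HCL
for i in range(2):
    server = aws.ec2.Instance(
        f"demo-vm-{i}",
        ami="ami-0123456789abcdef0",   # placeholder AMI ID
        instance_type="t2.micro",
        tags={"Name": f"demo-vm-{i}"},
    )
    pulumi.export(f"public_ip_{i}", server.public_ip)
```

Running pulumi up then compares this definition against the current state, just like terraform apply.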

So which one should you choose? I would say it depends on the people who are going to use it:

  • Developers and people with development skills should opt for Pulumi so as not to be constrained by HCL. That also means the learning curve is steeper for Pulumi.
  • People without such skills will feel more comfortable with HCL and might not feel constrained by it; they should go with Terraform. I just felt I had to create a lot of boilerplate with it.
  • Terraform is more mature (an older tool), while Pulumi is just kicking off – but Pulumi's people are quite helpful and available on Slack.

I have posted a few examples with both tools in my GitLab repo:



Juniper vMX – MPLS over lt interface

I was wondering if I could run MPLS over OVS/VXLAN tenant networks in OpenStack (VLAN tagging, for example, won't work in such a case – not by default at least; I didn't have an option to test with the vlan_transparent setting at the network level). I wanted to quickly check it out between a pair of vMXes with static LSPs, remembering that I can run logical-systems or routing-instances interconnected with tunnel (lt) interfaces. That works just fine, but only with logical-systems.

LS-vmx01 lt-0/0/0.1 <-> lt-0/0/0.2 vmx01 ge-0/0/2.0 <-> ge-0/0/2.0 vmx02 lt-0/0/0.2 <-> lt-0/0/0.1 LS-vmx02

vmx01 cfg:

vmx02 cfg:

The main thing here was to set family mpls on the lt-0/0/0 units and add the lt-0/0/0.2 interface to protocols mpls.
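The relevant part sketched in Junos set commands (unit numbers follow the topology above; the addresses are placeholders, and peer-unit ties the two lt units together):

```
set interfaces lt-0/0/0 unit 1 encapsulation ethernet
set interfaces lt-0/0/0 unit 1 peer-unit 2
set interfaces lt-0/0/0 unit 1 family inet address 10.0.0.1/30
set interfaces lt-0/0/0 unit 1 family mpls
set interfaces lt-0/0/0 unit 2 encapsulation ethernet
set interfaces lt-0/0/0 unit 2 peer-unit 1
set interfaces lt-0/0/0 unit 2 family inet address 10.0.0.2/30
set interfaces lt-0/0/0 unit 2 family mpls
set protocols mpls interface lt-0/0/0.2
```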

Only after that did POP entries appear in the routing and forwarding tables:

vSphere ESXi access without vSphere client

Today I stumbled upon quite an irritating problem. If you are an OSX or Linux user and your ESXi setup doesn't have vCenter (which comes with a price), then your options are:

  1. Running a Windows VM and installing the vSphere client there (it requires Windows and .NET)
  2. Losing time trying to run it using Wine (there are instances where, with a specific Wine version and vSphere client, it could work – I didn't manage) or CrossOver (if on Mac)
  3. Installing VMware Fusion (if on Mac) – a paid solution, though you might already have it; it is capable of connecting to vCenter and ESXi hosts
  4. Installing the ESXi Embedded Host Client UI: https://labs.vmware.com/flings/esxi-embedded-host-client

The following assumes there is internet connectivity from your ESXi host (not a big deal if there isn't – just use SCP to upload the vib package). Connect to your ESXi host over SSH and then:
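As far as I recall from the fling's instructions, the install is a single esxcli call (allowing outgoing HTTP first if the firewall blocks it); verify the vib URL against the fling page:

```shell
# allow the host to fetch the vib over http
esxcli network firewall ruleset set -e true -r httpClient

# install the embedded host client
esxcli software vib install -v http://download3.vmware.com/software/vmw-tools/esxui/esxui-signed-latest.vib
```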

Now you can access your ESXi host over the UI: https://<esxi_host_ip>/ui/