Scanning docker container images

Looking for potential vulnerabilities in docker images is crucial before shipping them to customers or putting them into production. Image scanning should be part of any CI pipeline so that shipped software is as secure as possible and security vulnerabilities and CVEs are detected early in the process.

Docker Engine comes with the docker scan command, which runs on the Snyk engine to detect CVEs and vulnerabilities. The scan result will also show potential remediations (e.g. use a newer base image).
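A minimal invocation looks like the following (the image name here is just a placeholder):

    $ docker scan myapp:latest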

More details at: https://docs.docker.com/engine/scan/

Another solution is trivy from Aqua Security, which serves a similar purpose: https://github.com/aquasecurity/trivy. Running it after installation is as simple as:
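For example (the image name is again a placeholder; newer trivy releases use the image subcommand, while older ones take the image name directly):

    $ trivy image python:3.7-alpine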

openstack cli in a docker container

If you need to access your openstack cluster but there is no option to install packages on the jumpbox host that can access the cluster (due to lack of internet access or privileges), an alternative is to build a docker image locally that includes the openstack CLI utility. The assumption is that the jumpbox host has docker installed and the user can load docker images and run containers.

First, prepare a Dockerfile (based off the official python docker images in this example):
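A minimal sketch of such a Dockerfile, assuming the openstack client is installed via pip and the stackrc path is passed in as a build argument (the argument name STACKRC is my choice here):

    FROM python:3.7-slim

    # path to the rc file with cluster access details, passed at build time
    ARG STACKRC=stackrc

    # install the openstack command-line client
    RUN pip install --no-cache-dir python-openstackclient

    # bake the cluster credentials into the image and source them on login
    COPY ${STACKRC} /root/stackrc
    RUN echo "source /root/stackrc" >> /root/.bashrc

    CMD ["/bin/bash"]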

Build the docker image, providing as a build argument the path to the stackrc file that includes details on how to access the cluster (endpoint, passwords, etc.):
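With the Dockerfile sketched above, that could look like (tag and argument name are placeholders):

    $ docker build --build-arg STACKRC=./stackrc -t openstack-cli:latest .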

After the build is finished, save the docker image to a tarball:
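For the image built above:

    $ docker save -o openstack-cli.tar openstack-cli:latest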

Copy the tarball to the destination machine that can access the openstack cluster and load the docker image there:
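Assuming the jumpbox is reachable over SSH (host and user names are placeholders):

    $ scp openstack-cli.tar user@jumpbox:~
    $ ssh user@jumpbox
    $ docker load -i openstack-cli.tar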

Run the container and try listing, e.g., servers:
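Continuing with the placeholder image name:

    $ docker run --rm -it openstack-cli:latest
    # inside the container, the sourced stackrc provides the credentials
    $ openstack server list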

kubeadm-dind-cluster or kind

Even though it was recently retired in favour of kind (https://kind.sigs.k8s.io/), kubeadm-dind-cluster (https://github.com/kubernetes-retired/kubeadm-dind-cluster) is still a great way to run a kubernetes cluster locally on a laptop (in my case a MacBook). Its main limitation is that it supports v1.14.1, while kind supports v1.15.3 (though v1.16 was released yesterday). Both rely on the same principle: nodes are neither bare-metal nodes nor VMs, they are plain docker containers. kubeadm is then used to deploy a k8s cluster onto those docker-based nodes. System pods as well as user pods run as embedded containers inside the docker container hosts (hence dind = docker-in-docker).

Yet with kubeadm-dind-cluster you can easily switch between CNI plugins, and it keeps working across Docker for Desktop restarts and OS reboots.

Switching between CNI plugins is as easy as modifying the following variable inside the install script:
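If I recall correctly, in the dind-cluster scripts this is the CNI_PLUGIN variable, with supported values including bridge, flannel, calico and weave:

    # in dind-cluster-v1.14.sh; default is bridge
    CNI_PLUGIN="${CNI_PLUGIN:-bridge}"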

The main issue with kind is that an OS reboot (not necessarily a docker restart) causes the IPs originally assigned to the docker containers acting as k8s nodes to change. Hence after creating a cluster, making some configuration changes, deploying pods etc., we are forced to redo the job. The root cause of the problem is that docker does not maintain the IPs originally assigned to containers across docker restarts/OS reboots; at least it is not forced to do so (https://github.com/moby/moby/issues/2801). It is possible to assign static IPs to containers when starting them with docker run, but unfortunately kind does not offer such an option (for now). The issue is tracked at: https://github.com/kubernetes-sigs/kind/issues/148
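For reference, pinning a container IP manually with plain docker requires a user-defined network (the subnet, address and network name below are arbitrary examples):

    # create a user-defined network with a known subnet
    $ docker network create --subnet=172.25.0.0/16 demo-net

    # pin the container to a fixed address within that subnet
    $ docker run -d --net demo-net --ip 172.25.0.10 nginx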

All in all, when you want to practise with (multi-node) k8s on your laptop, kubeadm-dind-cluster is the way to go for now. kind is the very near future, once it is made stable across docker restarts.