OpenFaaS playground

I was playing around with OpenFaaS and needed a local environment for it.

Install Ubuntu 20.04 Server in a virtual machine.

Install docker:

sudo apt-get remove docker docker-engine docker.io containerd runc
sudo apt-get update
sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
sudo usermod -a -G docker $USER

Install arkade and kubectl:

curl -sLS https://dl.get-arkade.dev | sudo sh
arkade get kubectl
echo "export PATH=\$PATH:\$HOME/.arkade/bin" >> ~/.bashrc
. ~/.bashrc

Download the latest version of minikube and start a new Kubernetes cluster with docker as “backend”:

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube_latest_amd64.deb
sudo dpkg -i minikube_latest_amd64.deb
minikube start --driver=docker

Deploy a private docker registry (domain docker-registry), which should be accessible from the host (the Ubuntu server), from minikube (docker) and from the namespace where the OpenFaaS functions are deployed:

arkade install docker-registry

In the output you’ll see the password for the admin user; we need it later on, so make sure to save it. We also need to be able to access the registry “externally” (outside of the Kubernetes cluster), with a self-signed certificate:

export REGISTRY_PASSWORD=<password from output above>
kubectl expose deploy/docker-registry --type=NodePort --name=docker-registry-external --port=5000
minikube addons enable ingress
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -out docker-registry-ingress-tls.crt \
    -keyout docker-registry-ingress-tls.key \
    -subj "/CN=docker-registry/O=docker-registry-ingress-tls"
kubectl create secret tls docker-registry-ingress-tls \
    --key docker-registry-ingress-tls.key \
    --cert docker-registry-ingress-tls.crt
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: 2048m
    nginx.ingress.kubernetes.io/rewrite-target: /\$1
  name: docker-registry-ingress
  namespace: default
spec:
  rules:
    - host: docker-registry
      http:
        paths:
          - path: /(.*)
            pathType: Prefix
            backend:
              service:
                name: docker-registry-external
                port:
                  number: 5000
  tls:
    - hosts:
      - docker-registry
      secretName: docker-registry-ingress-tls
EOF
sudo mkdir /usr/local/share/ca-certificates/docker-registry
sudo chmod 755 /usr/local/share/ca-certificates/docker-registry
sudo cp docker-registry-ingress-tls.crt /usr/local/share/ca-certificates/docker-registry
sudo chmod 644 /usr/local/share/ca-certificates/docker-registry/*
sudo update-ca-certificates
sudo mkdir -p /etc/docker/certs.d/docker-registry:443
sudo cp docker-registry-ingress-tls.crt /etc/docker/certs.d/docker-registry:443/ca.crt
scp -i $(minikube ssh-key) docker-registry-ingress-tls.crt docker@$(minikube ip):/home/docker
minikube ssh
sudo mkdir /usr/local/share/ca-certificates/docker-registry
sudo chmod 755 /usr/local/share/ca-certificates/docker-registry
sudo cp docker-registry-ingress-tls.crt /usr/local/share/ca-certificates/docker-registry
sudo chmod 644 /usr/local/share/ca-certificates/docker-registry/*
sudo update-ca-certificates
sudo mkdir -p /etc/docker/certs.d/docker-registry:443
sudo cp docker-registry-ingress-tls.crt /etc/docker/certs.d/docker-registry:443/ca.crt
sudo kill -SIGHUP $(pidof dockerd)
sudo apt update && sudo apt install -y vim-tiny
sudo vim.tiny /etc/hosts # add docker-registry after minikube
exit
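
Instead of editing /etc/hosts by hand inside the minikube VM, the entry can be appended non-interactively. A sketch on a scratch file (the address is illustrative; inside minikube ssh you would run the same awk against /etc/hosts with sudo):

```shell
# Create a stand-in hosts file with the usual minikube entry.
printf '127.0.0.1 localhost\n192.168.49.2 minikube\n' > hosts.demo
# Reuse the address already mapped to "minikube" for "docker-registry".
awk '$2 == "minikube" { printf "%s docker-registry\n", $1 }' hosts.demo > entry.tmp
cat entry.tmp >> hosts.demo
# hosts.demo now also maps docker-registry to the minikube address.
cat hosts.demo
```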

Install and deploy OpenFaaS and the command line tool, and login to the OpenFaas gateway:

arkade install faas-cli
arkade install openfaas
kubectl port-forward -n openfaas svc/gateway 8080:8080 &
PASSWORD=$(kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode; echo)
echo -n $PASSWORD | faas-cli login --username admin --password-stdin
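
The password step above can be sketched with a stand-in value: the secret stores the password base64-encoded, and jsonpath returns that encoded form, which is why it has to be decoded before being piped to faas-cli:

```shell
# "super-secret" stands in for the generated basic-auth password.
ENCODED=$(printf '%s' 'super-secret' | base64)
# This is the decode the PASSWORD= line performs on the secret's data.
printf '%s' "$ENCODED" | base64 --decode
```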

The deployed functions need authentication for the private docker registry:

kubectl create secret docker-registry docker-registry-credentials \
    -n openfaas-fn \
    --docker-server=docker-registry:443 \
    --docker-username=admin \
    --docker-password=$REGISTRY_PASSWORD \
    --docker-email=docker@example.com
kubectl edit serviceaccount default -n openfaas-fn

In the editor, add the following lines:

imagePullSecrets:
- name: docker-registry-credentials
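
Alternatively (a sketch, using the secret name created above), the same change can be made non-interactively with kubectl patch:

```shell
# Attach the image pull secret to the default service account without
# opening an editor; requires the running cluster from the previous steps.
kubectl patch serviceaccount default -n openfaas-fn \
    -p '{"imagePullSecrets":[{"name":"docker-registry-credentials"}]}'
```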

Create a docker client configuration file with the basic authentication credentials for the private docker registry:

mkdir -p ~/.docker/
cat > ~/.docker/config.json <<EOF
{
        "auths": {
                "docker-registry:443": {
                        "auth": "$(echo -n "admin:$REGISTRY_PASSWORD" | base64)"
                }
        }
}
EOF
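
To sanity-check the generated auth value, it should decode back to user:password (shown here with a stand-in password, not the real one):

```shell
# "s3cret" stands in for the real registry password.
REGISTRY_PASSWORD='s3cret'
AUTH=$(printf '%s' "admin:$REGISTRY_PASSWORD" | base64)
# Decoding the auth value must yield "admin:s3cret" again.
printf '%s' "$AUTH" | base64 --decode
```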

That should be it.

HP DisplayLink docking station in Ubuntu

DisplayLink docking stations work really well in Ubuntu. A list of supported devices can be found here.

I had the opportunity to try one out from HP, and there were some additional steps, besides installing the DisplayLink driver for Ubuntu, that were necessary to get a good experience (at least for me).

Start by downloading the deb package for Ubuntu here. Extract and install:

cd ~/Downloads
unzip DisplayLink\ USB\ Graphics\ Software\ for\ Ubuntu\ 1.2.1.zip
sudo bash displaylink-driver-1.2.65.run

It is easiest to just reboot your computer after the installation is done. I usually don’t plug in the docking station until I get to the lightdm login screen.

You might have to adjust the order of the monitors under System settings, Display if you have more than one external monitor connected.

Every time the HP docking station is connected, it will mount a USB mass storage device containing the Windows drivers. This is quite annoying, but it’s easy to fix by blacklisting it with a udev rule. I also wanted to blacklist the Ethernet interface, since it isn’t used (and also caused problems with NetworkManager sometimes dropping the wireless connection).

For the version of the docking station I was using, the following two rules would take care of that:

sudo bash -c 'tee /etc/udev/rules.d/98-displaylink-ignore.rules <<EOF
# Disable displaylink (port replicator) ethernet device
SUBSYSTEM=="usb", DRIVER=="cdc_ncm", ATTRS{interface}=="HP USB Giga Ethernet", \
ATTR{authorized}="0"

# Disable displaylink (port replicator) usb disk
SUBSYSTEM=="usb", ATTRS{idProduct}=="1165", ATTRS{idVendor}=="048d", \
ATTRS{manufacturer}=="iTE Tech", ATTR{authorized}="0"
EOF'

As always, udevadm info -a -p with the corresponding sysfs class path for the device is the way to find the correct information for your particular device.

Reload the udev rules without restarting:

sudo udevadm control --reload-rules

Build i3-gaps in Docker

Automated way

So, the fully automated way:

git clone git@github.com:mgor/docker-ubuntu-i3-gaps-builder.git
cd docker-ubuntu-i3-gaps-builder/
make

The packages will be available in packages/.

Build environment

First, get the build environment and start it:

git clone git@github.com:mgor/docker-ubuntu-pkg-builder.git
cd docker-ubuntu-pkg-builder
make

Dependencies

Install the needed dependencies:

apt update
apt install libxcb1-dev libxcb-keysyms1-dev \
libpango1.0-dev libxcb-util0-dev libxcb-icccm4-dev \
libyajl-dev libstartup-notification0-dev \
libxcb-randr0-dev libev-dev libxcb-cursor-dev \
libxcb-xinerama0-dev libxcb-xkb-dev libxkbcommon-dev \
libxkbcommon-x11-dev
apt-get build-dep i3

Build

Get i3-gaps from GitHub:

git clone https://www.github.com/Airblader/i3 i3-gaps
cd i3-gaps

If you want to run on the stable branch:

git checkout gaps
git pull

Build the packages:

debuild -i -us -uc -b

If successful, the packages will be in ../. Transfer them to your host and install.
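
On the host, the transferred packages can then be installed along the lines of the following sketch (package names are illustrative; the -f install pulls in any missing dependencies):

```shell
# Install the freshly built debs; dpkg may complain about missing
# dependencies, which the follow-up apt-get call resolves.
sudo dpkg -i i3-wm_*.deb
sudo apt-get -f install
```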

Run production WordPress site in docker for development

I have a couple of WordPress sites that I wanted to create local development environments in docker for; here are some tips on how to get it working.

I use the official MySQL and WordPress docker images. The directory structure is as follows:

dev-env.sh:

update-development-site.sh:

production_dump.sql is a MySQL dump of the production database. Add a "USE `wordpress-site`;" statement at the beginning (the backticks are needed since the database name contains a hyphen) so that the backup is imported into the correct database.
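
Prepending the statement can be scripted; a hypothetical sketch (file and database names are illustrative, and a tiny stand-in dump is created for demonstration):

```shell
# Create a stand-in dump for demonstration purposes.
printf 'CREATE TABLE wp_posts (id INT);\n' > production_dump.sql
# Prepend the USE statement so the import targets the right database.
printf 'USE `wordpress-site`;\n' | cat - production_dump.sql > production_dump.prefixed.sql
# The first line of the prefixed dump is now the USE statement.
head -n 1 production_dump.prefixed.sql
```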