Using F5 Load Balancer as a Kubernetes Ingress

The F5 BigIP can be set up as a native Kubernetes Ingress Controller to integrate exposed services with the flexibility and agility of the F5 platform. Depending on licensing, this also allows for security integration, such as the ASM (Application Security Module), otherwise known as a WAF (Web Application Firewall).

Using the F5 controller allows integration into on-premises and cloud environments, though realistically it will probably be used for on-premises deployments 95% of the time, either on bare metal or on virtualized workloads.

Table of Contents

  1. Cluster Mode vs NodePort Mode
  2. NodePort Mode
  3. Cluster Mode
    1. Cluster Mode - Networking
  4. Using Calico and Flannel (Canal)
  5. Install Calico Policy (optional)
  6. Install Flannel Networking
  7. Setting up VXLAN on the F5
  8. Deploy the F5 “Dummy Node”
  9. Verify VXLAN Tunnels
  10. Deploy the Ingress Controller
  11. Deploy Application
  12. Checking Virtual Servers
  13. Pretty Pictures
  14. DONE

Cluster Mode vs NodePort Mode

The F5 Container Connector or ingress controller has two methods of operation.

  1. NodePort mode
  2. Cluster mode

NodePort Mode

In this mode of operation, this is what the F5 BigIP looks like in the network from a logical perspective.

F5 NodePort Mode

The BigIP sits outside the cluster network and has no visibility into individual pods. This has some disadvantages.

  • Session stickiness (otherwise known as Layer-7 persistence) becomes unpredictable

This is because the Kubernetes Service must be configured as NodePort: the F5 sends traffic to the node and its exposed port, and kube-proxy then does the internal load-balancing.

  • Latency is added to the mix by sending traffic to the node first, then having kube-proxy distribute it to the pods.

If you want to configure NodePort mode, or have limitations that leave you no other choice, follow the instructions in the official F5 documentation; we’re going to concentrate on Cluster Mode in this post.
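
For reference, this is roughly what a Service has to look like in NodePort mode. The name, selector, and ports below are placeholders for illustration, not anything from this post's repo.

apiVersion: v1
kind: Service
metadata:
  name: my-app                # placeholder name
spec:
  type: NodePort
  selector:
    app: my-app               # matches the pods behind the service
  ports:
    - port: 80                # Service (ClusterIP) port
      targetPort: 8080        # container port
      nodePort: 30080         # exposed on every node; the F5 pool members point at this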

Cluster Mode

In cluster mode, the BigIP becomes part of the Kubernetes cluster network, meaning it has direct access to Pod networking.

F5 Cluster Mode

This is the recommended integration, as it offers several advantages and more predictable behavior.

  • Any Service type can be used
    • NodePort
    • ClusterIP (recommended)
  • Layer-7 persistence behaves as intended
  • BigIP load-balances directly to Pods in the network

Cluster Mode - Networking

In cluster mode, there are two integration options.

  1. VXLAN using Flannel
  2. Layer-3 using BGP and Calico

For this guide I’ll be using Flannel for networking and utilizing VXLAN integration.

Using Calico and Flannel (Canal)

The important piece here is Flannel for networking, not so much Calico; I just use Calico for network policy because I like the integration. This combination is known as Canal.

The default pod CIDR for Kubernetes using flannel is 10.244.0.0/16, and it hasn’t been modified for this deployment. You can manually download the yaml file and update it (I’ve confirmed that works in previous testing), but I left it untouched here to simplify the explanation; when the docs reference the default CIDR, it just avoids confusion.
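
If you do decide to customize it, the CIDR lives in the net-conf.json key of the kube-flannel ConfigMap inside kube-flannel.yml. The fragment below shows the default; change the Network value and keep it in sync with whatever pod CIDR your cluster was initialized with.

# Fragment of the kube-flannel-cfg ConfigMap in kube-flannel.yml
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }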

The instructions below for getting the network set up assume a clean, brand-new cluster. The CNI should be installed and enabled prior to adding any worker nodes.
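
If you happen to be bootstrapping with kubeadm (an assumption on my part; adjust for your installer), the pod CIDR is passed at init time, before the CNI is applied:

# On the master, before installing the CNI (assumes a kubeadm-based install)
kubeadm init --pod-network-cidr=10.244.0.0/16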

Install Calico Policy (optional)

If RBAC is enabled on your cluster (recommended), apply the below:

kubectl apply -f \
https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/rbac.yaml

Create the Canal resources for network policy:

kubectl apply -f \
https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/canal.yaml

This will load Calico on your cluster only for its network policy integration.

RTFM

Install Flannel Networking

Flannel defaults to VXLAN mode, so there’s no need to touch the yaml unless you require customization such as a different Pod CIDR network.

kubectl apply -f \
https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Once you apply the commands, you can verify everything is working correctly. The master should be in Ready state within a few moments.

•100% [I] ➜ kubectl get nodes
NAME            STATUS     ROLES     AGE       VERSION
k8s-master-1a   Ready      master    2d        v1.11.1

Verify the flannel pods (and the canal pods, if you loaded Calico) are running as well.

•100% [I] ➜ kubectl get -n kube-system pods | egrep --color "canal|flannel"
canal-7tjtl                                  3/3       Running   0          2d
kube-flannel-ds-amd64-mtkx6                  1/1       Running   0          2d

You may want to verify CoreDNS pods are also running.

•100% [I] ➜ kubectl get -n kube-system pods | grep core
coredns-78fcdf6894-ghltq                     1/1       Running   0          2d
coredns-78fcdf6894-qrk6j                     1/1       Running   0          2d

Once these steps are completed, follow the installation procedure you’re using to set up the worker nodes. If you’re using minikube, I’m not sure this will work; this environment is set up in an ESXi 6.5 cluster running one master and three worker nodes.
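
If your installer is kubeadm like mine (again, adjust to your own tooling), an easy way to add the workers afterwards is to print the join command on the master and run it on each node:

# On the master: prints a 'kubeadm join ...' command with a fresh token
kubeadm token create --print-join-command
# Run the printed command on each worker node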

Setting up VXLAN on the F5

The next few sections (steps) can really be performed in any order; once learning kicks in, the services will eventually discover each other. However, I found this order to provide the most predictable outcome. Also, I felt like writing it in this order, so deal with it 😜.

Let’s get to it, shall we?

# ssh to f5
ssh admin@[f5-bigip-address]
cd /
# create partition
create auth partition [name-your-partition] # I named mine k8s-controller

# create vxlan profile
cd k8s-controller
create /net tunnels vxlan fl-vxlan port 8472 flooding-type none

Looking at the commands above, we’ve accomplished three things, two of them being configuration changes.

  1. ssh to F5 device
  2. Create partition for BigIP controller
    1. this is necessary; the controller should not manage the Common partition
  3. Create vxlan profile
    1. utilize port 8472 (default for flannel)
    2. flooding type none
    3. name the tunnel fl-vxlan
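
If you want a quick sanity check before moving on, tmsh can list back what we just created (the exact output will vary; the point is just that the objects exist):

# verify the partition and the VXLAN profile
list /auth partition k8s-controller
list /net tunnels vxlan fl-vxlan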

Now we run the following commands:

# create VTEP
create /net tunnels tunnel flannel_vxlan key 1 profile fl-vxlan local-address 172.16.30.3

# create self IP inside POD network range (must not be taken by other node)
create /net self 10.244.255.1/16 allow-service none vlan flannel_vxlan

# create floating IP inside this network
create /net self 10.244.255.2/16 allow-service none traffic-group traffic-group-1 vlan flannel_vxlan

  1. Create a VXLAN tunnel endpoint
    1. set the local-address to an IP address from the network that will support the VXLAN overlay
      1. in our case (see diagram above) this is inside the 172.16.30.0/24 subnet
      2. the BigIP device has a self IP of 172.16.30.2 and a floating IP of 172.16.30.3, which we use for the VTEP
    2. set the key to 1, which sets the VNI to 1
      1. this is the default for flannel (it can be changed)
  2. Identify the flannel subnet you want to assign to the BIG-IP system
    1. make sure it doesn’t overlap with a subnet that’s already in use by existing Nodes in the Kubernetes cluster
    2. this subnet will be used to create a dummy node (necessary to communicate with the BigIP)
    3. create a self IP using an address from the subnet you want to assign to the BIG-IP device
    4. the self IP range must fall within the cluster subnet mask
    5. create a floating IP address in the flannel subnet you assigned to the BIG-IP device

For the BigIP’s pod subnet in the VXLAN overlay, I picked the last /24 available in 10.244.0.0/16, as Kubernetes tends to assign node subnets in order.
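
A quick way to confirm the tunnel and self IPs look right before moving on (run from the k8s-controller partition):

list /net tunnels tunnel flannel_vxlan
list /net self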

Done here? Good! Let’s move along now…

Deploy the F5 “Dummy Node”

Next we need to deploy a dummy node into our Kubernetes cluster. The F5 docs are vague on why, but this is how the BigIP becomes part of the cluster and inserts itself into the VXLAN overlay.

This dummy node will always have a state of NotReady, which is normal. It takes about 3 minutes (counted the non-sciency way).

To deploy it, apply this command:

kubectl apply -f \
https://raw.githubusercontent.com/IPyandy/k8s-f5-ingress-samples/master/cluster-mode/f5-k8s-node/00-f5-node.yaml
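
While that takes effect, you can watch for the new node to register (Ctrl-C to stop watching):

kubectl get nodes --watch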

If you’re the curious type, this is what the yaml looks like.

apiVersion: v1
kind: Node
metadata:
  name: bigip
  annotations:
    # Provide the MAC address of the BIG-IP VXLAN tunnel
    flannel.alpha.coreos.com/backend-data: '{"VtepMAC":"00:0c:29:52:7e:67"}'
    flannel.alpha.coreos.com/backend-type: vxlan
    flannel.alpha.coreos.com/kube-subnet-manager: 'true'
    # Provide the IP address you assigned as the BIG-IP VTEP
    flannel.alpha.coreos.com/public-ip: 172.16.30.3
spec:
  # Define the flannel subnet you want to assign to the BIG-IP device.
  # Be sure this subnet does not collide with any other Nodes' subnets.
  podCIDR: 10.244.255.0/24

The VtepMAC is found by running the command below in the appropriate partition, the one where you created the tunnel above.

show net tunnels tunnel flannel_vxlan all-properties

-------------------------------------------------
Net::Tunnel: flannel_vxlan
-------------------------------------------------
MAC Address                      00:0c:29:52:7e:67
Interface Name                      flannel_vxlan

Incoming Discard Packets                        0
Incoming Error Packets                          0
Incoming Unknown Proto Packets                  0
Outgoing Discard Packets                        0
Outgoing Error Packets                          0
HC Incoming Octets                           4.8G
HC Incoming Unicast Packets                  7.0M
HC Incoming Multicast Packets                   0
HC Incoming Broadcast Packets                   0
HC Outgoing Octets                           1.1G
HC Outgoing Unicast Packets                  7.0M
HC Outgoing Multicast Packets                   0
HC Outgoing Broadcast Packets                   0

The kube-subnet-manager annotation marks the node’s subnet as managed by the flannel kube subnet manager. The next annotation, flannel.alpha.coreos.com/public-ip, is the IP address of the VTEP endpoint we assigned to the F5 BigIP system.

The podCIDR spec option rides on the same flannel_vxlan tunnel we configured above, but here we give the node a /24 instead of the full /16. The self IP we created earlier used the /16, which gives the BigIP a route to the entire pod range; the podCIDR says which slice of that range this node owns.

The documentation states that all ingress “service” addresses must be part of that podCIDR range, though I have found that external IPs also work. For example, in our diagram, the 172.30.0.0/24 range is what I use for F5 virtual-servers, and using a service address in that range works just fine.

The remaining annotation, backend-type, is left as is; it specifies the overlay type, which in our case is VXLAN.

Once we apply the yaml file with the kubectl command, we’ll see the node show up in our node list.

github.com/IPyandy/Kubernetes on  master
•100% [I] ➜ kubectl get nodes
NAME            STATUS     ROLES     AGE       VERSION
bigip           NotReady   <none>    6d
k8s-master-1a   Ready      master    6d        v1.11.1
k8s-node-1a     Ready      <none>    6d        v1.11.1
k8s-node-2a     Ready      <none>    6d        v1.11.1
k8s-node-3a     Ready      <none>    6d        v1.11.1

Once the node is registered, this is what the environment looks like from a logical perspective.

F5 Cluster Mode with Node

Verify VXLAN Tunnels

One thing to note, at least on my setup, is that dynamic learning of the FDB table on the F5 device does not work. I have a feeling this is due to limitations in the VMware networking stack. If you try to show the forwarding table for the tunnel endpoints at this stage, you’ll get an empty table like the one below.

root@(f5-bigip-ve)(cfg-sync Standalone)(Active)(/k8s-controller)(tmos)# show net fdb tunnel flannel_vxlan flannel_vxlan

----------------------------------------------------------------
Net::FDB
Tunnel         Mac Address        Member                 Dynamic
----------------------------------------------------------------

This will prevent the F5 BigIP from forwarding traffic, and you’ll spend more time than you need troubleshooting it.

Don’t be like me; just enter the static FDB entries below if you’re working in your own VMware environment.

To fix this, simply add static entries for the node VTEP endpoints (with more time I’ll test on bare metal) on the BigIP device, in the correct partition, which for me is k8s-controller.

modify net fdb tunnel flannel_vxlan records add { 46:03:26:d0:df:b8 { endpoint 172.16.30.11 } }
modify net fdb tunnel flannel_vxlan records add { d2:09:ce:e3:e4:75 { endpoint 172.16.30.21 } }
modify net fdb tunnel flannel_vxlan records add { ee:ef:e3:c5:c1:d3 { endpoint 172.16.30.22 } }
modify net fdb tunnel flannel_vxlan records add { be:4d:65:46:78:9e { endpoint 172.16.30.23 } }
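
After adding the records, re-run the earlier show command; the FDB table should now list one entry per node VTEP instead of coming back empty:

show net fdb tunnel flannel_vxlan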

To find the MAC addresses of the VTEPs from the node’s perspective, simply run any of the three commands below.

  • ip -4 addr show
  • ip -4 -d link show flannel.1
  • bridge fdb show dev flannel.1

Here’s an example from one of the nodes running the bridge fdb show command; it’s the most efficient of the three.

yandy@k8s-node-1a:~$ bridge fdb show dev flannel.1
be:4d:65:46:78:9e dst 172.16.30.23 self permanent
00:0c:29:52:7e:67 dst 172.16.30.3 self permanent
ee:ef:e3:c5:c1:d3 dst 172.16.30.22 self permanent
46:03:26:d0:df:b8 dst 172.16.30.11 self permanent

Then run ip -4 -d link show flannel.1 to find the local MAC address of this node’s own VTEP.

yandy@k8s-node-1a:~$ ip -4 -d link show flannel.1
506: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default
    link/ether d2:09:ce:e3:e4:75 brd ff:ff:ff:ff:ff:ff promiscuity 0
    vxlan id 1 local 172.16.30.21 dev ens160 srcport 0 0 dstport 8472 nolearning ageing 300 addrgenmode eui64

In the link/ether line you’ll find the VTEP MAC for this node, and on the line below it the local IP address associated with the VTEP. This step is necessary because the bridge fdb output only lists the remote VTEP entries, not the node’s own MAC.
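
If you’d rather not ssh to every node, flannel publishes the same information (VTEP MAC and public IP) as node annotations, which is exactly what we mimicked for the bigip dummy node. A small loop like this pulls them out:

# Print the flannel VTEP MAC and public IP annotations for every node
for n in $(kubectl get nodes -o name); do
  echo "== $n"
  kubectl get "$n" -o yaml | grep -E "flannel.alpha.coreos.com/(backend-data|public-ip)"
done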

Deploy the Ingress Controller

Now we’re ready to deploy the ingress controller (finally, right?! I know). There are four things we need to do.

  1. Deploy our secret (access to configure the BigIP)
  2. Create a service account
  3. Create a ClusterRole and ClusterRoleBinding
  4. Create our deployment
    1. which has to have exactly one replica
    2. more will cause issues

Now let’s create the secret we need

kubectl apply -f \
https://raw.githubusercontent.com/IPyandy/k8s-f5-ingress-samples/master/cluster-mode/01-secret.yaml

The contents are very straightforward.

apiVersion: v1
data:
  password: dGgxczFzQGxAYg==
  username: azhzLWFkbWlu
kind: Secret
metadata:
  name: bigip-ctlr-secret
  namespace: kube-system

If you’re curious as to what the username and password actually are, just run:

# decode password
echo "dGgxczFzQGxAYg==" | base64 --decode
# decode username
echo "azhzLWFkbWlu" | base64 --decode

I don’t really care if you know; it’s a lab password and has already been changed. But please don’t do this in production: create real passwords and don’t put them up in public GitHub repos. Okay?

Don’t be that guy or gal.
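
If you’d rather not hand-craft base64 in a yaml at all, kubectl can build the same Secret from literals (swap in your own credentials, of course):

kubectl create secret generic bigip-ctlr-secret \
  --namespace kube-system \
  --from-literal=username=<your-bigip-user> \
  --from-literal=password=<your-bigip-password>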

The next few steps we’re going to apply blindly; if you want to know what’s in them, check out my repository on GitHub or the official docs.

kubectl apply -f \
https://raw.githubusercontent.com/IPyandy/k8s-f5-ingress-samples/master/cluster-mode/02-serviceacct.yaml
kubectl apply -f \
https://raw.githubusercontent.com/IPyandy/k8s-f5-ingress-samples/master/cluster-mode/03-cluster-role.yaml
kubectl apply -f \
https://raw.githubusercontent.com/IPyandy/k8s-f5-ingress-samples/master/cluster-mode/04-deployment.yaml

So I lied a bit: we’ll need to dissect the deployment yaml. I know, I know, too bad.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: k8s-bigip-ctlr-deployment
  namespace: kube-system
spec:
  # DO NOT INCREASE REPLICA COUNT
  replicas: 1
  template:
    metadata:
      name: k8s-bigip-ctlr
      labels:
        app: k8s-bigip-ctlr
    spec:
      # Name of the Service Account bound to a Cluster Role with the required
      # permissions
      serviceAccountName: bigip-ctlr
      containers:
        - name: k8s-bigip-ctlr
          image: 'f5networks/k8s-bigip-ctlr'
          env:
            - name: BIGIP_USERNAME
              valueFrom:
                secretKeyRef:
                  # Replace with the name of the Secret containing your login
                  # credentials
                  name: bigip-ctlr-secret
                  key: username
            - name: BIGIP_PASSWORD
              valueFrom:
                secretKeyRef:
                  # Replace with the name of the Secret containing your login
                  # credentials
                  name: bigip-ctlr-secret
                  key: password
          command: ['/app/bin/k8s-bigip-ctlr']
          args: [
              # See the k8s-bigip-ctlr documentation for information about
              # all config options
              # https://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/latest
              '--bigip-username=$(BIGIP_USERNAME)',
              '--bigip-password=$(BIGIP_PASSWORD)',
              '--bigip-url=172.16.30.3', # CHANGE THIS TO YOUR OWN
              '--bigip-partition=k8s-controller',
              '--pool-member-type=cluster',
              '--flannel-name=flannel_vxlan',
              '--namespace=default',
            ]
      imagePullSecrets:
        # Secret that gives access to a private docker registry
        # - name: f5-docker-images
        # Secret containing the BIG-IP system login credentials
        - name: bigip-ctlr-secret

In the yaml we reference our secret to supply the arguments to the container configuration. If you change the secret name, make sure it matches in the reference. If you’re changing names, I’m assuming you understand how this works; this is not a primer on Kubernetes.

Also make sure the --bigip-partition matches the one you created and the --bigip-url matches your configuration.
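
Once everything is applied, it’s worth confirming the controller actually came up and can reach the BigIP; the logs are the first place to look if the BigIP never gets configured.

kubectl -n kube-system get deployment k8s-bigip-ctlr-deployment
kubectl -n kube-system logs -f deployment/k8s-bigip-ctlr-deployment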

Deploy Application

Wow, that was a lot of work to get a simple ingress controller working! I have three words for you: automate the things.

Let’s deploy our application. Because this has been long and you’re probably tired (not me though), I’m just going to use an off-the-shelf Ghost container. No, not this 👻 type of ghost, the blogging platform.

Keep in mind, this is not production ready; it isn’t using any type of database for persistence. There are plenty of articles on making this production ready, LMGTFY.

kubectl apply -f \
https://raw.githubusercontent.com/IPyandy/k8s-f5-ingress-samples/master/cluster-mode/example-ingress/02-ghost.yaml

There are a couple of variables you may want to change, as your environment probably looks nothing like mine. All of these are within the Ingress declaration; the rest can stay the same as it’s just for testing.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ghost-f5
  annotations:
    # See the k8s-bigip-ctlr documentation for information about
    # all Ingress Annotations
    # https://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/latest/#supported-ingress-annotations
    virtual-server.f5.com/ip: '172.16.0.50'
    virtual-server.f5.com/http-port: '80'
    virtual-server.f5.com/partition: 'k8s-controller'
    virtual-server.f5.com/balance: 'least-connections-node'
    kubernetes.io/ingress.class: 'f5'
    # Annotations below are optional
    #  virtual-server.f5.com/balance:
    #  virtual-server.f5.com/http-port:
    #  virtual-server.f5.com/https-port:
    #  ingress.kubernetes.io/allow-http:
    #  ingress.kubernetes.io/ssl-redirect:
    virtual-server.f5.com/health: |
      [
        {
          "path":     "your-domain.here.come/",
          "send":     "GET /\\r\\n",
          "interval": 5,
          "timeout":  10
        }
      ]
spec:
  rules:
    - host: your-domain.here.com # CHANGE ME!!!
      http:
        paths:
          - path: /
            backend:
              serviceName: ghost-f5
              servicePort: 80

  1. Change the virtual-server.f5.com/health annotation path to match your test domain
  2. Change your - host: ingress spec rule to match the same domain
  3. The virtual-server.f5.com/ip also needs to be changed to match the DNS record for the same domain

  • Check that the pods, service and ingress were created

github.com/IPyandy/Kubernetes on  master
•100% [I] ➜ kubectl get pods,svc,ingress -o wide
NAME                            READY     STATUS    RESTARTS   AGE       IP             NODE
pod/ghost-f5-5c9ffc66c7-fjhqq   2/2       Running   0          40m       10.244.1.124   k8s-node-1a
pod/ghost-f5-5c9ffc66c7-kfhpg   2/2       Running   0          40m       10.244.3.105   k8s-node-3a
pod/ghost-f5-5c9ffc66c7-vdgv8   2/2       Running   0          40m       10.244.2.119   k8s-node-2a

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE       SELECTOR
service/ghost-f5     ClusterIP   10.107.171.212   <none>        80/TCP    40m       run=ghost-f5
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   6d        <none>

NAME                          HOSTS                  ADDRESS       PORTS     AGE
ingress.extensions/ghost-f5   ghost-f5.ipyandy.com   172.16.0.50   80        40m

Normally these pods only have one container; the reason you see 2/2 under READY is that I have Istio installed with sidecar injection enabled. Don’t worry about it too much, that’s another post or 20 in itself.

That’s it; the rest can be left alone, and you can apply the yaml.

Open a web browser and visit the page you set up, or use curl to test.

github.com/IPyandy/Kubernetes on  master
•100% [I] ➜ curl -o /dev/null -s -w "%{http_code}\n" http://ghost-f5.ipyandy.com
200

If you get a status code of 200, you know it’s working. The Ghost containers take a little while to become ready, so give it about 2 non-sciency minutes.
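
If DNS for your test domain isn’t in place yet, you can hit the virtual server address directly and supply the Host header by hand (adjust the IP and hostname to your own setup):

curl -o /dev/null -s -w "%{http_code}\n" \
  -H "Host: ghost-f5.ipyandy.com" http://172.16.0.50/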

Checking Virtual Servers

You can also verify that the virtual servers, pools, and nodes were created on the BigIP. This can be done via the GUI (not me) or the CLI; I’ll show the CLI since it’s just quicker for this.

  • Verify the virtual server creation

root@(f5-bigip-ve)(cfg-sync Standalone)(Active)(/k8s-controller)(tmos)# show ltm virtual ingress_172-16-0-50_80

------------------------------------------------------------------
Ltm::Virtual Server: ingress_172-16-0-50_80
------------------------------------------------------------------
Status
  Availability     : unknown
  State            : enabled
  Reason           : The children pool member(s) either don't have service checking enabled, or service check results are not available yet
  CMP              : enabled
  CMP Mode         : all-cpus
  Destination      : 172.16.0.50:80

Traffic                             ClientSide  Ephemeral  General
  Bits In                                55.6K          0        -
  Bits Out                              309.6K          0        -
  Packets In                                71          0        -
  Packets Out                               66          0        -
  Current Connections                        0          0        -
  Maximum Connections                        4          0        -
  Total Connections                          5          0        -
  Evicted Connections                        0          0        -
  Slow Connections Killed                    0          0        -
  Min Conn Duration/msec                     -          -    10.0K
  Max Conn Duration/msec                     -          -    50.1K
  Mean Conn Duration/msec                    -          -    35.7K
  Total Requests                             -          -        8

...
  some output removed
...

  • Things to look for: the Destination needs to match the IP given to the Ingress
  • The State must be enabled
  • There’s no health check on the virtual server itself, so it’s OK for Availability to be unknown

  • Verify the nodes in the pool

root@(f5-bigip-ve)(cfg-sync Standalone)(Active)(/k8s-controller)(tmos)# show ltm node

------------------------------------------
Ltm::Node: 10.244.3.105%0 (10.244.3.105)
------------------------------------------
Status
  Availability   : unknown
  State          : enabled
  Reason         : Node address does not have service checking enabled

...
  some output removed
...

------------------------------------------
Ltm::Node: 10.244.1.124%0 (10.244.1.124)
------------------------------------------
Status
  Availability   : unknown
  State          : enabled
  Reason         : Node address does not have service checking enabled

...
  some output removed
...

------------------------------------------
Ltm::Node: 10.244.2.119%0 (10.244.2.119)
------------------------------------------
Status
  Availability   : unknown
  State          : enabled
  Reason         : Node address does not have service checking enabled

...
  some output removed
...

This is much the same; make sure the number of nodes matches the number of pods in the deployment. There’s no health check on the node itself, which is the same reason Availability shows as unknown.

  • Check the pool

root@(f5-bigip-ve)(cfg-sync Standalone)(Active)(/k8s-controller)(tmos)# show ltm pool ingress_default_ghost-f5

---------------------------------------------------------------------
Ltm::Pool: ingress_default_ghost-f5
---------------------------------------------------------------------
Status
  Availability : available
  State        : enabled
  Reason       : The pool is available
  Monitor      : ingress_default_ghost-f5_0_http
  Minimum Active Members : 0
  Current Active Members : 3
       Available Members : 3
       Total Members : 3
          Total Requests : 8
        Current Sessions : 0

Traffic                                  ServerSide
  Bits In                                     48.7K
  Bits Out                                   322.1K
  Packets In                                     57
  Packets Out                                    66
  Current Connections                             0
  Maximum Connections                             4
  Total Connections                               4

This is where the health check happens, and as you can see, Availability displays the correct state of available. If anything else shows up, such as unavailable, make sure the health check parameters in the Ingress declaration above match your setup.

As you can see I have tons of traffic to this ghost thing…🧐

Pretty Pictures

For those that like the GUI, here are some pretty pictures of the above information.

  • Virtual Servers

Virtual Servers

  • Nodes

Nodes

  • Pool

Pool

DONE

That’s it, we’re done…

What, were you expecting more? This wasn’t long enough for ya? Come back for some more topics.

I’m trying to put up a post once a week; sometimes that becomes hard, but I keep trying, as it’s my own way of sinking topics into my small(ish) brain.

Follow me below, any one of them, though I’m most active on Twitter and GitHub.
