Posts on Duffie Cooley
https://mauilion.dev/posts/
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

Accessing Local Data from inside Kind!
https://mauilion.dev/posts/kind-pvc-localdata/ (Tue, 12 May 2020 21:09:00 -0700)

Following on from the recent kind pvc post. In this post we will explore how to bring up a kind cluster and use it to access data that you have locally on your machine via Persistent Volume Claims.

This gives us the ability to model pretty interesting deployments of applications that require access to a data pool!

Let’s get to it!

Summary

For this article I am going to use the text file of a book so that we can do some simple word counting.

For our book we are going to use The Project Gutenberg EBook of Pride and Prejudice, by Jane Austen

We are going to create a multi node kind cluster and access that txt file from pods running in our cluster!

Let’s make a directory locally that we will use to store our data

$ mkdir -p data/pride-and-prejudice
$ cd data/pride-and-prejudice/
$ curl -LO https://www.gutenberg.org/files/1342/1342-0.txt
$ wc -w 1342-0.txt
124707 1342-0.txt

Now for a kind config that mounts our data into our worker nodes!

kind-data.yaml

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
  extraMounts:
  - hostPath: ./data
    containerPath: /tmp/data
- role: worker
  extraMounts:
  - hostPath: ./data
    containerPath: /tmp/data

Let’s bring up the cluster!
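Assuming the config above is saved as kind-data.yaml, that's just (note that the relative ./data hostPath is resolved from the directory you run this in, so run it from the directory that contains data/):

```shell
kind create cluster --config kind-data.yaml
```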

Access Models

There are a couple of different ways we can provide access to this data! In Kubernetes we have the ability to configure a pod with access to a hostPath volume:

$ kubectl explain pod.spec.volumes.hostpath
KIND:     Pod
VERSION:  v1

RESOURCE: hostPath <Object>

DESCRIPTION:
     HostPath represents a pre-existing file or directory on the host machine
     that is directly exposed to the container. This is generally used for
     system agents or other privileged things that are allowed to see the host
     machine. Most containers will NOT need this. More info:
     https://kubernetes.io/docs/concepts/storage/volumes#hostpath

     Represents a host path mapped into a pod. Host path volumes do not support
     ownership management or SELinux relabeling.

FIELDS:
   path	<string> -required-
     Path of the directory on the host. If the path is a symlink, it will follow
     the link to the real path. More info:
     https://kubernetes.io/docs/concepts/storage/volumes#hostpath

   type	<string>
     Type for HostPath Volume Defaults to "" More info:
     https://kubernetes.io/docs/concepts/storage/volumes#hostpath

For LOTS of good reasons this pattern is not a good one. Allowing hostPath as a volume for pods amounts to giving complete access to the underlying node.

A malicious or curious user of the cluster could mount the /var/run/docker.sock into their pod and have the ability to completely take over the underlying node. Since most nodes host workloads from many different applications this can compromise the security of your cluster pretty significantly!

All that said we will demonstrate how this works.

The other model is to provide access to the underlying hostPath as a defined persistent volume. This is a better approach because a pv has to be defined at the cluster level, which requires elevated permissions.

Quick reminder here that persistent volumes are defined at cluster scope but persistent volume claims are namespaced!

If you are ever wondering what resources are namespaced and what aren’t check this out!
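You can also ask the API server directly. For example, to see that persistentvolumes are cluster-scoped while persistentvolumeclaims are namespaced:

```shell
kubectl api-resources --namespaced=false | grep persistentvolumes
kubectl api-resources --namespaced=true | grep persistentvolumeclaims
```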

So TL;DR do this with Persistent Volumes not with hostPath!

The Setup!

I assume that you have already set up kind and all that comes with that.

I’ve made all the resources used in the following demonstrations available here

You can fetch them with

git clone https://gist.github.com/mauilion/c40b161822598e5b1720d3b34487fb82 pvc-books

And follow along!

hostPath

In this demo we will:

  • configure a deployment to use hostPath
  • bring up a pod and play with the data!
  • show why hostpath is crazy town!
  • cleanup
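As a sketch of what the hostPath demo deploys (names and labels here are illustrative, not the exact manifests from the gist), a deployment mounting the node's /tmp/data directly might look like:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: books-hostpath   # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: books-hostpath
  template:
    metadata:
      labels:
        app: books-hostpath
    spec:
      containers:
      - name: shell
        image: busybox
        command: ["sleep", "3600"]
        volumeMounts:
        - name: books
          mountPath: /books
      volumes:
      - name: books
        hostPath:
          # /tmp/data is where our kind config mounted ./data on the worker nodes
          path: /tmp/data/pride-and-prejudice
          type: Directory
```

Once the pod is up you can exec into it and count words in /books/1342-0.txt. And because any pod author can point hostPath at any path on the node, this is exactly the problem described above.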

Persistent Volumes

In this demo we will:

  • define a Persistent Volume
  • configure a deployment and a persistent volume claim
  • bring up the deployment and play with the data!
  • cleanup
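A sketch of the Persistent Volume flavor of the same thing (names, sizes, and the manual storageClassName are illustrative, not the exact gist contents):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: books-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual   # keeps the default dynamic provisioner out of the way
  hostPath:
    path: /tmp/data/pride-and-prejudice
    type: Directory
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: books-pvc
spec:
  accessModes:
  - ReadOnlyMany
  storageClassName: manual
  resources:
    requests:
      storage: 1Gi
```

Only someone who can create cluster-scoped resources can define the pv; the consumer just claims it with a namespaced pvc.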

Persistent Volume Tricks!

Ever wondered how to ensure that a specific Persistent Volume will connect to a specific Persistent Volume Claim?

One of the most foolproof ways is to populate the claimRef with information that indicates where the pvc will be created.

We do this in our example pv.yaml

This way if you have multiple pvs you are “restoring” or “loading into a cluster” you can have some control over which pvc will attach to which pv.
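In sketch form (namespace and names are illustrative), pre-filling spec.claimRef means the pv will only bind to a claim with that exact name in that exact namespace:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: books-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadOnlyMany
  storageClassName: manual
  claimRef:
    kind: PersistentVolumeClaim
    namespace: default    # the namespace where the pvc will be created
    name: books-pvc       # the name the pvc must have
  hostPath:
    path: /tmp/data/pride-and-prejudice
```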

Thanks!

In Closing

Giving a consumer access to a hostPath via a Persistent Volume is a much more sane way to provide that access!

  • They can’t arbitrarily change the path to something else.
  • Only someone with cluster level permission can define a Persistent Volume

Thanks for checking this out! I hope that it was helpful. If you have questions or ideas about something you’d like to see a post on hit me up on twitter!

Kind Persistent Volumes
https://mauilion.dev/posts/kind-pvc/ (Sun, 10 May 2020 14:50:57 -0700)

Hey Frens! This week we are exploring portable persistent volumes in kind! This is a pretty neat and funky trick!

Introduction

This article is going to explore three different ways to expose persistent volumes with kind

Use Cases

Assuming we are using a local kind cluster.

  1. default storage class: I want there to be a built in storage class so that I can deploy applications that request persistent volume claims.

  2. pod restart: If my pod restarts I want that pod to be scheduled such that the persistent volume claim is available to it. This ensures that if my pod has to restart it will always come back with access to the same data.

  3. restore volumes: I want to be able to bring up a kind cluster and regain access to a previously provisioned persistent volume claim.

  4. volume mobility: I want to be able to schedule my pod to multiple nodes and have it access the same persistent volume claim. This requires that the persistent volume be made available to all nodes.

The built in storage provider

Kind makes use of Rancher's local-path persistent storage solution.

With this provider we can solve for the first two use cases: default storage class and pod restart.

This solution is registered as the default storageclass on your kind cluster. You can see this by looking at:

kubectl get storageclass

This solution relies on a deployment of some resources in the local-path-storage namespace.

Now, the way this storage solution works: when a pvc is created, the persistent volume will be dynamically created on the node that the pod is scheduled to. As part of the provisioning, the persistent volume has the following appended to it.

Spec: v1.PersistentVolumeSpec{
	PersistentVolumeReclaimPolicy: *opts.StorageClass.ReclaimPolicy,
	AccessModes:                   pvc.Spec.AccessModes,
	VolumeMode:                    &fs,
	Capacity: v1.ResourceList{
		v1.ResourceName(v1.ResourceStorage): pvc.Spec.Resources.Requests[v1.ResourceName(v1.ResourceStorage)],
	},
	PersistentVolumeSource: v1.PersistentVolumeSource{
		HostPath: &v1.HostPathVolumeSource{
			Path: path,
			Type: &hostPathType,
		},
	},
	NodeAffinity: &v1.VolumeNodeAffinity{
		Required: &v1.NodeSelector{
			NodeSelectorTerms: []v1.NodeSelectorTerm{
				{
					MatchExpressions: []v1.NodeSelectorRequirement{
						{
							Key:      KeyNode,
							Operator: v1.NodeSelectorOpIn,
							Values: []string{
								node.Name,
							},
						},
					},
				},
			},
		},
	},
},

source

This means that in the case of pod failure or restart the pod will only be scheduled to the node where the persistent volume was allocated. If that node is not available then the pod will not schedule.

For most use cases in Kind this solution will work great!

Let’s take a look at how this works in practice.

In this demonstration we will:

  • create a multi node kind cluster
  • schedule a pod with a pvc
  • evict the pod from the node it was scheduled to
  • see if the pod is rescheduled.
  • allow the pod to be scheduled on the original node.
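The eviction steps above can be sketched with kubectl cordon/drain (the node name here is the kind default for the first worker; substitute whichever node your pod landed on):

```shell
# Find the node the pod landed on
kubectl get pod -o wide

# Evict everything from that node; the pvc's node affinity pins the
# pod there, so the replacement pod will sit in Pending
kubectl drain kind-worker --ignore-daemonsets

# Let the node take pods again; the pod can now reschedule to it
kubectl uncordon kind-worker
```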

What about “restore volumes” use case?

To support restoring volumes from previous kind cluster we need to do a couple of things. We need to mount the directory that the storage provider will use to create persistent volumes so that we have the data to restore. We also need to backup the persistent volume resources so that we can reuse them on restart!

The local-path-provisioner is configured via a configmap in the local-path-storage namespace. It looks like this!

$ kubectl describe configmaps -n local-path-storage local-path-config 

Name:         local-path-config
Namespace:    local-path-storage
Labels:       <none>
Annotations:  
Data
====
config.json:
----
{
        "nodePathMap":[
        {
                "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
                "paths":["/var/local-path-provisioner"]
        }
        ]
}
Events:  <none>

This configuration means that on each node in the cluster the provisioner will use the /var/local-path-provisioner directory to provision new persistent volumes!

Let’s check that out.

In this demonstration we will:

  • bring up a multi node kind cluster with /var/local-path-provisioner mounted from the host
  • apply our sample pvc-test.yaml and create a deployment and pvc.
  • show that the persistent volume is in our shared directory
  • backup the persistent volume configuration
  • modify the persistent volume configuration
  • delete and recreate the kind cluster
  • restore the persistent volume configuration
  • redeploy the app and the pvc and show that the data has been restored.
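A sketch of that kind config, mirroring the extraMounts pattern from the previous post (the ./local-path-provisioner hostPath name is my own; the exact file is in the gist):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
  extraMounts:
  # back the provisioner's directory with a host directory so the
  # data survives deleting the cluster
  - hostPath: ./local-path-provisioner
    containerPath: /var/local-path-provisioner
- role: worker
  extraMounts:
  - hostPath: ./local-path-provisioner
    containerPath: /var/local-path-provisioner
```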

The important bit there is that we needed to modify the old persistent volume manifest to change the reclaim policy to Retain; otherwise, when we apply it, it will be immediately deleted.

We also kept the claim and node affinity information in the manifest.
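In sketch form, the edited pv manifest flips the reclaim policy and keeps the claimRef and nodeAffinity stanzas (the names, path, and node here are illustrative; keep the values from your own backup):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-abc123                       # keep the generated name from the backup
spec:
  persistentVolumeReclaimPolicy: Retain  # was Delete; Delete would remove it on apply
  storageClassName: standard
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /var/local-path-provisioner/pvc-abc123_default_data
  claimRef:                              # kept: binds this pv to the recreated pvc
    namespace: default
    name: data
  nodeAffinity:                          # kept: pins the pv to its original node
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - kind-worker
```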

One of the things we have not addressed is making sure the workload detaches from the storage before deleting the cluster! In some cases your data might be corrupted if you didn't safely shut the app down before deleting the cluster!

Use Case “Volume Mobility”

For this we are going to use a different storage provider! Our intent is still to provide dynamic provisioning of persistent volumes, but without pinning them to a node with node affinity.

Fortunately there is an example implementation in the sigs.k8s.io repo! You can check it out here

For us to use this we need to build it and host it somewhere our kind cluster can access it. We also need a manifest that will deploy and configure it.

I’ve already built and pushed the container to mauilion/hostpath-provisioner:dev

The manifest I built for this example is below

Now to use this we are going to modify our kind cluster to override the shared mount and the “default storageClass” implementation that kind deploys.

Here is a look at our new kind config

Note that the mount path has changed and we are overriding the “/kind/manifests/default-storage.yaml” file on the first control-plane node. We are doing that because by default kind will apply that manifest to configure storage for the cluster.
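A sketch of that config (the local file names and the /var/hostpath-provisioner mount path are my own assumptions; the real file is in the gist):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  # replace the manifest kind applies at startup with our own
  # storage class and hostpath-provisioner deployment
  - hostPath: ./default-storage.yaml
    containerPath: /kind/manifests/default-storage.yaml
- role: worker
  extraMounts:
  # a directory shared by all workers, so any node can serve any pv
  - hostPath: ./shared
    containerPath: /var/hostpath-provisioner
- role: worker
  extraMounts:
  - hostPath: ./shared
    containerPath: /var/hostpath-provisioner
```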

Let’s see if it works!

We will:

  • fetch our kind-pvc-hostpath.yaml
  • bring up a multi node cluster with shared storage
  • deploy our example deployment and pvc with git.io/pvc-test.yaml
  • populate some data in the pvc.
  • Then we will drain the node and see the pod created on a different node.
  • show the pod rescheduled and that the data is still accessible
  • backup and modify the persistent volume
  • recreate the kind cluster
  • show that we can restore the persistent volume

Resources

I am using kind version v0.8.1

$ kind version
kind v0.8.1 go1.14.2 linux/amd64

I’ve made a simple deployment and pvc to play with. It’s available at git.io/pvc-test.yaml.

All of the other resources including the kind configurations can be found here

A quick way to set things up is to use git to check them all out!

git clone https://gist.github.com/mauilion/1b5727f42d181f36bb934656fa50459a  pvc
Using Kind to test a PR for Kubernetes
https://mauilion.dev/posts/kind-k8s-testing/ (Wed, 08 May 2019 09:37:30 -0700)

Setup

I am looking to validate a set of changes produced by this PR.

https://github.com/kubernetes/kubernetes/pull/77523

In this post I want to show a few things.

  1. set up a go environment.
  2. build kind
  3. check out the k8s.io/kubernetes source
  4. bring up a cluster to reproduce the issue.
  5. build an image based on Andrew's changes
  6. bring up a cluster with that image
  7. validate that the changes have the desired effect.

Prerequisites

There is a pretty handy tool called gimme put out by the travis-ci folks.

This, in my opinion, is the "best" way to set up a go environment.

Read more about it here

For this setup I am going to leverage direnv to configure go.

We need a system that has gimme and direnv installed.

I will refer you to the instructions in the above links to get this stuff setup in your environment :)

Let’s get started with go!

For this next bit I have created a repo you can checkout and make use of.

In the cast you can see us checking out mauilion/k8s-dev. Then we move into the directory and use gimme to configure go via direnv.

We then edit the .envrc file.

I want to take a second to explain why.

unset GOOS;
unset GOARCH;
unset GOPATH;
export GOPATH=${PWD}
export GOROOT='/home/dcooley/.gimme/versions/go1.12.5.linux.amd64';
export PATH="${GOPATH}/bin:/home/dcooley/.gimme/versions/go1.12.5.linux.amd64/bin:${PATH}";
go version >&2;

export GIMME_ENV='/home/dcooley/.gimme/envs/go1.12.5.linux.amd64.env';

I added the GOPATH variable to ensure that when invoked, go considers /home/dcooley/k8s-dev the path for go. This is how we can be sure that things like go get -d k8s.io/kubernetes sigs.k8s.io/kind will pull the src into that directory. This is also important because when kind "discovers" the location of your checkout of k8s.io/kubernetes during the kind build node-image step, it follows the defined GOPATH.

I am also prepending ${GOPATH}/bin to the ${PATH} variable so that when we build kind, the kind binary will be in our path. You can also just put the kind binary anywhere else in your ${PATH}.

Let’s build our kind node-images

Ok next up we need to build our images.

Since we checked out k8s.io/kubernetes into ${GOPATH}/src/k8s.io/kubernetes we can just run kind build node-image --image=mauilion/node:master

This will create an image in my local docker image cache named mauilion/node:master

Once complete we also have to build the image based on the PR that Andrew provided.

In the ticket we can see the source of Andrew's PR: andrewsykim:fix-xlb-from-local

So we need to grab that branch and build another image.

The way I do this so as not to mess up the import paths and such is to move into ${GOPATH}/src/k8s.io/kubernetes and run git remote add andrewsykim git@github.com:andrewsykim/kubernetes

Since Andrew is pushing his code to the fix-xlb-from-local branch of his fork (git@github.com:andrewsykim/kubernetes) of k8s.io/kubernetes.

Once the remote is added I can do a git fetch --all and that will pull down all the known branches from all the remotes.

Then we can switch to Andrew's branch and build a new kind node-image.
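Putting those steps together (the remote and branch names come from the PR above; the image tag is the one used in the next section):

```shell
cd "${GOPATH}/src/k8s.io/kubernetes"

# add Andrew's fork as a remote and pull down its branches
git remote add andrewsykim git@github.com:andrewsykim/kubernetes
git fetch --all

# check out the PR branch (detached HEAD is fine for a build)
git checkout andrewsykim/fix-xlb-from-local

# build a node image from this checkout
kind build node-image --image=mauilion/node:77523
```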

Before we move on, let's talk about what's happening when we run kind build node-image --image=mauilion/node:77523

kind is set up to build this image using a container build of kubernetes. This means that kind will "detect" where your local checkout of k8s.io/kubernetes is via your ${GOPATH}, then mount that into a container and build all the bits.

The node image will contain all the binaries and images needed to run kubernetes as produced from your local checkout of the source.

This is a PRETTY DARN COOL thing!

This means that I can easily setup an environment that will allow me to dig into and validate particular behavior.

Also this is a way to iterate over changes to the codebase.

Alright let’s move on.

Let’s bring up our clusters

In the repo I have the following directory structure:

kind/
├── 77523              # a repo with the bits for the 77523 clusters
│   ├── .envrc         # this .envrc will enable direnv to export our kubeconfig for this cluster when we move into this dir.
│   ├── config         # the kind config for this cluster. Basically 1 control plane node and 2 worker nodes
│   ├── km-config.yaml # the metallb configuration for vip addresses
│   └── test.yaml      # the test.yaml has our statically defined pods and service so that we can test.
└── master
    ├── .envrc
    ├── config
    ├── km-config.yaml
    └── test.yaml

In the cast below you can see that we are moving into the directory for each cluster. If you take a look at the .envrc in the directory you can see we are using direnv to export KUBECONFIG and configure kubectl. This is also where the resources for this cluster are defined. We then run something like:

kind create cluster --config config --name=master --image=mauilion/node:master

This does a few things.

  • It creates a cluster where the nodes will follow a naming convention we use in our statically defined test.yaml
  • It will use the node-image that we created in the build step.
  • It will use the config defined and create a cluster of 1 control plane node and 2 worker nodes.

Now for the fun bit. Let’s validate

This PR is set up to fix a behavior in the way that externalTrafficPolicy: Local works.

The problem:

If we bring up a pod on one of two workers and expose that pod with a service of type LoadBalancer, and that service is configured with externalTrafficPolicy: Local, then a pod configured with hostNetwork: True on the node where the backing pod is not running will fail to connect to the external LB ip. That traffic will be dropped.

The fix:

To fix this behavior Andrew has implemented another iptables rule.

-A KUBE-XLB-ECF5TUORC5E2ZCRD -s 10.8.0.0/14 -m comment --comment "Redirect pods trying to reach external loadbalancer VIP to clusterIP" -j KUBE-SVC-ECF5TUORC5E2ZCRD

This change enables traffic for a svc from a pod or from the host to be redirected to the service defined by kube-proxy.

Our testing setup:

We have brought up 2 clusters:

  • master
  • 77523

Into each of them we have deployed our test.yaml and metallb and a config for metallb.

The test.yaml is a set of pods that are statically defined. By that I mean that each pod is scheduled to a specific node. We do this by configuring nodeName in the pod spec.
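For example, a statically placed netshoot pod looks roughly like this (a sketch, not the exact contents of test.yaml):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: netshoot-77523-worker
spec:
  nodeName: 77523-worker     # bypasses the scheduler; pins the pod to this node
  hostNetwork: true          # shares the node's ip stack
  containers:
  - name: netshoot
    image: nicolaka/netshoot
    command: ["sleep", "3600"]
```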

There are 5 pods that we are deploying: echo-77523-worker2, netshoot-77523-worker, netshoot-77523-worker2, overlay-77523-worker, and overlay-77523-worker2.

The echo pod is using inanimate/echo-server and from the name you can see that this will be deployed on worker2.

The netshoot pods are set with hostNetwork: True. This means that if you exec into the pod you can see the ip stack of the underlying node.

The overlay pods are the same except they are deployed as part of the overlay network and will be given a pod ip.

The netshoot and overlay pods are both using nicolaka/netshoot.

We also define a svc of type LoadBalancer in each of our clusters.

For our master cluster we use 172.17.255.1:8080 and on the 77523 cluster it's 172.17.254.1:8080.

I am using metallb for this; you can read more about metallb here, and more about how I use it with kind here.

Let’s test it!

From our understanding of the problem I expect that if I exec into the netshoot-master-worker pod I will not be able to curl 172.17.255.1:8080

If we try from the 77523 cluster we can see that it does work!

Why does it work now tho?

In the master cluster we can chase down the XLB entry and it looks like this:

:KUBE-XLB-U52O5CQH2XXNVZ54 - [0:0]
-A KUBE-FW-U52O5CQH2XXNVZ54 -m comment --comment "default/echo: loadbalancer IP" -j KUBE-XLB-U52O5CQH2XXNVZ54
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/echo:" -m tcp --dport 30012 -j KUBE-XLB-U52O5CQH2XXNVZ54
-A KUBE-XLB-U52O5CQH2XXNVZ54 -m comment --comment "default/echo: has no local endpoints" -j KUBE-MARK-DROP

in the 77523 cluster:

:KUBE-XLB-U52O5CQH2XXNVZ54 - [0:0]
-A KUBE-FW-U52O5CQH2XXNVZ54 -m comment --comment "default/echo: loadbalancer IP" -j KUBE-XLB-U52O5CQH2XXNVZ54
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/echo:" -m tcp --dport 31972 -j KUBE-XLB-U52O5CQH2XXNVZ54
-A KUBE-XLB-U52O5CQH2XXNVZ54 -m comment --comment "masquerade LOCAL traffic for default/echo: LB IP" -m addrtype --src-type LOCAL -j KUBE-MARK-MASQ
-A KUBE-XLB-U52O5CQH2XXNVZ54 -m comment --comment "route LOCAL traffic for default/echo: LB IP to service chain" -m addrtype --src-type LOCAL -j KUBE-SVC-U52O5CQH2XXNVZ54
-A KUBE-XLB-U52O5CQH2XXNVZ54 -m comment --comment "default/echo: has no local endpoints" -j KUBE-MARK-DROP

The rules that Andrew’s patch adds are:

-A KUBE-XLB-U52O5CQH2XXNVZ54 -m comment --comment "masquerade LOCAL traffic for default/echo: LB IP" -m addrtype --src-type LOCAL -j KUBE-MARK-MASQ
-A KUBE-XLB-U52O5CQH2XXNVZ54 -m comment --comment "route LOCAL traffic for default/echo: LB IP to service chain" -m addrtype --src-type LOCAL -j KUBE-SVC-U52O5CQH2XXNVZ54

And the comments make it pretty clear what’s happening!

Wrap up!

Let’s make sure you wipe out those clusters.

kind delete cluster --name=master
kind delete cluster --name=77523

Also consider running docker system prune --all and docker volume prune every so often to keep your docker cache tidy :)

Shout-out to @a_sykim; you should follow him on twitter, he's great!

Thanks!

Using MetalLB with Kind
https://mauilion.dev/posts/kind-metallb/ (Wed, 17 Apr 2019 10:44:33 -0700)

Preamble:

When using metallb with kind we are going to deploy it in l2-mode. This means that we need to be able to connect to the ip addresses of the node subnet. If you are using linux to host a kind cluster you will not need to do anything special, as the kind node ip addresses are directly attached.

If you are using a Mac this tutorial may not be super useful as the way Docker Desktop works on a Mac doesn’t expose the “docker network” to the underlying host. Due to this restriction I recommend that you make do with kubectl proxy

Problem Statement:

Kubernetes on bare metal doesn’t come with an easy integration for things like services of type LoadBalancer.

This mechanism is used to expose services inside the cluster using an external load balancing mechanism that understands how to route traffic down to the pods defined by that service.

Most implementations of this are relatively naive. They place all of the available nodes behind the load balancer and use tcp port knocking to determine if the node is “healthy” enough to forward traffic to it.

You can define an externalTrafficPolicy on a service of type LoadBalancer and this can help get the behaviour that you want. From the docs:

$ kubectl explain service.spec.externalTrafficPolicy
KIND:     Service
VERSION:  v1

FIELD:    externalTrafficPolicy <string>

DESCRIPTION:
     externalTrafficPolicy denotes if this Service desires to route external
     traffic to node-local or cluster-wide endpoints. "Local" preserves the
     client source IP and avoids a second hop for LoadBalancer and Nodeport type
     services, but risks potentially imbalanced traffic spreading. "Cluster"
     obscures the client source IP and may cause a second hop to another node,
     but should have good overall load-spreading.

And Metallb has a decent write up on what they do when you configure this stuff:

https://metallb.universe.tf/usage/#traffic-policies

With Metallb there is a different set of assumptions.

Metallb can operate in two distinct modes.

A Layer 2 mode that will use vrrp to arp out for the external ip or VIP on the lan. This means that all traffic for the service will be attracted to only one node and dispersed across the pods defined by the service from there.

A bgp mode where, with externalTrafficPolicy: Local, metallb will announce the external ip or VIP from all of the nodes where at least one pod is running.

The bgp mode relies on ECMP to balance traffic back to the pods. ECMP is a great solution for this problem and I HIGHLY recommend you use this model if you can.

That said, I haven't created a bgp router for my kind cluster so we will use the l2-mode for this experiment.

Let’s do this thing!

First let’s bring up a 2 node kind cluster with the following config.

kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
- role: worker

Then we need to see if we can ping the node ip of the nodes themselves.

At this point we need to determine the network that is being used for the node ip pool. Since kind nodes are associated with the docker network named “bridge” we can inspect that directly.

I am using a pretty neat tool called jid here that is a repl for json.

So we can see that there is an allocated network of 172.17.0.0/16 in my case.
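If you'd rather not use a repl, you can pull the subnet out non-interactively with a Go template (this format string is my own sketch, not from the cast):

```shell
# print the first IPAM subnet of the default docker bridge network
docker network inspect bridge -f '{{ (index .IPAM.Config 0).Subnet }}'
# e.g. 172.17.0.0/16
```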

Let’s swipe the last 10 ip addresses from that allocation and use them for the metallb configuration.

Now we are going to deploy a service!

First let’s create a service of type loadbalancer and see what happens before we install metallb.

I am going to use the echo server for this. I prefer the one built by inanimate. Here is the source and image: inanimate/echo-server
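A sketch of that deployment and service (the labels are illustrative; 8080 is the port the echo server listens on by default, as far as I know):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: inanimate/echo-server
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: echo
spec:
  type: LoadBalancer   # nothing in a bare kind cluster implements this yet
  selector:
    app: echo
  ports:
  - port: 8080
    targetPort: 8080
```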

We can see that the EXTERNAL-IP field is pending. This is because there is nothing available in the cluster to manage this type of service.

Now on to the metallb part!

First read the docs https://metallb.universe.tf/installation/

Then we can get started on installing this to our cluster.

We can see that metallb is now installed but we aren’t done yet!

Now we need to add a configuration that will use a few of the unused ip addresses from the node ip pool (172.17.0.0/16).

Now if we look at our existing service we can see that the EXTERNAL-IP is still pending

This is because we haven’t yet applied the config for metallb.

Here is the config:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.17.255.1-172.17.255.250    

You can apply this to your cluster with kubectl apply -f https://git.io/km-config.yaml

Let’s see what happens when we apply this.

We can see the svc gets an ip address immediately.

And we can curl it!

That's all for now! Hit me up on twitter or k8s slack with questions!

Shout-out to Jan Guth for the idea on this post!

Presenting to the San Francisco Kubernetes Meetup about kind!
https://mauilion.dev/posts/kind-demo/ (Wed, 10 Apr 2019 15:28:10 -0700)

On 4/7/2019 I had the opportunity to talk to folks that attended the SF Kubernetes meetup at Anaplan about kind!

It’s a great project and I end up using kind everyday to validate or develop designs for Kubernetes clusters.

The slides that I presented are here: mauilion.github.io/kind-demo and a link to the repository with the deck and the content used to bring up the demo cluster is here: github.com/mauilion/kind-demo

In the talk I dug in a bit about what kind and kubeadm are and what problems they solve.

I also demonstrated creating a 7 node cluster on my laptop live!

Finally, we spent a little time talking about the way that Docker in Docker is being used here.

My laptop is a recent Lenovo x1 carbon running Ubuntu and i3.

When I bring up a kind cluster I can see the docker containers that I start with a simple docker ps

$ docker ps --no-trunc 
CONTAINER ID                                                       IMAGE                  COMMAND                                  CREATED             STATUS              PORTS                                  NAMES
b8f8ef6d2d97836dc66e09fe5e1a4c7e1b7a880c95372b8d4881288238985f22   kindest/node:v1.13.4   "/usr/local/bin/entrypoint /sbin/init"   12 minutes ago      Up 12 minutes       36533/tcp, 127.0.0.1:36533->6443/tcp   kind-external-load-balancer
69daaf381d8a4dbafb1197502446858e9b6e9e950c0b8db1eb1759dc2883f3ec   kindest/node:v1.13.4   "/usr/local/bin/entrypoint /sbin/init"   12 minutes ago      Up 12 minutes       34675/tcp, 127.0.0.1:34675->6443/tcp   kind-control-plane3
9f577280b62052d5caeecd7483e3283f01d3a3c784c4620efca15338cd0cad23   kindest/node:v1.13.4   "/usr/local/bin/entrypoint /sbin/init"   12 minutes ago      Up 12 minutes       38847/tcp, 127.0.0.1:38847->6443/tcp   kind-control-plane
dfcab2e279ffbb2710dbdaa3386814887d081ddd378641777116b3fed131a3b0   kindest/node:v1.13.4   "/usr/local/bin/entrypoint /sbin/init"   12 minutes ago      Up 12 minutes                                              kind-worker
e486393a724079b77b4aaec5de18fd0aea70f9ce0b46bb6d45edb3382bf3cb32   kindest/node:v1.13.4   "/usr/local/bin/entrypoint /sbin/init"   12 minutes ago      Up 12 minutes       35759/tcp, 127.0.0.1:35759->6443/tcp   kind-control-plane2
be76f1f1ba3c365a5058c2f46b555174c1c6b28418844621e31a2e2c548c5e5f   kindest/node:v1.13.4   "/usr/local/bin/entrypoint /sbin/init"   12 minutes ago      Up 12 minutes                                              kind-worker2
5a845004c40b035a198333a7f8c17eec8c3a024db15f484af4b5d7974e4c27db   kindest/node:v1.13.4   "/usr/local/bin/entrypoint /sbin/init"   12 minutes ago      Up 12 minutes                                              kind-worker3

And if I exec into one of the control plane “nodes” and run docker ps:

root@kind-control-plane:/# docker ps
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS               NAMES
0904a715c607        18ee25ef69a8           "kube-controller-man…"   11 minutes ago      Up 11 minutes                           k8s_kube-controller-manager_kube-controller-manager-kind-control-plane_kube-system_0139f650b0ebdfe8039809598eafaed5_1
cce01b13d1be        fd722e321590           "kube-scheduler --ad…"   11 minutes ago      Up 11 minutes                           k8s_kube-scheduler_kube-scheduler-kind-control-plane_kube-system_4b52d75cab61380f07c0c5a69fb371d4_1
adb83f623945        calico/node            "start_runit"            11 minutes ago      Up 11 minutes                           k8s_calico-node_calico-node-bkbjv_kube-system_f3ffe8bb-5be3-11e9-a476-024240bbde2e_0
036e0f373c0b        7fe6f0b71640           "/usr/local/bin/kube…"   12 minutes ago      Up 12 minutes                           k8s_kube-proxy_kube-proxy-vnmbc_kube-system_f4010699-5be3-11e9-a476-024240bbde2e_0
57b9c22fa25a        k8s.gcr.io/pause:3.1   "/pause"                 12 minutes ago      Up 12 minutes                           k8s_POD_calico-node-bkbjv_kube-system_f3ffe8bb-5be3-11e9-a476-024240bbde2e_0
f8ccefbb6faf        k8s.gcr.io/pause:3.1   "/pause"                 12 minutes ago      Up 12 minutes                           k8s_POD_kube-proxy-vnmbc_kube-system_f4010699-5be3-11e9-a476-024240bbde2e_0
3b722fb72dd3        4eb4a1578884           "kube-apiserver --au…"   12 minutes ago      Up 12 minutes                           k8s_kube-apiserver_kube-apiserver-kind-control-plane_kube-system_36fd00068b02bdfc674c44e345a08553_0
37ce90751bb7        3cab8e1b9802           "etcd --advertise-cl…"   12 minutes ago      Up 12 minutes                           k8s_etcd_etcd-kind-control-plane_kube-system_a17306e4c3c6a492df6a1ccea459c458_0
b2dab14dc554        k8s.gcr.io/pause:3.1   "/pause"                 12 minutes ago      Up 12 minutes                           k8s_POD_kube-scheduler-kind-control-plane_kube-system_4b52d75cab61380f07c0c5a69fb371d4_0
aa56021201fb        k8s.gcr.io/pause:3.1   "/pause"                 12 minutes ago      Up 12 minutes                           k8s_POD_kube-controller-manager-kind-control-plane_kube-system_0139f650b0ebdfe8039809598eafaed5_0
71d3e0cb6fe2        k8s.gcr.io/pause:3.1   "/pause"                 12 minutes ago      Up 12 minutes                           k8s_POD_kube-apiserver-kind-control-plane_kube-system_36fd00068b02bdfc674c44e345a08553_0
8a2e80860798        k8s.gcr.io/pause:3.1   "/pause"                 12 minutes ago      Up 12 minutes                           k8s_POD_etcd-kind-control-plane_kube-system_a17306e4c3c6a492df6a1ccea459c458_0

And from the underlying node we can see the processes that are related to the containers:

 2572 ?        Ssl    1:44 /usr/bin/dockerd --live-restore -H fd://
 2655 ?        Ssl    1:40  \_ docker-containerd --config /var/run/docker/containerd/containerd.toml
10669 ?        Sl     0:00  |   \_ docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/9f577280b62052d5caeecd7483e3283f01d3a
10801 ?        Ss     0:00  |   |   \_ /sbin/init
14598 ?        S<s    0:00  |   |       \_ /lib/systemd/systemd-journald
14736 ?        Ssl    2:18  |   |       \_ /usr/bin/dockerd -H fd://
14958 ?        Ssl    0:33  |   |       |   \_ docker-containerd --config /var/run/docker/containerd/containerd.toml
22752 ?        Sl     0:00  |   |       |       \_ docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/8a2e8086079885ea914c5
22816 ?        Ss     0:00  |   |       |       |   \_ /pause
22762 ?        Sl     0:00  |   |       |       \_ docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/71d3e0cb6fe2f988842bb
22852 ?        Ss     0:00  |   |       |       |   \_ /pause
22777 ?        Sl     0:00  |   |       |       \_ docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/aa56021201fb02aa8d855
22846 ?        Ss     0:00  |   |       |       |   \_ /pause
22795 ?        Sl     0:00  |   |       |       \_ docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/b2dab14dc554cdcf40e13
22881 ?        Ss     0:00  |   |       |       |   \_ /pause
23015 ?        Sl     0:03  |   |       |       \_ docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/37ce90751bb7b196243f1
23061 ?        Ssl    4:41  |   |       |       |   \_ etcd --advertise-client-urls=https://172.17.0.6:2379 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir
23066 ?        Sl     0:00  |   |       |       \_ docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/3b722fb72dd30e8b3e07f
23126 ?        Ssl    5:30  |   |       |       |   \_ kube-apiserver --authorization-mode=Node,RBAC --advertise-address=172.17.0.6 --allow-privileged=true --client-ca-file=/etc/kubernetes/p
24764 ?        Sl     0:00  |   |       |       \_ docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/f8ccefbb6faf067876cf4
24830 ?        Ss     0:00  |   |       |       |   \_ /pause
24779 ?        Sl     0:00  |   |       |       \_ docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/57b9c22fa25a83e4c69ca
24819 ?        Ss     0:00  |   |       |       |   \_ /pause
24895 ?        Sl     0:00  |   |       |       \_ docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/036e0f373c0bcac56484c
24921 ?        Ssl    0:18  |   |       |       |   \_ /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=kind-control-plane
26721 ?        Sl     0:04  |   |       |       \_ docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/adb83f623945215c3597a
26746 ?        Ss     0:00  |   |       |       |   \_ /sbin/runsvdir -P /etc/service/enabled
28040 ?        Ss     0:00  |   |       |       |       \_ runsv bird6
28242 ?        S      0:00  |   |       |       |       |   \_ bird6 -R -s /var/run/calico/bird6.ctl -d -c /etc/calico/confd/config/bird6.cfg
28041 ?        Ss     0:00  |   |       |       |       \_ runsv confd
28047 ?        Sl     0:28  |   |       |       |       |   \_ calico-node -confd
28042 ?        Ss     0:00  |   |       |       |       \_ runsv felix
28044 ?        Sl     2:03  |   |       |       |       |   \_ calico-node -felix
28043 ?        Ss     0:00  |   |       |       |       \_ runsv bird
28245 ?        S      0:01  |   |       |       |           \_ bird -R -s /var/run/calico/bird.ctl -d -c /etc/calico/confd/config/bird.cfg
27663 ?        Sl     0:00  |   |       |       \_ docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/cce01b13d1be8c0e434cb
27701 ?        Ssl    1:19  |   |       |       |   \_ kube-scheduler --address=127.0.0.1 --kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=true
27704 ?        Sl     0:00  |   |       |       \_ docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/0904a715c607d662900b1
27744 ?        Ssl    0:04  |   |       |           \_ kube-controller-manager --enable-hostpath-provisioner=true --address=127.0.0.1 --allocate-node-cidrs=true --authentication-kubeconfig=/

This is because, at each layer of abstraction, we are still sharing the same Linux kernel. So when I create containers leveraging something like Docker in Docker, I am still making use of the same resources I would if I ran the docker command from the underlying node.

Put another way: the Docker daemon and all of its dependencies are running as an application inside the Docker container I started. It's not mounting in the Docker socket or anything like that; it's just making use of Docker and the Linux namespaces available to it.
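One way to convince yourself of this (my own illustration, not from the original post; assumes a Linux machine) is to look at the namespace handles the kernel exposes under /proc:

```shell
# Containers get their own namespaces, not their own kernel. Each
# namespace a process belongs to shows up as a handle under /proc:
readlink /proc/self/ns/pid   # e.g. pid:[4026531836]
readlink /proc/self/ns/net   # e.g. net:[4026531992]

# Processes in the same namespace report the same inode number, while a
# containerized process reports different ones. But `uname -r` prints
# the same kernel release from the host, a kind "node", and a pod,
# because there is only ever one kernel underneath.
uname -r
```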

Thanks!

]]>
debugging tools: a preconfigured etcdclient static pod https://mauilion.dev/posts/etcdclient/ Mon, 18 Mar 2019 16:25:23 -0700 https://mauilion.dev/posts/etcdclient/ <p>In this post I am going to discuss <a href="https://git.io/etcdclient.yaml">git.io/etcdclient.yaml</a> and why it&rsquo;s neat!</p> In this post I am going to discuss git.io/etcdclient.yaml and why it’s neat!

When putting together content for a series of blog posts that I am doing around etcd recovery and failure scenarios, I realized that I was repeatedly configuring an etcd client to interact with the etcd cluster that kubeadm stands up.

I wanted to create a static pod that would sit on the same node as the static pod that operates the etcd server so that I can use it to troubleshoot the etcd cluster that kubeadm is bringing up.

git.io/etcdclient.yaml is an attempt to DRY (don't repeat yourself) this work up.

It makes a few assumptions:

  1. That etcd has been created by kubeadm as a local etcd.
  2. That certs live at the well-defined locations laid down by kubeadm on the underlying file system.
  3. That etcd is listening on localhost and a node IP, or at the very least (for our purposes) on localhost.

The static pod looks like:
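The feed strips the manifest here; the authoritative version lives at git.io/etcdclient.yaml. As a rough sketch of what such a static pod might contain (the image tag, cert file names, and sleep command are my assumptions, following kubeadm's defaults):

```yaml
# Sketch only — see git.io/etcdclient.yaml for the real manifest.
apiVersion: v1
kind: Pod
metadata:
  name: etcdclient
  namespace: kube-system
spec:
  hostNetwork: true          # so localhost:2379 reaches the local etcd
  containers:
  - name: etcdclient
    image: k8s.gcr.io/etcd:3.3.10   # ships etcdctl; tag is an assumption
    command: ["sleep", "infinity"]  # keep the pod around to exec into
    env:
    - name: ETCDCTL_API
      value: "3"
    - name: ETCDCTL_CACERT
      value: /etc/kubernetes/pki/etcd/ca.crt
    - name: ETCDCTL_CERT
      value: /etc/kubernetes/pki/etcd/healthcheck-client.crt
    - name: ETCDCTL_KEY
      value: /etc/kubernetes/pki/etcd/healthcheck-client.key
    - name: ETCDCTL_ENDPOINTS
      value: https://127.0.0.1:2379
    volumeMounts:
    - name: etcd-certs
      mountPath: /etc/kubernetes/pki/etcd
      readOnly: true
  volumes:
  - name: etcd-certs
    hostPath:
      path: /etc/kubernetes/pki/etcd
      type: Directory
```

Dropped into /etc/kubernetes/manifests/ on the etcd node, the kubelet will start it automatically alongside the etcd static pod.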

The interesting bits are the env vars that configure etcdclient on your behalf.

With etcd and etcdctl, the arguments that you can pass on the CLI are also exposed as environment variables.
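The mapping is mechanical: drop the leading dashes, uppercase, turn dashes into underscores, and prefix with ETCDCTL_. A small helper (my own illustration, not part of the post) makes the rule concrete:

```shell
# Turn an etcdctl CLI flag into its ETCDCTL_* environment variable name.
flag_to_env() {
  printf 'ETCDCTL_%s\n' "$(printf '%s' "${1#--}" | tr 'a-z-' 'A-Z_')"
}

flag_to_env --cacert        # → ETCDCTL_CACERT
flag_to_env --endpoints     # → ETCDCTL_ENDPOINTS
flag_to_env --dial-timeout  # → ETCDCTL_DIAL_TIMEOUT

# So these two invocations are equivalent (endpoint is an example value):
#   etcdctl --endpoints=https://127.0.0.1:2379 endpoint health
#   ETCDCTL_ENDPOINTS=https://127.0.0.1:2379 etcdctl endpoint health
```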

Now to see it in action!

]]>