Geoff's Braindump: A spot on the internet where I post random articles, notes, and things I have figured out.
https://itzg.github.io/
Mon, 09 Feb 2026 22:33:09 +0000
Jekyll v3.10.0
Setting Up Kubernetes On A Budget With Ceph Volumes<p>I have been really excited to use <a href="https://kubernetes.io/">Kubernetes</a> (aka k8s) in a semi-production way on my home server; however, there is only one of said server and not the 3+ that seem to be typical of a minimal kubernetes deployment. At the same time, I know there’s <a href="https://github.com/kubernetes/minikube/">minikube</a> and similar, but those give a strong sense of development-time, experimental usage. I want a small scale, but <em>real</em> kubernetes experience.</p>
<p>For me, an additional requirement to make it a real deployment is that it should support stateful services. One of the things I like about kubernetes is that <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/">persistent volumes</a> are a first-class construct supported in various real-world ways. Since I’m working with a tight budget and a single server, I narrowed my search to open source, software-defined storage solutions. <a href="https://ceph.com/">Ceph</a> worked out well for me since it was easy to set up, scales down without compromise, and is full of features.</p>
<p>In this article I’ll show how to use a little libvirt+KVM+QEMU wrapper script to create three VMs, deploy kubernetes using <code>kubeadm</code>, and overlay a ceph cluster using <code>ceph-deploy</code>. The setup might appear tedious, but the benefits and ease of kubernetes usage afterwards are well worth it.</p>
<h2 id="my-environment">My environment</h2>
<p>If you’re following along, it might help to know what I’m using to see how that aligns with what you’re using. My one and only home server is an <a href="https://www.intel.com/content/www/us/en/products/boards-kits/nuc.html">Intel NUC</a> with</p>
<ul>
<li>i5 dual-core, hyperthreaded processor</li>
<li>16 GB RAM</li>
<li>240 GB SSD with about 100 GB dedicated to VM images</li>
<li>Ubuntu Xenial 16.04.4</li>
<li>Kernel 4.4.0-124</li>
</ul>
<p>It has plenty of RAM for three VMs, but I’m knowingly overcommitting the vCPU count of 6 (2 x 3 VMs) since there are only 4 logical processors on my system. I’m not going to be running stressful workloads, so hopefully that works out.</p>
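<p>A quick sanity check before deciding how far to overcommit is to compare the host’s logical processor count against the total vCPUs you plan to hand out (the 3 x 2 figure below is just my planned layout from above):</p>

```shell
# Planned vCPUs: 3 VMs with 2 vCPUs each (my layout, adjust for yours)
planned=$(( 3 * 2 ))
# Logical processors actually available on the host
logical=$(nproc)
echo "planned vCPUs: $planned, logical processors: $logical"
```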
<h2 id="setup-virtual-machines">Setup virtual machines</h2>
<h3 id="baremetal-or-vms">Baremetal or VMs?</h3>
<p>My original desire was to avoid virtual machines (VMs) entirely and run both kubernetes and ceph directly “on metal”. I found that was technically possible with options such as “untainting” the k8s master node; however, striving for zero downtime really requires distinct nodes in order to achieve seamless cluster scheduling.</p>
<p>As such, one of the wheels I re-invented was a small tool to help me create VMs with minimal requirements. All it requires is installing the packages:</p>
<ul>
<li>libvirt-bin</li>
<li>qemu-kvm</li>
</ul>
<h3 id="networking-setup-for-virtualization">Networking setup for virtualization</h3>
<p>Since I’ll eventually be port forwarding certain traffic from my ISP’s cable modem, I need the VMs to be directly routable on my LAN. For that I needed to configure the Linux kernel bridging by <a href="https://wiki.libvirt.org/page/Networking#Altering_the_interface_config">following this</a>.</p>
<p>In my case, I set up my <code>/etc/network/interfaces</code> with the following since I am using my LAN’s DHCP server to assign a fixed IP address. The <code>dhcp</code> option eliminates the need to specify the address, gateway, and DNS.</p>
<pre><code>auto br0
iface br0 inet dhcp
bridge_ports eno1
bridge_maxwait 0
bridge_fd 0
</code></pre>
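<p>After restarting networking (or rebooting), it’s worth confirming the bridge came up and took over the LAN address from the physical interface; a couple of standard commands for that, assuming the bridge is named <code>br0</code> as above:</p>

```shell
# Show the bridge device and its state
ip link show br0

# Confirm br0 (not eno1) now holds the DHCP-assigned LAN IP
ip addr show br0
```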
<p>To make sure the host’s iptables filtering and such doesn’t interfere with the guest VMs doing their own, we ease packet forwarding by creating <code>/etc/sysctl.d/60-bridge-virt.conf</code> with</p>
<pre><code>net.bridge.bridge-nf-call-arptables = 0
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
</code></pre>
<p>To apply those settings immediately, run:</p>
<pre class=" language-bash"><code class="prism language-bash"><span class="token function">sudo</span> /lib/systemd/systemd-sysctl --prefix<span class="token operator">=</span>/net/bridge
</code></pre>
<p>To ensure the settings are also applied on reboot after networking is started, create the file <code>/etc/udev/rules.d/99-bridge.rules</code> with the content:</p>
<pre><code>ACTION=="add", SUBSYSTEM=="net", KERNEL!="lo", RUN+="/lib/systemd/systemd-sysctl --prefix=/net/bridge"
</code></pre>
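<p>To confirm the settings actually took effect, you can read the values back; each should report 0:</p>

```shell
sysctl net.bridge.bridge-nf-call-arptables \
       net.bridge.bridge-nf-call-ip6tables \
       net.bridge.bridge-nf-call-iptables
```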
<h3 id="create-vms-using-libvirtkvmqemu">Create VMs using libvirt+KVM+QEMU</h3>
<p>Download my helper script <a href="https://github.com/itzg/libvirt-tools">from my libvirt-tools repo</a> and set it to be executable:</p>
<pre><code>wget https://raw.githubusercontent.com/itzg/libvirt-tools/master/create-vm.sh
chmod +x create-vm.sh
</code></pre>
<p>I am going to create three VMs. My network has static IP address space starting at 192.168.0.150, so I’m configuring my VMs starting from that address.</p>
<p>The volume that contains <code>/var/lib/libvirt/images</code> on my system only has 112G, so I am going to allocate an extra disk device of 20G for each of the three VMs. In total they will use 3 x (8G root + 20G extra) = 84G. The volumes are thin provisioned, but it’s still good to plan for maximum usage since I have limited host volume space (SSD == good for speed, but bad for capacity).</p>
<p>View full usage and default values of the helper script by running</p>
<pre class=" language-bash"><code class="prism language-bash">./create-vms.sh --help
</code></pre>
<p>Make sure you have an SSH key set up since the helper script will tell cloud-init to install your current user’s ssh key for access as the user <code>ubuntu</code>. To confirm, list your keys using:</p>
<pre class=" language-bash"><code class="prism language-bash">ssh-keygen -l
</code></pre>
<p>Create the first VM:</p>
<pre class=" language-bash"><code class="prism language-bash"><span class="token function">sudo</span> ./create-vm.sh --ip-address 192.168.0.150 --extra-volume 20G nuc-vm1
</code></pre>
<p>You should now be able to ssh to the VM as the user <code>ubuntu</code></p>
<pre class=" language-bash"><code class="prism language-bash"><span class="token function">ssh</span> [email protected]
</code></pre>
<p>however, if that doesn’t seem to be working you can attach to the VM’s console using:</p>
<pre class=" language-bash"><code class="prism language-bash">virsh console nuc-vm1
</code></pre>
<p><em>Use Control-] to detach from the console.</em></p>
<p>Repeat the same invocation for the other two VMs changing the IP address and name:</p>
<pre class=" language-bash"><code class="prism language-bash"><span class="token function">sudo</span> ./create-vm.sh --ip-address 192.168.0.151 --extra-volume 20G nuc-vm2
<span class="token function">sudo</span> ./create-vm.sh --ip-address 192.168.0.152 --extra-volume 20G nuc-vm3
</code></pre>
<h3 id="simplify-ssh-access">Simplify ssh access</h3>
<p>Adding <code>/etc/hosts</code> entries for the VMs and their IPs will help ease the remainder of the setup tasks. In my case I added these:</p>
<pre><code>192.168.0.150 nuc-vm1
192.168.0.151 nuc-vm2
192.168.0.152 nuc-vm3
</code></pre>
<p>To avoid having to specify the default <code>ubuntu</code> user for ssh’ing to each VM, add this, replacing the host names with yours, to <code>~/.ssh/config</code>:</p>
<pre><code>Host nuc-vm1 nuc-vm2 nuc-vm3
User ubuntu
</code></pre>
<p>Confirm you can ssh into each, which also gives you a chance to confirm and accept the host fingerprint for later steps:</p>
<pre class=" language-bash"><code class="prism language-bash"><span class="token function">ssh</span> nuc-vm1
</code></pre>
<p>While you’re in there you could also upgrade packages to get the VMs up to date before installing more stuff:</p>
<pre><code>sudo apt update
sudo apt upgrade -y
# ...and sudo reboot if the kernel was included in the upgraded packages
</code></pre>
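<p>Since the same upgrade needs to run on all three VMs, a small loop saves some typing. This is just a convenience wrapper around the commands above, using the host names from my <code>/etc/hosts</code> entries:</p>

```shell
# Update and upgrade each VM in turn over ssh
for h in nuc-vm1 nuc-vm2 nuc-vm3; do
  ssh "$h" 'sudo apt update && sudo apt upgrade -y'
done
```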
<h2 id="create-kubernetes-cluster">Create kubernetes cluster</h2>
<h3 id="pre-flight-checks">Pre-flight checks</h3>
<p>ssh to the first VM and <a href="https://kubernetes.io/docs/tasks/tools/install-kubeadm/">install kubeadm and its prerequisites</a>.</p>
<p>The VMs were configured (via defaults) to generate unique MAC addresses for their bridged network device, but here’s an example of confirming:</p>
<pre><code>$ ssh nuc-vm1 ip link | grep "link/ether"
link/ether 52:54:00:91:76:e5 brd ff:ff:ff:ff:ff:ff
$ ssh nuc-vm2 ip link | grep "link/ether"
link/ether 52:54:00:93:4c:83 brd ff:ff:ff:ff:ff:ff
$ ssh nuc-vm3 ip link | grep "link/ether"
link/ether 52:54:00:b7:88:ad brd ff:ff:ff:ff:ff:ff
</code></pre>
<p>Likewise, the VMs were created with the default behavior of generating a UUID per domain/node. Here is confirmation of that:</p>
<pre><code>$ ssh nuc-vm1 sudo cat /sys/class/dmi/id/product_uuid
781526BF-31E7-4339-8424-6B886A432968
$ ssh nuc-vm2 sudo cat /sys/class/dmi/id/product_uuid
AF93665F-6137-4125-9480-08B99B040BE8
$ ssh nuc-vm3 sudo cat /sys/class/dmi/id/product_uuid
C76F05E7-0289-4479-91CA-3D47A48096F4
</code></pre>
<h3 id="install-docker-and-kubernetes-packages">Install Docker and Kubernetes packages</h3>
<p>Install the recommended version of Docker by ssh’ing into the first node and starting an interactive sudo session with</p>
<pre class=" language-bash"><code class="prism language-bash"><span class="token function">sudo</span> -i
</code></pre>
<p>Use the installation snippet provided in the <a href="https://kubernetes.io/docs/tasks/tools/install-kubeadm/#installing-docker">Installing Docker</a> section.</p>
<p>Still in the interactive sudo session, <a href="https://kubernetes.io/docs/tasks/tools/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl">install kubeadm, kubelet, and kubectl</a>.</p>
<p>Repeat the same for the other nodes <a href="https://kubernetes.io/docs/tasks/tools/install-kubeadm/#installing-docker">installing Docker</a> and steps after that.</p>
<h3 id="create-the-cluster">Create the cluster</h3>
<p><strong>Before</strong> running <code>kubeadm init</code> skip to the <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network">pod network section</a> to see what parameters should be passed. I’m going to use kube-router, so I’ll pass <code>--pod-network-cidr=10.244.0.0/16</code>. You can find more information about using <a href="https://github.com/cloudnativelabs/kube-router/blob/master/docs/kubeadm.md">kube-router with kubeadm here</a>.</p>
<p>The full kubeadm command to run is:</p>
<pre class=" language-bash"><code class="prism language-bash"><span class="token function">sudo</span> kubeadm init --pod-network-cidr<span class="token operator">=</span>10.244.0.0/16
</code></pre>
<p>Use the commands it provides at the end to enable kubectl access from the regular <code>ubuntu</code> user on the VM:</p>
<pre class=" language-bash"><code class="prism language-bash"><span class="token function">mkdir</span> -p <span class="token variable">$HOME</span>/.kube
<span class="token function">sudo</span> <span class="token function">cp</span> -i /etc/kubernetes/admin.conf <span class="token variable">$HOME</span>/.kube/config
<span class="token function">sudo</span> <span class="token function">chown</span> <span class="token variable"><span class="token variable">$(</span><span class="token function">id</span> -u<span class="token variable">)</span></span><span class="token keyword">:</span><span class="token variable"><span class="token variable">$(</span><span class="token function">id</span> -g<span class="token variable">)</span></span> <span class="token variable">$HOME</span>/.kube/config
</code></pre>
<p>Be sure to note the <code>kubeadm join</code> command it provided since it will be used when joining the other two VMs to the cluster.</p>
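<p>If you lose that output, kubeadm can generate a fresh token and print the full join command for you; run this on the master node:</p>

```shell
# Prints a complete "kubeadm join ... --token ... --discovery-token-ca-cert-hash ..." line
sudo kubeadm token create --print-join-command
```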
<p>Now you can install the networking add-on, kube-router in this case:</p>
<pre class=" language-bash"><code class="prism language-bash">kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml
</code></pre>
<p>After about 30 seconds you should see the <code>kube-router</code> pods running using:</p>
<pre class=" language-bash"><code class="prism language-bash">kubectl get pods --all-namespaces -w
</code></pre>
<h3 id="join-the-other-two-vms-to-the-kubernetes-cluster">Join the other two VMs to the kubernetes cluster</h3>
<p>ssh to the next VM and become root with <code>sudo -i</code>. Then use the <code>kubeadm join</code> command output from the <code>init</code> call earlier, such as</p>
<pre class=" language-bash"><code class="prism language-bash">kubeadm <span class="token function">join</span> 192.168.0.150:6443 --token b2pp7j<span class="token punctuation">..</span><span class="token punctuation">..</span> --discovery-token-ca-cert-hash sha256:<span class="token punctuation">..</span>.
</code></pre>
<p>If you ssh back to the first node, you should see the new node become ready after about 30 seconds:</p>
<pre class=" language-bash"><code class="prism language-bash">$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
nuc-vm1 Ready master 22h v1.10.3
nuc-vm2 Ready <span class="token operator">&lt;</span>none<span class="token operator">&gt;</span> 23s v1.10.3
</code></pre>
<p>Repeat the <code>kubeadm join</code> on the third VM.</p>
<h3 id="create-a-distinct-user-config">Create a distinct user config</h3>
<p>On the master node’s VM I ran the following and saved its output to a file on my desktop:</p>
<pre class=" language-bash"><code class="prism language-bash"><span class="token function">sudo</span> kubeadm alpha phase kubeconfig user --client-name itzg
</code></pre>
<p>By default that user can’t do much of anything, but you can quickly fix that by creating a cluster role binding to the builtin <code>cluster-admin</code> role. In my case I ran:</p>
<pre class=" language-bash"><code class="prism language-bash">kubectl create clusterrolebinding itzg-cluster-admin --clusterrole<span class="token operator">=</span>cluster-admin --user<span class="token operator">=</span>itzg
</code></pre>
<p>I used a naming convention of <code>&lt;user&gt;-&lt;role&gt;</code>, but you can use whatever naming convention you want for the first argument; it just needs to distinctly name the binding of user(s) to role(s).</p>
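<p>To actually use the saved kubeconfig from your desktop, point kubectl at the file explicitly or merge it into your default config. For example, assuming you saved the <code>kubeadm alpha phase kubeconfig</code> output as <code>itzg.conf</code> (the file name here is my own choice):</p>

```shell
# Use the file directly for a one-off command
kubectl --kubeconfig ./itzg.conf get nodes

# Or merge it into ~/.kube/config so it becomes the default
KUBECONFIG=~/.kube/config:./itzg.conf kubectl config view --flatten > /tmp/merged \
  && mv /tmp/merged ~/.kube/config
```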
<h2 id="create-ceph-cluster">Create ceph cluster</h2>
<p>Now we’ll switch gears and ignore the fact that the three VMs are participating in a kubernetes cluster. We’ll overlay a ceph cluster onto them. My goal is to create a storage pool dedicated to kubernetes that can be used to provision rbd volumes.</p>
<p>In the same spirit as <code>kubeadm</code>, ceph provides an extremely useful tool called <code>ceph-deploy</code> that takes care of setting up our VMs as a ceph cluster. It also comes with <a href="http://docs.ceph.com/docs/master/start/">great instructions that start here</a>.</p>
<p>At the time of this writing, the <a href="http://docs.ceph.com/docs/master/releases/">latest stable release</a> is <code>luminous</code>, so that’s what I’ll be using in the next few steps.</p>
<p>Most of the steps I’ll run with <code>ceph-deploy</code> will be invoked from the main/baremetal host, but really it can be done from any node that can ssh to the three VMs.</p>
<h3 id="setup-ceph-deploy">Setup ceph-deploy</h3>
<p>First add the release key</p>
<pre class=" language-bash"><code class="prism language-bash"><span class="token function">wget</span> -q -O- <span class="token string">'https://download.ceph.com/keys/release.asc'</span> <span class="token operator">|</span> <span class="token function">sudo</span> apt-key add -
</code></pre>
<p>Then, add the repository:</p>
<pre class=" language-bash"><code class="prism language-bash"><span class="token keyword">echo</span> deb https://download.ceph.com/debian-luminous/ <span class="token variable"><span class="token variable">$(</span>lsb_release -sc<span class="token variable">)</span></span> main <span class="token operator">|</span> <span class="token function">sudo</span> <span class="token function">tee</span> /etc/apt/sources.list.d/ceph.list
</code></pre>
<p>Finally, update the repo index and install <code>ceph-deploy</code></p>
<pre class=" language-bash"><code class="prism language-bash"><span class="token function">sudo</span> apt update
<span class="token function">sudo</span> apt <span class="token function">install</span> ceph-deploy
</code></pre>
<p>The tool is going to place some important config files in the current directory, so it’s a good idea to create a new directory and work within there:</p>
<pre class=" language-bash"><code class="prism language-bash"><span class="token function">mkdir</span> ceph-cluster
<span class="token function">cd</span> ceph-cluster
</code></pre>
<p>Before proceeding, make sure you did the <code>~/.ssh/config</code> setup above to configure <code>ubuntu</code> as the default ssh user for accessing the three VMs. Since that user already has password-less sudo access, it’s ready to use for ceph deployment.</p>
<h3 id="initialize-cluster-config">Initialize cluster config</h3>
<p>Initialize the ceph cluster configuration to point at what will be the initial monitor node. Think of the ceph monitor kind of like the kubernetes master/api node.</p>
<pre class=" language-bash"><code class="prism language-bash">ceph-deploy new nuc-vm1
</code></pre>
<h3 id="install-packages-on-ceph-nodes">Install packages on ceph nodes</h3>
<p>Now, install the ceph packages on all three VMs. I had to pass the <code>--release</code> argument to avoid it defaulting to the prior “jewel” release.</p>
<pre class=" language-bash"><code class="prism language-bash">ceph-deploy <span class="token function">install</span> --release luminous nuc-vm1 nuc-vm2 nuc-vm3
</code></pre>
<h3 id="create-a-ceph-monitor">Create a ceph monitor</h3>
<p>After a few minutes of installing packages, the initial monitor can be setup:</p>
<pre class=" language-bash"><code class="prism language-bash">ceph-deploy mon create-initial
</code></pre>
<p>To enable automatic use of <code>ceph</code> on the three nodes, distribute the cluster config using</p>
<pre class=" language-bash"><code class="prism language-bash">ceph-deploy admin nuc-vm1 nuc-vm2 nuc-vm3
</code></pre>
<h3 id="create-a-ceph-manager">Create a ceph manager</h3>
<p>As of the luminous release, a manager node is required. I’m just going to run that also on the first VM:</p>
<pre class=" language-bash"><code class="prism language-bash">ceph-deploy mgr create nuc-vm1
</code></pre>
<h3 id="setup-each-node-as-a-ceph-osd">Setup each node as a ceph OSD</h3>
<p>Way back in the beginning of all this, you might remember that I specified an extra volume size of 20GB for each VM. That extra volume is <code>/dev/vdc</code> on each VM and will be used for the OSD on each:</p>
<pre class=" language-bash"><code class="prism language-bash">ceph-deploy osd create --data /dev/vdc nuc-vm1
ceph-deploy osd create --data /dev/vdc nuc-vm2
ceph-deploy osd create --data /dev/vdc nuc-vm3
</code></pre>
<p>You can confirm the ceph cluster is healthy and contains the three OSDs by running:</p>
<pre class=" language-bash"><code class="prism language-bash"><span class="token function">ssh</span> nuc-vm1 <span class="token function">sudo</span> ceph -s
</code></pre>
<p>To enable running ceph commands from the host, copy the admin keyring and config file into the local <code>/etc/ceph</code>:</p>
<pre class=" language-bash"><code class="prism language-bash"><span class="token function">sudo</span> <span class="token function">cp</span> ceph.client.admin.keyring ceph.conf /etc/ceph/
</code></pre>
<h2 id="joining-ceph-and-kubernetes">Joining ceph and kubernetes</h2>
<h3 id="create-a-storage-pool">Create a storage pool</h3>
<p>Create the pool with 128 placement groups, as <a href="http://docs.ceph.com/docs/master/rados/operations/placement-groups/">recommended here</a>:</p>
<pre class=" language-bash"><code class="prism language-bash"><span class="token function">sudo</span> ceph osd pool create kube 128
</code></pre>
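<p>The 128 comes from the ceph placement group guidance: target roughly 100 PGs per OSD, divide by the replica count, then round up to the next power of two. With 3 OSDs and the default replica count of 3, that lands on 128. A quick sketch of the arithmetic:</p>

```shell
osds=3
replicas=3
# Roughly 100 PGs per OSD, divided across replicas
target=$(( osds * 100 / replicas ))
# Round up to the next power of two
pg=1
while [ "$pg" -lt "$target" ]; do
  pg=$(( pg * 2 ))
done
echo "$pg"   # 128
```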
<p>In order to avoid the libceph error “missing required protocol features” when kubelet mounts the rbd volume, apply this adjustment:</p>
<pre class=" language-bash"><code class="prism language-bash"><span class="token function">sudo</span> ceph osd crush tunables legacy
</code></pre>
<h3 id="create-and-store-access-keys">Create and store access keys</h3>
<p>Export the ceph admin key and import it as a kubernetes secret:</p>
<pre class=" language-bash"><code class="prism language-bash"><span class="token function">sudo</span> ceph auth get client.admin<span class="token operator">|</span><span class="token function">grep</span> <span class="token string">"key = "</span> <span class="token operator">|</span><span class="token function">awk</span> <span class="token string">'{print <span class="token variable">$3</span>'</span><span class="token punctuation">}</span> <span class="token operator">|</span><span class="token function">xargs</span> <span class="token keyword">echo</span> -n <span class="token operator">></span> /tmp/secret
kubectl create secret generic ceph-admin-secret \
--type<span class="token operator">=</span><span class="token string">"kubernetes.io/rbd"</span> \
--from-file<span class="token operator">=</span>/tmp/secret
</code></pre>
<p>Create a new ceph client for accessing the <code>kube</code> pool specifically and import that as a kubernetes secret:</p>
<pre class=" language-bash"><code class="prism language-bash"><span class="token function">sudo</span> ceph auth get-or-create client.kube mon <span class="token string">'allow r'</span> osd <span class="token string">'allow class-read object_prefix rbd_children, allow rwx pool=kube'</span><span class="token operator">|</span><span class="token function">grep</span> <span class="token string">"key = "</span> <span class="token operator">|</span><span class="token function">awk</span> <span class="token string">'{print <span class="token variable">$3</span>'</span><span class="token punctuation">}</span> <span class="token operator">|</span><span class="token function">xargs</span> <span class="token keyword">echo</span> -n <span class="token operator">></span> /tmp/secret
kubectl create secret generic ceph-secret \
--type<span class="token operator">=</span><span class="token string">"kubernetes.io/rbd"</span> \
--from-file<span class="token operator">=</span>/tmp/secret
</code></pre>
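<p>As an aside, ceph has a <code>get-key</code> subcommand that prints just the key, which avoids the grep/awk pipeline entirely. The admin secret, for example, could instead be extracted with:</p>

```shell
# Prints only the key for the named client, with no trailing newline
sudo ceph auth get-key client.admin > /tmp/secret
```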
<p>I might be doing something wrong, but I found I had to also save the <code>client.kube</code> keyring on each of the kubelet nodes:</p>
<pre class=" language-bash"><code class="prism language-bash"><span class="token function">ssh</span> nuc-vm1 <span class="token function">sudo</span> ceph auth get client.kube -o /etc/ceph/ceph.client.kube.keyring
<span class="token function">ssh</span> nuc-vm2 <span class="token function">sudo</span> ceph auth get client.kube -o /etc/ceph/ceph.client.kube.keyring
<span class="token function">ssh</span> nuc-vm3 <span class="token function">sudo</span> ceph auth get client.kube -o /etc/ceph/ceph.client.kube.keyring
</code></pre>
<h3 id="setup-rbd-provisioner">Setup RBD provisioner</h3>
<p>The out-of-tree RBD provisioner is pre-configured to manage dynamic allocation of RBD persistent volume claims, so download and extract the necessary files:</p>
<pre class=" language-bash"><code class="prism language-bash"><span class="token function">wget</span> https://github.com/kubernetes-incubator/external-storage/archive/master.zip
unzip master.zip <span class="token string">"external-storage-master/ceph/rbd/*"</span>
</code></pre>
<p>Go into the <code>deploy</code> directory and apply the provisioner:</p>
<pre class=" language-bash"><code class="prism language-bash"><span class="token function">cd</span> external-storage-master/ceph/rbd/deploy
kubectl apply -f rbac
</code></pre>
<h3 id="define-a-storage-class">Define a storage class</h3>
<p>Create a storage class definition file, such as <code>storage-class.yaml</code> containing:</p>
<pre class=" language-yaml"><code class="prism language-yaml"><span class="token key atrule">kind</span><span class="token punctuation">:</span> StorageClass
<span class="token key atrule">apiVersion</span><span class="token punctuation">:</span> storage.k8s.io/v1
<span class="token key atrule">metadata</span><span class="token punctuation">:</span>
<span class="token key atrule">name</span><span class="token punctuation">:</span> rbd
<span class="token key atrule">provisioner</span><span class="token punctuation">:</span> ceph.com/rbd
<span class="token key atrule">parameters</span><span class="token punctuation">:</span>
<span class="token key atrule">monitors</span><span class="token punctuation">:</span> 192.168.0.150<span class="token punctuation">:</span><span class="token number">6789</span>
<span class="token key atrule">pool</span><span class="token punctuation">:</span> kube
<span class="token key atrule">adminId</span><span class="token punctuation">:</span> admin
<span class="token key atrule">adminSecretName</span><span class="token punctuation">:</span> ceph<span class="token punctuation">-</span>admin<span class="token punctuation">-</span>secret
<span class="token key atrule">userId</span><span class="token punctuation">:</span> kube
<span class="token key atrule">userSecretName</span><span class="token punctuation">:</span> ceph<span class="token punctuation">-</span>secret
<span class="token key atrule">imageFormat</span><span class="token punctuation">:</span> <span class="token string">"2"</span>
</code></pre>
<p>Apply the storage class:</p>
<pre class=" language-bash"><code class="prism language-bash">kubectl apply -f storage-class.yaml
</code></pre>
<h3 id="try-it-out">Try it out</h3>
<p>Test the storage class by applying a persistent volume claim:</p>
<pre class=" language-yaml"><code class="prism language-yaml"><span class="token key atrule">kind</span><span class="token punctuation">:</span> PersistentVolumeClaim
<span class="token key atrule">apiVersion</span><span class="token punctuation">:</span> v1
<span class="token key atrule">metadata</span><span class="token punctuation">:</span>
<span class="token key atrule">name</span><span class="token punctuation">:</span> claim1
<span class="token key atrule">spec</span><span class="token punctuation">:</span>
<span class="token key atrule">accessModes</span><span class="token punctuation">:</span>
<span class="token punctuation">-</span> ReadWriteOnce
<span class="token key atrule">storageClassName</span><span class="token punctuation">:</span> rbd
<span class="token key atrule">resources</span><span class="token punctuation">:</span>
<span class="token key atrule">requests</span><span class="token punctuation">:</span>
<span class="token key atrule">storage</span><span class="token punctuation">:</span> 1Gi
</code></pre>
<p>Verify that the claim was provisioned and is bound by using</p>
<pre class=" language-bash"><code class="prism language-bash">kubectl describe pvc claim1
</code></pre>
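<p>You can also confirm from the ceph side that an rbd image was actually created in the <code>kube</code> pool to back the claim:</p>

```shell
# List rbd images in the pool backing the storage class
sudo rbd ls kube

# Inspect a specific image; substitute a name from the listing above
# sudo rbd info kube/IMAGE_NAME
```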
<p>Let’s test out the persistent volume backed by ceph rbd by running a little busybox container that just sleeps so that we can <code>exec</code> into it:</p>
<pre class=" language-yaml"><code class="prism language-yaml"><span class="token key atrule">apiVersion</span><span class="token punctuation">:</span> v1
<span class="token key atrule">kind</span><span class="token punctuation">:</span> Pod
<span class="token key atrule">metadata</span><span class="token punctuation">:</span>
<span class="token key atrule">name</span><span class="token punctuation">:</span> ceph<span class="token punctuation">-</span>pod1
<span class="token key atrule">spec</span><span class="token punctuation">:</span>
<span class="token key atrule">containers</span><span class="token punctuation">:</span>
<span class="token punctuation">-</span> <span class="token key atrule">name</span><span class="token punctuation">:</span> ceph<span class="token punctuation">-</span>busybox
<span class="token key atrule">image</span><span class="token punctuation">:</span> busybox
<span class="token key atrule">command</span><span class="token punctuation">:</span> <span class="token punctuation">[</span><span class="token string">"sleep"</span><span class="token punctuation">,</span> <span class="token string">"60000"</span><span class="token punctuation">]</span>
<span class="token key atrule">volumeMounts</span><span class="token punctuation">:</span>
<span class="token punctuation">-</span> <span class="token key atrule">name</span><span class="token punctuation">:</span> data
<span class="token key atrule">mountPath</span><span class="token punctuation">:</span> /data
<span class="token key atrule">readOnly</span><span class="token punctuation">:</span> <span class="token boolean important">false</span>
<span class="token key atrule">volumes</span><span class="token punctuation">:</span>
<span class="token punctuation">-</span> <span class="token key atrule">name</span><span class="token punctuation">:</span> data
<span class="token key atrule">persistentVolumeClaim</span><span class="token punctuation">:</span>
<span class="token key atrule">claimName</span><span class="token punctuation">:</span> claim1
</code></pre>
<p>Now exec into the pod using</p>
<pre class=" language-bash"><code class="prism language-bash">kubectl <span class="token function">exec</span> -it ceph-pod1 sh
</code></pre>
<p>You can <code>cd /data</code> and touch/edit files in that directory, delete the pod, and re-create a new one to confirm the content from the persistent volume claim sticks around.</p>
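<p>Concretely, a persistence check could look like the following, assuming the pod manifest above was saved as <code>ceph-pod1.yaml</code> (that file name is my own choice, not from the steps above):</p>

```shell
# Write a file onto the ceph-backed volume
kubectl exec ceph-pod1 -- sh -c 'echo hello > /data/test.txt'

# Delete and recreate the pod
kubectl delete pod ceph-pod1
kubectl apply -f ceph-pod1.yaml

# Once the new pod is Running, the data should still be there
kubectl exec ceph-pod1 -- cat /data/test.txt
```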
<blockquote>
<p>Written with <a href="https://stackedit.io/">StackEdit</a>.</p>
</blockquote>
Thu, 24 May 2018 00:00:00 +0000
https://itzg.github.io/2018/05/24/setting-up-kubernetes-on-a-budget-with-ceph-volumes.html
https://itzg.github.io/2018/05/24/setting-up-kubernetes-on-a-budget-with-ceph-volumes.html
Elegant array to set conversion with Java 8 streams<p>Before Java 8, it was always a little awkward to take a given array of objects and populate a <code class="language-plaintext highlighter-rouge">Set</code> (or other collection) out of them. The various collections have constructors that accept other collections, so you could use <code class="language-plaintext highlighter-rouge">Arrays.asList(...)</code>. However, let’s use an elegant, Java 8 way instead:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>public Set<String> convert(String... members) {
return Stream.of(members)
.collect(Collectors.toSet());
}
</code></pre></div></div>
Sat, 04 Mar 2017 00:00:00 +0000
https://itzg.github.io/2017/03/04/arrays-to-sets.html
https://itzg.github.io/2017/03/04/arrays-to-sets.html
Dealing with inconsistent JSON with Jackson deserializer tweaking<h2 id="dealing-with-inconsistent-json-with-jackson-deserializer-tweaking">Dealing with inconsistent JSON with Jackson deserializer tweaking</h2>
<p>Let’s say we have a model class we want to read from JSON:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@Data
public class NotWeird {
String name;
Details details;
@Data
public static class Details {
int count;
}
}
</code></pre></div></div>
<p>but we need to deal with varying structure of <code class="language-plaintext highlighter-rouge">details</code> like this</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
"name": "weird1",
"details": 5
}
</code></pre></div></div>
<p>and this</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
    "name": "weird2",
    "details": {
        "count": 6
    }
}
</code></pre></div></div>
<p>In our fabricated scenario, maybe an older version of our JSON structure had <code class="language-plaintext highlighter-rouge">details</code> as a single field. Or perhaps we’re providing a human-friendly shortcut where setting only the <code class="language-plaintext highlighter-rouge">count</code> of a details doesn’t require writing out the whole object.</p>
<p>To recreate this scenario and create a solution, let’s create a Spring Boot batch-like application that will autowire a Jackson <code class="language-plaintext highlighter-rouge">ObjectMapper</code> for us.</p>
<p>We’ll use these dependencies in our <code class="language-plaintext highlighter-rouge">pom.xml</code>:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code><dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-configuration-processor</artifactId>
    <optional>true</optional>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-web</artifactId>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
</dependency>
<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <optional>true</optional>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
</dependency>
</code></pre></div></div>
<p>Even though we’re not creating a web app, we need the <code class="language-plaintext highlighter-rouge">org.springframework.http.converter.json.Jackson2ObjectMapperBuilder</code> provided by <code class="language-plaintext highlighter-rouge">spring-web</code>. We’ll disable the web application context by using <code class="language-plaintext highlighter-rouge">SpringApplicationBuilder</code>:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@SpringBootApplication
public class InconsistentJsonApplication {
    public static void main(String[] args) {
        new SpringApplicationBuilder(InconsistentJsonApplication.class)
                .web(false)
                .run(args);
    }
}
</code></pre></div></div>
<p>Before getting to an application runner, let’s establish a properties component:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@ConfigurationProperties("app") @Component @Data
public class AppProperties {
    boolean failOnEmptyFileList = true;
    boolean exitWhenFinished = true;
}
</code></pre></div></div>
<p>We’ll wire that and the <code class="language-plaintext highlighter-rouge">ObjectMapper</code> auto configured by Boot into an <code class="language-plaintext highlighter-rouge">ApplicationRunner</code>:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@Slf4j
@Service
public class Loader implements ApplicationRunner {
    private ObjectMapper objectMapper;
    private AppProperties properties;

    @Autowired
    public Loader(ObjectMapper objectMapper, AppProperties properties) {
        this.objectMapper = objectMapper;
        this.properties = properties;
    }
</code></pre></div></div>
<p>In our runner we’ll “load” the JSON content by reading them into POJOs, logging, and exiting:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@Override
public void run(ApplicationArguments args) throws Exception {
    if (properties.isFailOnEmptyFileList()) {
        Assert.notEmpty(args.getNonOptionArgs(),
                "Pass at least one JSON filename on the command line");
    }

    List<NotWeird> objects = args.getNonOptionArgs().stream()
            .map(this::load)
            .collect(Collectors.toList());

    log.info("Loaded: {}", objects);

    if (properties.isExitWhenFinished()) {
        System.exit(0);
    }
}

private NotWeird load(String filename) {
    try {
        final NotWeird obj = objectMapper.readValue(
                new File(filename), NotWeird.class);
        return obj;
    } catch (IOException e) {
        log.warn("Unable to process file {}", filename, e);
        return null;
    }
}
</code></pre></div></div>
<p>If you run the code at this point, the <code class="language-plaintext highlighter-rouge">readValue</code> will fail on the first JSON snippet as follows (with newlines added for clarity):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>com.fasterxml.jackson.databind.JsonMappingException:
Can not construct instance of me.itzg.json.model.NotWeird$Details:
no int/Int-argument constructor/factory method to deserialize from Number value (5)
at [Source: weird1.json; line: 3, column: 14]
(through reference chain: me.itzg.json.model.NotWeird["details"])
</code></pre></div></div>
<p>There are probably several ways to solve this, but let’s use a solution where we can intercept the deserialization of <code class="language-plaintext highlighter-rouge">details</code> and handle it in a polymorphic/adapting way.</p>
<p>By declaring a <code class="language-plaintext highlighter-rouge">com.fasterxml.jackson.databind.Module</code> component, Boot will take care of autowiring that into any <code class="language-plaintext highlighter-rouge">ObjectMapper</code>s:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@Component
public class CustomModule extends Module {
    @Override
    public String getModuleName() {
        return "custom";
    }

    @Override
    public Version version() {
        return new Version(1, 0, 0, null, null, null);
    }
</code></pre></div></div>
<p>The meat of our solution involves adding a <code class="language-plaintext highlighter-rouge">com.fasterxml.jackson.databind.deser.BeanDeserializerModifier</code>:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@Override
public void setupModule(SetupContext context) {
    context.addBeanDeserializerModifier(new BeanDeserializerModifier() {
        @Override
        public JsonDeserializer<?> modifyDeserializer(DeserializationConfig config,
                BeanDescription beanDesc,
                JsonDeserializer<?> deserializer) {
            return deserializer;
        }
    });
}
</code></pre></div></div>
<p>With just that code in place our application will run, but it will still complain about the first JSON snippet since we didn’t actually modify the deserializer yet. Let’s do that by using <code class="language-plaintext highlighter-rouge">replaceProperty</code> of <code class="language-plaintext highlighter-rouge">BeanDeserializer</code>.</p>
<p>That method takes two <code class="language-plaintext highlighter-rouge">SettableBeanProperty</code> objects, the original and the one to put in its place. Where do we get those? It turns out that we can a) ask the existing deserializer for one and b) derive a new instance from that one.</p>
<p>We’ll be careful to tweak only the class we want, especially since we know that a <code class="language-plaintext highlighter-rouge">BeanDeserializer</code> is indeed used in that case:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>public JsonDeserializer<?> modifyDeserializer(DeserializationConfig config,
        BeanDescription beanDesc,
        JsonDeserializer<?> deserializer) {
    if (NotWeird.class.isAssignableFrom(beanDesc.getBeanClass())) {
        // ...
    }
</code></pre></div></div>
<p>Now we can ask it for the property definition it is using by default for our “details” field:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>if (NotWeird.class.isAssignableFrom(beanDesc.getBeanClass())) {
    final BeanDeserializer beanDeserializer = (BeanDeserializer) deserializer;
    final SettableBeanProperty property = beanDeserializer.findProperty("details");
    // ...
}
</code></pre></div></div>
<p>With <code class="language-plaintext highlighter-rouge">property</code> we can derive a customized instance with our deserializer:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>final SettableBeanProperty property = beanDeserializer.findProperty("details");
final SettableBeanProperty ourProp =
        property.withValueDeserializer(new DetailsDeserializer());
beanDeserializer.replaceProperty(property, ourProp);
</code></pre></div></div>
<p>All that’s left is to implement that <code class="language-plaintext highlighter-rouge">DetailsDeserializer</code>:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>public class DetailsDeserializer extends JsonDeserializer<NotWeird.Details> {
    @Override
    public NotWeird.Details deserialize(JsonParser p, DeserializationContext ctxt)
            throws IOException, JsonProcessingException {
        // ...return one
    }
}
</code></pre></div></div>
<p>As a starting point, we’ll read it as an object (which will throw the same exception):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>public NotWeird.Details deserialize(JsonParser p, DeserializationContext ctxt)
        throws IOException, JsonProcessingException {
    final NotWeird.Details details =
            p.getCodec().readValue(p, NotWeird.Details.class);
    return details;
}
</code></pre></div></div>
<p>Let’s hope for the best and wrap the default strategy in a try-catch, use the raw parse tree, and try to process the entry as a single numeric value:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>NotWeird.Details details;
try {
    details = p.getCodec().readValue(p, NotWeird.Details.class);
} catch (JsonMappingException e) {
    final TreeNode treeNode = p.getCodec().readTree(p);
    final JsonToken token = treeNode.asToken();

    details = new NotWeird.Details();
    if (token.isNumeric()) {
        // ...
    } else {
        ctxt.handleUnexpectedToken(NotWeird.Details.class, token, p,
                "Unsupported content for details");
    }
}
return details;
</code></pre></div></div>
<p>The given <code class="language-plaintext highlighter-rouge">DeserializationContext</code> has several “handle*” methods, which are meant for exactly these cases.</p>
<p>Let’s optimistically parse it as an integer and set <code class="language-plaintext highlighter-rouge">count</code> of the details with that value:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>if (token.isNumeric()) {
    final String str = treeNode.toString();
    try {
        details.setCount(
                Integer.parseInt(str)
        );
    } catch (NumberFormatException e1) {
        ctxt.handleWeirdStringValue(NotWeird.Details.class, str,
                "Could not parse into an integer");
    }
}
</code></pre></div></div>
<p>Finally, with that surgically precise adjustment, the application processes both JSON snippets and produces the log (newlines added for clarity) we wanted:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Loaded: [
NotWeird(name=weird1, details=NotWeird.Details(count=5)),
NotWeird(name=weird2, details=NotWeird.Details(count=6))]
</code></pre></div></div>
Fri, 03 Mar 2017 00:00:00 +0000
https://itzg.github.io/2017/03/03/jackson-deserialize-inconsistent-json.html
https://itzg.github.io/2017/03/03/jackson-deserialize-inconsistent-json.htmlRun a Minecraft server container with attached host directory<p>Attaching host directories to a Docker container instance and getting the read/write permissions correct can actually be a little tricky. By default the container’s process accesses attached directories as whatever user the image specifies, which is usually <code class="language-plaintext highlighter-rouge">root</code>, so files it creates end up owned by root on the host.</p>
<p>My <a href="https://hub.docker.com/r/itzg/minecraft-server/">Minecraft server image</a> has a security feature where the java process runs as a non-root user (UID=1000 by default); however, that adds another level of complexity to attached host directory access if your current user isn’t set at user ID 1000.</p>
<p>To counteract that possible mismatch, environment variable options are provided to define the user ID and group ID of the process that will run java and access the <code class="language-plaintext highlighter-rouge">/data</code> attach point:</p>
<ul>
<li><code class="language-plaintext highlighter-rouge">-e UID=uid</code></li>
<li><code class="language-plaintext highlighter-rouge">-e GID=gid</code></li>
</ul>
<h2 id="hands-on-example">Hands-on Example</h2>
<p>First, setup a host directory owned and modifiable by your current user:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo mkdir /minecraft
sudo chown -R $(id -un) /minecraft
</code></pre></div></div>
<p>At this point the directory is empty, but we can startup a new container and let it populate that directory with defaults:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker run -d \
--name mc-attached-data \
-v /minecraft:/data \
-e UID=$(id -u) -e GID=$(id -g) \
-e EULA=TRUE \
-p 25565:25565 \
itzg/minecraft-server
</code></pre></div></div>
<p>From the host side you can browse the active Minecraft data files:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ls /minecraft
</code></pre></div></div>
<p>You can even stop and remove the container and the populated host directory remains intact. At this point you can adjust the <code class="language-plaintext highlighter-rouge">server.properties</code>, replace the world data with one you downloaded, etc.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker stop mc-attached-data
docker rm mc-attached-data
vi /minecraft/server.properties
</code></pre></div></div>
<p>At any time after that you can start up a fresh new container, attach the same directory to <code class="language-plaintext highlighter-rouge">/data</code>, and this server instance will use that previously configured data directory. In the following example I gave the container a distinct name, “mc-reuse”.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker run -d \
--name mc-reuse \
-v /minecraft:/data \
-e UID=$(id -u) -e GID=$(id -g) \
-e EULA=TRUE \
-p 25565:25565 \
itzg/minecraft-server
</code></pre></div></div>
Sun, 21 Aug 2016 00:00:00 +0000
https://itzg.github.io/2016/08/21/run-mc-server-attached-host-directory.html
https://itzg.github.io/2016/08/21/run-mc-server-attached-host-directory.htmlLessons learned switching to Java Alpine Docker base image<p>Somewhere around the <a href="https://hub.docker.com/r/library/java/tags/8u92-jre-alpine/">Java 8u92</a> versions, the <a href="https://hub.docker.com/r/_/java/">official Docker images for Java</a> started providing Alpine variants of the base images. I believe the benefits of that option are smaller footprint and smaller security attack surface.</p>
<p>Both are fantastic benefits, but there’s some gotchas if you’re used to a Debian (or similar) base image.</p>
<h3 id="doesnt-include-curl-or-wget">Doesn’t include curl or wget</h3>
<p>For the reasons above, the Alpine variant doesn’t come with <code class="language-plaintext highlighter-rouge">curl</code> or <code class="language-plaintext highlighter-rouge">wget</code>. Avoid the temptation to use <code class="language-plaintext highlighter-rouge">apk</code> to install those packages and instead take this as a reminder to better leverage Docker.</p>
<p>The <code class="language-plaintext highlighter-rouge">ADD</code> Dockerfile instruction already supports source files retrieved via URL. So rather than using <code class="language-plaintext highlighter-rouge">RUN</code> to issue a curl/wget, just add a file from a URL natively, such as</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ADD https://somehost/somefile.zip /tmp/somefile.tgz
</code></pre></div></div>
<h3 id="useradd-is-adduser">useradd is adduser</h3>
<p>If your <code class="language-plaintext highlighter-rouge">Dockerfile</code> or scripts are using the command <code class="language-plaintext highlighter-rouge">useradd</code> (from Debian land), then use <code class="language-plaintext highlighter-rouge">adduser</code> instead. <strong>BUT</strong>, check out the next couple of sections.</p>
<h3 id="default-system-user-has-no-shell">Default system user has no shell</h3>
<p>Creating a system user with</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>adduser -S sysuser
</code></pre></div></div>
<p>will result in a user that can’t be used with an <code class="language-plaintext highlighter-rouge">su -c</code> invocation since no shell is configured by default. Instead, explicitly set a shell, such as</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>adduser -S -s /bin/sh sysuser
</code></pre></div></div>
<h3 id="use-su-to-drop-privileges">Use su to drop privileges</h3>
<p>Since <code class="language-plaintext highlighter-rouge">sudo</code> isn’t installed by default and we want to minimize adding more packages (to lower security exposure), use <code class="language-plaintext highlighter-rouge">su</code> instead. To run a command as a system user (with an explicit shell configured as above), use something like</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>su -c "command" sysuser
</code></pre></div></div>
<h3 id="dont-use-an-extra-dash-in-su">Don’t use an extra dash in su</h3>
<p>Originally I was putting a dash before the username when invoking <code class="language-plaintext highlighter-rouge">su</code>, thinking it was a definitive way to tell it where the command ends, like</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>su -c "command" - sysuser
</code></pre></div></div>
<p>It turns out giving the dash disables propagation of the environment variables, which is bad since Docker containers use environment variables to pass inputs.</p>
Sat, 09 Jul 2016 00:00:00 +0000
https://itzg.github.io/2016/07/09/java-alpine-images.html
https://itzg.github.io/2016/07/09/java-alpine-images.htmlManually allocate orphaned ES shards to a node<h2 id="or-why-are-no-shards-being-allocated-to-a-node">…or: why are no shards being allocated to a node?</h2>
<p>Do a</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>GET _cat/shards
</code></pre></div></div>
<p>and you’ll see something like</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>logstash-2015.01.08-raw 4 p STARTED 10238 1.1mb 172.17.0.2 es-01
logstash-2015.01.08-raw 4 r UNASSIGNED
test 4 p UNASSIGNED
test 4 r UNASSIGNED
test 0 p UNASSIGNED
test 0 r UNASSIGNED
test 3 p UNASSIGNED
test 3 r UNASSIGNED
test 1 p UNASSIGNED
test 1 r UNASSIGNED
test 2 p UNASSIGNED
test 2 r UNASSIGNED
</code></pre></div></div>
<p>That’s annoying: we added a second node, but nothing is being allocated to it.</p>
<p>Doing</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>GET _cat/allocation
</code></pre></div></div>
<p>I’m getting</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>516 13.6gb 1gb 14.7gb 92 58fc33034c60 172.17.0.2 es-01
0 13.6gb 1gb 14.7gb 92 ff50dd0b4d0e 172.17.0.26 es-00
594 UNASSIGNED
</code></pre></div></div>
<p>What’s up with <code class="language-plaintext highlighter-rouge">es-00</code>? Let’s try manually allocating a shard to it:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>POST _cluster/reroute
{
    "commands": [
        {
            "allocate": {
                "index": "test",
                "shard": 0,
                "node": "es-00",
                "allow_primary": true
            }
        }
    ]
}
</code></pre></div></div>
<p>which responds with (newlines inserted for clarity)</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
"error": "RemoteTransportException[[es-01][inet[/172.17.0.2:9300]][cluster:admin/reroute]];
nested: ElasticsearchIllegalArgumentException[
[allocate] allocation of [test][0] on node [es-00][lE3e-Po_QQiXMk_n10jO9Q][ff50dd0b4d0e][inet[/192.168.0.100:9300]] is not allowed,
YES(shard is not allocated to same node or host)]
[YES(shard is primary)][YES(below primary recovery limit of [4])]
[YES(no allocation awareness enabled)][YES(total shard limit disabled: [-1] <= 0)]
[NO(less than required [15.0%] free disk on node, free: [7.08796015987245%])]
[YES(no snapshots are currently running)]
]; ",
"status": 400
}
</code></pre></div></div>
<p>Looking for the <code class="language-plaintext highlighter-rouge">NO</code> items, we find the cause:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>NO(less than required [15.0%] free disk on node, free: [7.08796015987245%])
</code></pre></div></div>
<blockquote>
<p>Written with <a href="https://stackedit.io/">StackEdit</a>.</p>
</blockquote>
Wed, 06 Jul 2016 20:28:48 +0000
https://itzg.github.io/elasticsearch/2016/07/06/alloc-shard-in-es.html
https://itzg.github.io/elasticsearch/2016/07/06/alloc-shard-in-es.htmlelasticsearchUsing CircleCI to perform Maven Releases<h2 id="your-project">Your Project</h2>
<p>The following operations are performed after you have at least initialized your project for git source control with <code class="language-plaintext highlighter-rouge">git init</code>.</p>
<h3 id="add-build-support-module">Add build-support module</h3>
<p>A helper script is needed to execute the Maven release (and deploy) sequence, so first add the <code class="language-plaintext highlighter-rouge">build-support</code> sub-module to your project:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git submodule add https://github.com/moorkop/build-support.git
</code></pre></div></div>
<h3 id="settingsxml">settings.xml</h3>
<p>Add a <code class="language-plaintext highlighter-rouge">settings.xml</code> to your project’s base directory to convey the server credentials needed to publish to Bintray. The following content can be used as is for that file:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code><?xml version='1.0' encoding='UTF-8'?>
<settings xsi:schemaLocation='http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd'
xmlns='http://maven.apache.org/SETTINGS/1.0.0' xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'>
    <servers>
        <server>
            <id>bintray</id>
            <username>${env.BINTRAY_USER}</username>
            <password>${env.BINTRAY_API_KEY}</password>
        </server>
    </servers>
</settings>
</code></pre></div></div>
<h3 id="pomxml">pom.xml</h3>
<p>Some specifics will need to be declared within your <code class="language-plaintext highlighter-rouge">pom.xml</code>.</p>
<h4 id="scm-section">SCM section</h4>
<p>Add the following section to the top level of your <code class="language-plaintext highlighter-rouge">pom.xml</code> but replace the following placeholders with your GitHub specifics:</p>
<ul>
<li><code class="language-plaintext highlighter-rouge">[[user]]</code></li>
<li><code class="language-plaintext highlighter-rouge">[[repo]]</code></li>
</ul>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code><scm>
    <connection>scm:git:https://github.com/[[user]]/[[repo]].git</connection>
    <url>https://github.com/[[user]]/[[repo]]</url>
    <tag>HEAD</tag>
</scm>
</code></pre></div></div>
<h4 id="plugins-section">Plugins section</h4>
<p>Declare the latest deploy plugin to avoid a bug in older versions that would leave out the <code class="language-plaintext highlighter-rouge">git commit</code> step. Add the following text as is in the <code class="language-plaintext highlighter-rouge">build > plugins</code> section:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code><plugin>
    <artifactId>maven-release-plugin</artifactId>
    <version>2.5.3</version>
</plugin>
</code></pre></div></div>
<h4 id="distribution-management-section">Distribution management section</h4>
<p>To complement the Bintray configuration in <code class="language-plaintext highlighter-rouge">settings.xml</code>, add the following text as is to the top level of your <code class="language-plaintext highlighter-rouge">pom.xml</code>:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code><distributionManagement>
    <repository>
        <id>bintray</id>
        <name>bintray</name>
        <url>https://api.bintray.com/maven/${env.BINTRAY_REPO_OWNER}/${env.BINTRAY_REPO}/${project.artifactId}/;publish=1
        </url>
    </repository>
</distributionManagement>
</code></pre></div></div>
<h3 id="circleyml">circle.yml</h3>
<p>We’ll hook the release process into the <code class="language-plaintext highlighter-rouge">deployment</code> stage of your CircleCI build configuration. Use the following snippet as an example for yours. You can specify your mainline branch instead of <code class="language-plaintext highlighter-rouge">master</code>, if needed.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>deployment:
  releases:
    branch: master
    commands:
      - build-support/handle-mvn-release.sh
</code></pre></div></div>
<p>Since a git sub-module is being used you’ll also need to add this:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>checkout:
  post:
    - git submodule sync
    - git submodule update --init
</code></pre></div></div>
<h2 id="bintray">Bintray</h2>
<p>Next, you will need to prepare a Bintray repository and project.</p>
<h3 id="api-key">API Key</h3>
<p>Instead of your password, you will use your API key to deploy to Bintray. Obtain your API key from <a href="https://bintray.com/profile/edit">your profile, in the edit section</a>.</p>
<h3 id="create-project-entry">Create project entry</h3>
<p>Create a project entry with the same name as your project’s <code class="language-plaintext highlighter-rouge">artifactId</code> and note the name of the containing repository since you’ll need that for the <code class="language-plaintext highlighter-rouge">BINTRAY_REPO</code> variable, below.</p>
<h2 id="circleci-project">CircleCI project</h2>
<p>You’re almost done now. The last piece is configuring the project environment variables and push access for GitHub.</p>
<h3 id="setup-github-user-key-access">Setup GitHub user key access</h3>
<p>To enable git push access from within your builds, go to your project’s settings and the section:</p>
<blockquote>
<p>Permissions > Checkout SSH keys</p>
</blockquote>
<p>Click the action to “Create and add user key”, as shown here:</p>
<p><img src="https://i.imgur.com/AK1BFHV.png" alt="enter image description here" /></p>
<h3 id="setup-project-environment-variables">Setup project environment variables</h3>
<p>Also in your project’s settings, add these environment variables:</p>
<ul>
<li>BINTRAY_USER</li>
<li>BINTRAY_API_KEY</li>
<li>BINTRAY_REPO_OWNER</li>
<li>BINTRAY_REPO</li>
</ul>
<h3 id="double-check-ubuntu-version">Double-check Ubuntu Version</h3>
<p>At the time of writing this, the <code class="language-plaintext highlighter-rouge">git push</code> works only with the choice of “Ubuntu 12.04” in the project’s Build Environment settings:
<img src="https://i.imgur.com/3dEJlUb.png" alt="enter image description here" /></p>
<h2 id="perform-a-release">Perform a release</h2>
<p>When you’re ready to perform a release, invoke the <a href="https://circleci.com/docs/parameterized-builds/">CircleCI parameterized build API</a> to trigger a build.</p>
<p>In the POST body, replace these placeholders:</p>
<ul>
<li><code class="language-plaintext highlighter-rouge">[[tag]]</code> : this will be the Git tag of the release and the default release name. GitHub recommends you prefix the tag with a “v” and use <a href="http://semver.org/">semantic versioning</a>.</li>
<li><code class="language-plaintext highlighter-rouge">[[version]]</code> : this will be the Maven project’s version, which is typically the same as the <code class="language-plaintext highlighter-rouge">tag</code> but without the “v” prefix.</li>
<li><code class="language-plaintext highlighter-rouge">[[next]]</code> : this is the next snapshot version you’ll want to use for continued development after the release. I recommend a two-part shortening of the semantic version with the minor version bumped; for example, after releasing version 1.2.0, use 1.3 here (the build parameters below append <code class="language-plaintext highlighter-rouge">-SNAPSHOT</code>).</li>
<li><code class="language-plaintext highlighter-rouge">[[email]]</code> : your email address as configured in your GitHub account</li>
</ul>
<p>In the POST URL, replace these parameters (or use the param editor in a tool like <a href="https://www.getpostman.com/">Postman</a>):</p>
<ul>
<li><code class="language-plaintext highlighter-rouge">:username</code> : GitHub username</li>
<li><code class="language-plaintext highlighter-rouge">:project</code> : GitHub project name</li>
<li><code class="language-plaintext highlighter-rouge">:branch</code> : the branch to build, which needs to match the <code class="language-plaintext highlighter-rouge">branch</code> configured in your <code class="language-plaintext highlighter-rouge">circle.yml</code>, above.</li>
<li><code class="language-plaintext highlighter-rouge">:token</code> : an API token allocated from the <a href="https://circleci.com/account/api">API Tokens section in your account settings</a></li>
</ul>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>curl -X POST -H "Content-Type: application/json" -d '{
    "build_parameters": {
        "MVN_RELEASE_TAG": "[[tag]]",
        "MVN_RELEASE_VER": "[[version]]",
        "MVN_RELEASE_DEV_VER": "[[next]]-SNAPSHOT",
        "MVN_RELEASE_USER_EMAIL": "[[email]]",
        "MVN_RELEASE_USER_NAME": "Via CircleCI"
    }
}' "https://circleci.com/api/v1/project/:username/:project/tree/:branch?circle-token=:token"
</code></pre></div></div>
<h2 id="enjoy">Enjoy</h2>
<p>Your API-triggered build includes “Build Parameters” that you can inspect later to see what exact release was performed:
<img src="https://i.imgur.com/UNlEuxm.png" alt="enter image description here" /></p>
Wed, 06 Jul 2016 00:00:00 +0000
https://itzg.github.io/2016/07/06/using-circleci-maven-releases.html
https://itzg.github.io/2016/07/06/using-circleci-maven-releases.html