I recently upgraded the storage in my main NAS (as opposed to my all-NVMe mini NAS), which I use for longer-term “cold” storage. It cost me twice as much as it would have if I’d bought the drives back in November when I started planning this, but that is a totally separate post/rant (2-disk mirrored storage, going from 2TiB to 12TiB).
There are 2 main reasons for the upgrade: first, the drives were over 10 years old, and second, I want to use it as a backup target for a bunch of things, mainly:
- Snapshots from the NVME NAS
- As an off-site backup target for my Dad’s NAS full of his photo collection
My Dad’s NAS is another Synology device (which we will also be upgrading from 2TiB to 8TiB, again mainly due to the age of the existing drives), and it comes with a tool called HyperBackup which supports a bunch of different backup targets. The 2 most useful remote options are:
- Another Synology NAS
- An S3 Bucket
I don’t really want to directly expose my or my Dad’s Synology device to the Internet, so an S3 bucket is the way forward.
S3 Buckets
S3 is a standard that started out as an offering from Amazon as part of AWS. It’s basically a better version of WebDAV that allows Objects (files) to be stored and accessed via HTTP.
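As a rough, made-up example, a public object is just a URL you can fetch, and most tooling wraps the same HTTP calls with request signing (bucket name and paths here are placeholders):

# fetch a (public) object straight over HTTP
curl -O https://my-bucket.s3.example.com/photos/2024/holiday.jpg

# or let the AWS CLI sign and send the equivalent GET/PUT requests
aws s3 cp holiday.jpg s3://my-bucket/photos/2024/holiday.jpg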
I’ve previously run Minio as an S3 bucket server in my local Kubernetes cluster, as a place to push photos/videos from my phone and as a local restic backup target.
But they recently made some big changes that moved all the front end UI tools into their Enterprise licensed, paid-for product. They have also announced that they have stopped development of the Open Source version.
As I work for a company whose core product is Open Source but which also sells an Enterprise licensed version, I sort of understand the need to differentiate between the free version and the paid version. Doing that by taking away features that were previously in the Open Source version feels a little like a rug pull to me, but anyway…
As well as a web GUI for configuring buckets, Minio also supported things like AWS-style policies that control what different keys can access at a fine-grained level, Object Versioning and OIDC SSO login.
I’m still running an older version, only exposed to my LAN, that still has all the old features, but I thought I’d try an alternative for this remote backup target.
Garage
Garage is a little more basic than Minio: it doesn’t offer Object Versioning, and access keys have read/write/owner privileges rather than finer-grained access controls, but it can do clustering and block-level de-duplication so data is only stored once on a given cluster node (you can configure how many copies to keep across a multi-node cluster).
If you want to expose a bucket as a website, this is done on a separate subdomain rather than sharing the same hostname as the S3 API endpoint, as is the case with AWS S3 or Minio.
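From what I can tell, website access is switched on per bucket with the Garage CLI, roughly like this (the bucket name is made up); the bucket is then served under the website root domain rather than the API hostname:

# serve the contents of the "my-site" bucket as a static website
garage bucket website --allow my-site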
It can be installed with a helm chart, so it was mainly a case of passing values to set up the storage backend and configuring the Ingress hostnames.
As mentioned previously, I have an iSCSI-backed StorageClass installed in my cluster that allows me to provision PVCs directly against my Synology NAS, so I’m using that here.
One small quirk: even though the project has a multi-arch manifest on Docker Hub, the helm chart defaults to pointing at the AMD64-only container image, so it failed to install properly the first time on my cluster because I have 2 ARM64 (Pi 4) nodes and the pod was initially scheduled on one of these. I had to override the default container image.
The full set of values passed to the helm chart:
garage:
  replicationFactor: 1
  s3:
    api:
      region: us-east-1
      rootDomain: ".garage.example.com"
    web:
      rootDomain: ".garage-web.k8s.loc"
      index: "index.html"

image:
  repository: dxflrs/garage
  tag: v2.2.0

deployment:
  replicaCount: 1

persistence:
  meta:
    storageClass: "nfs-client"
    size: 1Gi
  data:
    storageClass: "synology-iscsi"
    size: 3Ti

ingress:
  s3:
    api:
      enabled: true
      className: public
      annotations:
        cert-manager.io/cluster-issuer: smallstep
      hosts:
        - host: garage.k8s.loc
          paths:
            - path: /
              pathType: Prefix
        - host: garage.example
          paths:
            - path: /
              pathType: Prefix
        - host: "*.garage.k8s.loc"
          paths:
            - path: /
              pathType: Prefix
        - host: "*.garage.example.com"
          paths:
            - path: /
              pathType: Prefix
      tls:
        - secretName: garage-s3-cert
          hosts:
            - garage.k8s.loc
            - "*.garage.k8s.loc"
    web:
      enabled: true
      className: public
      annotations:
        cert-manager.io/cluster-issuer: smallstep
      hosts:
        - host: "*.garage-web.k8s.loc"
          paths:
            - path: /
              pathType: Prefix
      tls:
        - secretName: garage-web-cert
          hosts:
            - "*.garage-web.k8s.loc"

monitoring:
  metrics:
    enabled: true
The Ingress configuration sets it up to listen on both my internal subdomain and my public one, and to issue HTTPS certificates from my internal CA for the internal names. The external hostnames will be proxied via my externally facing Nginx instance and get public TLS certificates from Let's Encrypt.
The metrics service is enabled because it exposes the Admin API which I’ll need later.
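With that values file saved, the install itself is roughly the usual helm incantation. I believe the chart lives in the Garage git repo rather than a chart repository, so treat the path here as a sketch that may differ between releases:

git clone https://git.deuxfleurs.fr/Deuxfleurs/garage.git
helm install garage garage/script/helm/garage -n garage --create-namespace -f values.yaml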
I also had to create the cluster “layout” manually, using kubectl to run the required commands in the pod:
kubectl exec --stdin --tty -n garage garage-0 -- ./garage status
kubectl exec -it -n garage garage-0 -- ./garage layout assign -z us-east-1 -c 3T <node_id>
kubectl exec -it -n garage garage-0 -- ./garage layout apply --version 1
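For reference, creating a bucket and an access key from the same CLI looks roughly like this (bucket and key names are made up, and the exact sub-commands have shifted a little between Garage releases):

# create a bucket, a key, and grant the key read/write on the bucket
kubectl exec -it -n garage garage-0 -- ./garage bucket create dad-photos
kubectl exec -it -n garage garage-0 -- ./garage key create hyperbackup
kubectl exec -it -n garage garage-0 -- ./garage bucket allow --read --write dad-photos --key hyperbackup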
The website domain is only exposed internally for now, and I can always expose it later using some nginx regex magic, e.g.
upstream kube {
    server kube-one.local:443;
    server kube-two.local:443;
    server kube-three.local:443;
    server kube-four.local:443;
    keepalive 20;
}

server {
    listen 80;
    listen [::]:80;
    server_name ~^(?<subdomain>[^.]+)\.example\.com$;

    location / {
        proxy_pass https://kube;
        proxy_set_header Host $subdomain.garage-web.k8s.loc;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
    }
}
Web GUI
There is a 3rd party web GUI for Garage, which I’ve managed to get running in the same Kubernetes namespace. This allows me to create new buckets and keys without needing to resort to the command line.

There isn’t a helm chart for the GUI, but I managed to put together a manifest to get it deployed and configured to talk to the Garage install.
I had to manually create an admin key and use htpasswd to generate an admin password hash.
kubectl exec -it -n garage garage-0 -- ./garage admin-token create web-gui-token
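The password hash is just a bcrypt htpasswd entry, so (assuming the Apache htpasswd tool is installed) something like this generates it; the username and password here are obviously placeholders:

# -B selects bcrypt, -C 10 matches the $2y$10$ cost factor, -n prints the result instead of writing a file
htpasswd -nbBC 10 admin 'a-better-password-than-this'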
I copied the admin key and password hash into the manifest:
apiVersion: v1
kind: Secret
metadata:
  name: garage-webui
type: Opaque
stringData:
  API_ADMIN_KEY: "xxxxxxxxx"
  AUTH_USER_PASS: "admin:$2y$10$xxxxxxxx"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: garage-webui
data:
  API_BASE_URL: http://garage-metrics:3903
  S3_REGION: us-east-1
  S3_ENDPOINT_URL: http://garage:3900
---
apiVersion: v1
kind: Service
metadata:
  name: garage-webui
spec:
  selector:
    app: garage-webui
  ports:
    - port: 3909
      targetPort: 3909
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: garage-webui
  labels:
    app.kubernetes.io/name: garage-webui
  annotations:
    cert-manager.io/cluster-issuer: smallstep
spec:
  rules:
    - host: garage-webui.k8s.loc
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: garage-webui
                port:
                  number: 3909
  tls:
    - hosts:
        - garage-webui.k8s.loc
      secretName: garage-webui-cert
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: garage-webui
spec:
  selector:
    matchLabels:
      app: garage-webui
  template:
    metadata:
      labels:
        app: garage-webui
    spec:
      containers:
        - name: garage-webui
          image: khairul169/garage-webui:latest
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
            requests:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 3909
          envFrom:
            - secretRef:
                name: garage-webui
            - configMapRef:
                name: garage-webui
The admin API of the Garage install is exposed on the metrics service I enabled earlier.
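A quick way to check it’s reachable from inside the cluster is to hit the Admin API from a throwaway pod; if I’ve read the Garage docs right, the health endpoint doesn’t need the admin token:

# one-off curl pod against the metrics/admin service
kubectl run -n garage curl-check --rm -it --restart=Never --image=curlimages/curl --command -- \
  curl -s http://garage-metrics:3903/health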
Next
Now it’s all set up and running, I need to arrange time to head back up to my folks’ for a visit. I’ll take an external hard drive with me and have HyperBackup create an initial snapshot on that, which I can then copy into the S3 bucket when I get home. This is because Dad’s photo collection is already about 1TiB, which would take ages to push even over FTTP (now I finally have it installed).
Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway
(AWS offers a version of this called Snowball, or Snowmobile depending on scale, otherwise known as sneakernet)
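The copy itself should just be an aws s3 sync pointed at the Garage endpoint, something along these lines (the local path and bucket name are made up):

aws s3 sync /mnt/usb-drive/hyperbackup/ s3://dad-photos/hyperbackup/ \
  --endpoint-url https://garage.example.com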
Once the backup snapshot is copied into the bucket, I’ll be able to change the settings in HyperBackup to point at the S3 endpoint and it will then do incremental backups based on that snapshot.
Acknowledgements
A bunch of this was inspired by @jwildeboer and his blog posts about using Garage, especially the web UI.