Kubebuilder-based Kubernetes operator scaffold for backing up and restoring
PersistentVolumeClaims in OpenStack-based clusters (e.g., Cinder CSI).
This project provides three custom resources to model a PVC backup lifecycle:
- `Pvc`: inventory PVCs in a target namespace.
- `PvSnapshot`: request snapshots for a specific PVC.
- `RestorePvc`: restore a PVC from a previously created snapshot.
The controllers are scaffolded and ready for business logic to be added; use this repository as a starting point to implement the actual snapshot and restore flows that fit your OpenStack/Kubernetes environment.
- `Pvc` (`spec.namespaceShoot`): target namespace whose PVCs you want to inventory. Controller logic should list PVCs in that namespace and write their names to `status.pvcNames`. This CR is mostly a helper to discover PVCs that can be snapshotted.
- `PvSnapshot` (`spec.pvcName`, `spec.namespace`): request snapshots for the given PVC. The reconciler should trigger your CSI snapshot logic (e.g., create a `VolumeSnapshot` or use Cinder APIs) and reflect the resulting snapshot names in `status.snapshotNames`.
- `RestorePvc` (`spec.snapshotName`, `spec.namespace`): restore a new PVC from an existing snapshot. The reconciler should create the PVC (and optionally a pod/job to hydrate it), then surface the resulting PVC name in `status.restoredPvcName`.
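The three resources could be exercised with manifests like these (the API group/version `backup.example.com/v1beta1` and all names are placeholders; check `api/v1beta1` and `config/samples` for the real group and defaults):

```yaml
# Hypothetical sample CRs. Field names follow the spec/status described above,
# but the API group is an assumption -- see config/samples for the real one.
apiVersion: backup.example.com/v1beta1
kind: Pvc
metadata:
  name: pvc-inventory
spec:
  namespaceShoot: my-app            # namespace whose PVCs should be inventoried
---
apiVersion: backup.example.com/v1beta1
kind: PvSnapshot
metadata:
  name: snapshot-my-data
spec:
  pvcName: my-data                  # a PVC name discovered via the Pvc CR
  namespace: my-app
---
apiVersion: backup.example.com/v1beta1
kind: RestorePvc
metadata:
  name: restore-my-data
spec:
  snapshotName: snap-my-data-001    # a name reported in PvSnapshot status.snapshotNames
  namespace: my-app
```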
Suggested end-to-end:
- Create a `Pvc` to inventory PVCs in a namespace.
- For a chosen PVC name, create a `PvSnapshot` to capture a snapshot.
- When you need to recover, create a `RestorePvc` pointing at the snapshot to produce a new PVC.
- Add RBAC to allow listing/creating PVCs, VolumeSnapshots (if used), and watching namespaces of interest.
- In `PvcReconciler`:
  - Fetch the CR, list PVCs in `spec.namespaceShoot`, update `status.pvcNames`.
  - Requeue periodically to keep the inventory fresh.
- In `PvSnapshotReconciler`:
  - Validate `spec` and fetch the target PVC.
  - Create snapshot resources (e.g., a CSI `VolumeSnapshot`) or call OpenStack Cinder APIs; record identifiers in `status.snapshotNames`.
  - Handle idempotency: if snapshots already exist, do not recreate them.
- In `RestorePvcReconciler`:
  - Validate `spec.snapshotName`.
  - Create a PVC (and storage class params) from the snapshot source.
  - Optionally wait for `Bound` before updating `status.restoredPvcName`.
  - Consider adding finalizers to clean up intermediate resources if needed.
- Use controller-runtime requeue logic to implement periodic behaviors (e.g., inventory refresh or snapshot polling). Return `ctrl.Result{RequeueAfter: time.Minute}` when you need a heartbeat loop without external events.
- Avoid tight loops; make the interval configurable via an env var/flag.
- For on-demand runs, reconcile reacts to CR changes; for periodic backups you can add a schedule-like field and have the reconciler compute when to act.
`PvcReconciler`: adds a finalizer, sets status conditions, and calls `ReconcilePvc` to:
- Fetch the shoot kubeconfig via a secret and build shoot clients.
- List all namespaces/PVCs in the shoot cluster, collect details into `status.PVCList`, and periodically requeue (20m).
- Clear the `PvcReconcileAnnotation` flag after a run.
- On delete: remove the finalizer after ensuring the namespace is deleting.
`PvSnapshotReconciler`: similar lifecycle/conditions/finalizer plus `ReconcilePvSnapshot` to:
- Build shoot and dynamic clients, list VolumeSnapshots across namespaces, and store them in `status.Items`.
- Clear `PvSnapshotReconcileAnnotation` and requeue every 20m.
- Handle finalizer removal on delete.
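The CSI `VolumeSnapshot` objects this controller works with look roughly like the following (the class name is an assumption; Cinder CSI installs its own `VolumeSnapshotClass`, so check your cluster):

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: snap-my-data-001
  namespace: my-app
spec:
  volumeSnapshotClassName: csi-cinder-snapclass  # assumed class name; verify in your cluster
  source:
    persistentVolumeClaimName: my-data           # the PVC named in PvSnapshot spec.pvcName
```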
`RestorePvcReconciler`: finalizer/conditions plus `ReconcileRestorePvc` to:
- Build a shoot client and check for an existing PVC in the destination namespace; if it was already restored from the same snapshot, return success; on a name conflict, mark the CR failed.
- Otherwise create a PVC from `spec.snapshotName` into `spec.desNamespace`, update status with capacity/access modes, and remove reconcile annotations.
- On delete: remove the finalizer and decrement in-use counters.
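Restoring via CSI means creating a PVC whose `dataSource` points at the `VolumeSnapshot`; a sketch, where the storage class and size are assumptions:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data-restored
  namespace: my-app              # the RestorePvc spec.desNamespace
spec:
  storageClassName: csi-cinder   # assumed; must be a snapshot-capable class
  dataSource:
    name: snap-my-data-001       # the RestorePvc spec.snapshotName
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi              # must be >= the snapshot's restore size
```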
`SchedulerSnapshotReconciler`: orchestrates cron-like snapshot/config backups and retention:
- Adds finalizer/conditions, then calls `ReconcileScheduleSnapshot`, which builds shoot clients, inventories PVCs, and computes the next requeue based on cron specs and time zones.
- For config snapshots, validates cron/locations, creates `KubeSnapshot` CRs when due, and enforces retention by deleting old config snapshots.
- For PVC snapshots, validates the schedule, ensures the PVC exists, triggers `Snapshot` CRs via `newCreateSnapshot`, and enforces retention (skipping in-use snapshots).
- Returns the shortest next `RequeueAfter` among schedules/retention checks and clears reconcile annotations after a run.
These controllers expect helper functions (shoot kubeconfig fetch, client builders, snapshot/PVC listing) already present in the codebase.
- `api/v1beta1`: CRD types (`Pvc`, `PvSnapshot`, `RestorePvc`).
- `internal/controller`: Reconcilers (scaffolded).
- `config/`: Kustomize overlays, RBAC, CRDs, samples.
- `cmd/main.go`: Manager entrypoint wiring controllers and scheme.
- `test/e2e`: End-to-end test scaffold.
- `Makefile`: Common build/test/lint/deploy targets.
- `make fmt vet`: Format and static checks.
- `make generate`: Regenerate deepcopy code.
- `make manifests`: Regenerate CRDs/RBAC with controller-gen.
- `make test`: Run unit tests with envtest (excludes e2e).
- `make test-e2e`: Run e2e tests in `test/e2e` (Kind or your cluster).
- `make lint` / `make lint-fix`: Run golangci-lint (optionally auto-fix).
- `make run`: Run the controller locally against your kubeconfig.
- `make build`: Build the manager binary.
- `make docker-build docker-push IMG=...`: Build/push the controller image.
- `make build-installer IMG=...`: Produce `dist/install.yaml` (CRDs + deploy).
- go version v1.21.0+.
- docker version 17.03+.
- kubectl version v1.11.3+.
- Access to a Kubernetes v1.11.3+ cluster.
Build and push your image to the location specified by `IMG`:

```sh
make docker-build docker-push IMG=<some-registry>/backup-restore-openstack-mfke:tag
```

NOTE: The image must be published to the registry you specified, and your working environment must be able to pull it. Make sure you have the proper permissions to the registry if the above commands don't work.
Install the CRDs into the cluster:
```sh
make install
```

Deploy the Manager to the cluster with the image specified by `IMG`:

```sh
make deploy IMG=<some-registry>/backup-restore-openstack-mfke:tag
```

NOTE: If you encounter RBAC errors, you may need to grant yourself cluster-admin privileges or be logged in as admin.
To create instances of your solution, you can apply the samples (examples) from `config/samples`:

```sh
kubectl apply -k config/samples/
```

NOTE: Ensure that the samples have default values to test them out.
Delete the instances (CRs) from the cluster:
```sh
kubectl delete -k config/samples/
```

Delete the APIs (CRDs) from the cluster:

```sh
make uninstall
```

Undeploy the controller from the cluster:

```sh
make undeploy
```

The following are the steps to build the installer and distribute this project to users.
- Build the installer for the image built and published in the registry:
```sh
make build-installer IMG=<some-registry>/backup-restore-openstack-mfke:tag
```

NOTE: The makefile target mentioned above generates an `install.yaml` file in the `dist` directory. This file contains all the resources built with Kustomize, which are necessary to install this project without its dependencies.
- Using the installer
Users can just run `kubectl apply -f` to install the project, i.e.:

```sh
kubectl apply -f https://raw.githubusercontent.com/<org>/backup-restore-openstack-mfke/<tag or branch>/dist/install.yaml
```

Contributions are welcome, especially around implementing the reconciliation logic and improving the sample CRDs.
- Fork and branch from `main`.
- Run `make test` (or at least `make unit-test` / `make lint` if you add them) before opening a PR.
- Keep PRs small and focused; include sample manifests for new fields.
- Update docs (this README and `config/samples`) when behavior changes.
NOTE: Run `make help` for more information on all potential make targets.
More information can be found via the Kubebuilder Documentation
Copyright 2024.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.