Support container restore through CRI/Kubernetes#10365

Merged
estesp merged 1 commit into containerd:main from
adrianreber:2024-06-19-restore-create-start
Mar 11, 2025

Conversation

@adrianreber
Contributor

@adrianreber adrianreber commented Jun 19, 2024

This implements container restore as described in:

https://kubernetes.io/blog/2022/12/05/forensic-container-checkpointing-alpha/#restore-checkpointed-container-standalone

For detailed step-by-step instructions, also see contrib/checkpoint/checkpoint-restore-cri-test.sh

The code changes are based on changes I made in Podman around 2018 and in CRI-O around 2020.

The history behind restoring containers via CRI/Kubernetes probably requires some explanation. The initial proposal to bring checkpoint/restore to Kubernetes looked at checkpointing and restoring whole pods, with corresponding CRI changes.

kubernetes-sigs/cri-tools#662 kubernetes/kubernetes#97194

After this topic had been discussed for about two years, another approach was implemented, as described in KEP-2008:

kubernetes/enhancements#2008

"Forensic Container Checkpointing" allowed us to separate checkpointing from restoring. For "Forensic Container Checkpointing" it is enough to create a checkpoint of the container; restoring is not necessary, as the analysis of the checkpoint archive can happen without restoring the container.

While thinking about a way to restore a container, we started, somewhat by coincidence, to look into restoring containers in Kubernetes via Create and Start. The way it is done in CRI-O is to figure out during Create whether the container image is a checkpoint image and, if so, to use another code path. The same has now been implemented in containerd with this change.

With this change it is possible to restore a container from a checkpoint tar archive that was created during checkpointing via the CRI.

To restore a container via Kubernetes, we convert the tar archive to an OCI image as described in the kubernetes.io blog post above. Using this OCI image it is possible to restore a container in Kubernetes.
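Once the checkpoint image has been pushed to a registry, the restore is triggered by an ordinary pod creation that uses it as the container image. A minimal manifest sketch (pod name, container name, and image reference are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restored-pod                # hypothetical name
spec:
  containers:
  - name: restored-container        # hypothetical name
    image: registry.example/checkpoint-image:latest   # the converted checkpoint archive
```

During Create, the runtime detects that the image is a checkpoint image and takes the restore code path instead of a normal container start.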

At this point I think it should be doable to restore containers in CRI-O and containerd no matter whether they were created by containerd or CRI-O. The biggest difference is the container metadata, and that can be adapted during restore.

Open items:

  • It is not clear to me why restoring a container in containerd goes through task/Create(). But as the restore code already exists, this change extends the existing code path that restores a container in task/Create() so that a container can also be restored through the CRI via Create and Start.
  • Automatic image pulling. containerd does not pull images automatically if a container is created via the CRI. There is an option in crictl to pull images before starting, but that uses the CRI image pull interface; it is still a separate pull and create operation. Restoring containers from an OCI image is a bit different: the checkpoint OCI image does not include the base image, only a reference to it (NAME@DIGEST). Using crictl with pulling enabled will pull the checkpoint image, but not the base image the checkpoint is based on. So during preparation of the checkpoint containerd will automatically pull the base image, but I was not able to figure out how to pull an image in a blocking way in containerd. So there is a for loop waiting for the container image to appear in the internal store. I think this can probably be implemented better.
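The wait-for-image loop described in the last item can be sketched as a generic poll-with-timeout; here a file appearing on disk stands in for the image showing up in containerd's internal store (the helper name and the timings are illustrative, not containerd code):

```shell
#!/bin/sh
# Poll a command until it succeeds or a timeout expires, mirroring the
# loop that waits for the base image to appear in the store.
wait_for() {
    deadline=$(( $(date +%s) + $1 ))
    shift
    until "$@"; do
        [ "$(date +%s)" -ge "$deadline" ] && return 1
        sleep 1
    done
}

marker=$(mktemp -u)                 # path that does not exist yet
( sleep 1; touch "$marker" ) &      # the "image" appears after one second
if wait_for 5 test -f "$marker"; then
    result="image present"
else
    result="timed out"
fi
echo "$result"
rm -f "$marker"
```

A real implementation would subscribe to store events instead of sleeping, which is roughly what "this can probably be implemented better" refers to.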

Anyway, this is a first step towards container restore in Kubernetes when using containerd.

@k8s-ci-robot

Hi @adrianreber. Thanks for your PR.

I'm waiting for a containerd member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@xhejtman

xhejtman commented Jul 5, 2024

Hello, I tested in K8s with the following:

Setup Env

K8s: 1.30.2 (RKE2)
Distro: Ubuntu 24.04
Kernel: 6.8.0-36
CRIU: v3.18-266-g6f92787b7
containerd: 2.0.0-rc3 + this PR
kubelet featuregate: ContainerCheckpoint=true

Results

Checkpoint

curl -q -s -X POST "https://localhost:10250/checkpoint/$CONTAINER" \
        --insecure \
        --cert /var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt \
        --key /var/lib/rancher/rke2/server/tls/client-kube-apiserver.key

creates a checkpoint; however, criu (/etc/criu/runc.conf) must NOT set a custom log path (e.g. log-file=/tmp/criu.log), or the checkpoint will fail (rpc error: code = Unknown desc = open /var/lib/rancher/rke2/agent/containerd/io.containerd.grpc.v1.cri/containers/161976044213c3871a13f8989c11933e240f0735da61bd5cf83694a94b1a1af6/dump.log: no such file or directory)

buildah needs to set a different annotation than the one used with CRI-O:

  • for crio: --annotation=io.kubernetes.cri-o.annotations.checkpoint.name=
  • for containerd: --annotation=org.criu.checkpoint.container.name=
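Following the kubernetes.io blog post, the tar-to-OCI-image conversion can be sketched with buildah using the containerd annotation above; the checkpoint path, the annotation value, and the image tag below are hypothetical examples:

```shell
#!/bin/sh
# Sketch: wrap a kubelet checkpoint tar in an OCI image so containerd can
# restore from it. All names below are examples, not fixed values.
CHECKPOINT=/var/lib/kubelet/checkpoints/checkpoint-counter.tar   # example path

if command -v buildah >/dev/null 2>&1; then
    ctr=$(buildah from scratch)              # start from an empty image
    buildah add "$ctr" "$CHECKPOINT" /       # checkpoint tar as the only layer
    # the annotation containerd checks for on the Create code path;
    # the value is assumed here to be the original container name
    buildah config --annotation=org.criu.checkpoint.container.name=counter "$ctr"
    buildah commit "$ctr" checkpoint-image:latest
    result="built checkpoint-image:latest"
else
    result="skipped: buildah not installed"
fi
echo "$result"
```

For CRI-O the same flow applies with the io.kubernetes.cri-o.annotations.checkpoint.name annotation instead.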

Restore

  • cgroup v1 support only; with cgroup v2, the restored process is killed immediately
  • rootfs-diff.tar is not restored - the restore actually fails in criu
  • the container does not log after restore (i.e., log output is empty)
  • restoring a simple shell command works (/bin/bash -c 'i=0; while true; do sleep 1; echo $i; i=$[i+1]; done')

@adrianreber adrianreber force-pushed the 2024-06-19-restore-create-start branch from f602c12 to 25da2f2 on July 10, 2024 16:00
@adrianreber
Contributor Author

@xhejtman Thanks for testing. I rebased the PR to apply cleanly on the latest git checkout of containerd.

The CRIU log file error you mentioned should be gone now. I do not think you need to explicitly set the feature gate for container checkpointing, since it has defaulted to on since going Beta in 1.30.

@xhejtman

The CRIU log file error you mentioned should be gone now. I do not think you need to explicitly set the feature gate for container checkpointing since it defaults to on since going Beta in 1.30.

Yes, tested again, the CRIU log issue is gone.

I just noticed one more thing: if the checkpointed container did not specify a command/entrypoint in the Dockerfile, the Pod manifest for the restore needs to specify at least a dummy command, or an error will be raised: no command specified in restore process.
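A hypothetical fragment of the restore manifest for that case (image tag and command value are placeholders):

```yaml
containers:
- name: restored-container
  image: registry.example/checkpoint-image:latest
  command: ["/bin/sh"]   # dummy entry to satisfy the "no command specified" check
```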

@adrianreber adrianreber force-pushed the 2024-06-19-restore-create-start branch from 25da2f2 to 29ec7b2 on July 15, 2024 15:50
@adrianreber
Contributor Author

@xhejtman I added my Kubernetes test script in contrib/checkpoint, and container logging should now also work.

@adrianreber adrianreber force-pushed the 2024-06-19-restore-create-start branch 5 times, most recently from 6fef265 to 1d3793e on July 16, 2024 16:43
@adrianreber
Contributor Author

CI finally looks happy. One more feature is missing before this is ready: the rootfs changes are not yet applied to the restored container, so a container that changed files will probably fail to restore. This should be an easy change, as it is not much more than applying the existing tar file to the container rootfs.

@adrianreber adrianreber force-pushed the 2024-06-19-restore-create-start branch from 1d3793e to 23e2f4b on July 18, 2024 07:40
@adrianreber adrianreber reopened this Jul 18, 2024
@adrianreber adrianreber marked this pull request as ready for review July 18, 2024 09:21
@adrianreber adrianreber reopened this Jul 18, 2024
@dosubot dosubot bot added the area/cri Container Runtime Interface (CRI) label Jul 18, 2024
@adrianreber adrianreber force-pushed the 2024-06-19-restore-create-start branch from 0e2e386 to 527f09f on January 8, 2025 15:52
@adrianreber
Contributor Author

@estesp I renamed the variable you pointed out. Was there more in your review than the renaming? I am not sure I understood it completely correctly.

@estesp
Member

estesp commented Jan 16, 2025

@estesp I renamed the variable you pointed out. Was there more in your review than the renaming? I am not sure I understood it completely correctly.

That was all; thanks!

@jakubsolecki

Hello @adrianreber, what's the status of this PR? Feedback seems to be addressed

@adrianreber
Contributor Author

Hello @adrianreber, what's the status of this PR? Feedback seems to be addressed

I don't know what the status is. I am happy to apply any code review suggestions.

@adrianreber
Contributor Author

Rebased

@djdongjin
Member

make vendor should fix CI / Project Checks (pull_request) failure

@adrianreber
Contributor Author

make vendor should fix CI / Project Checks (pull_request) failure

Thank you.

Member

@AkihiroSuda AkihiroSuda left a comment


LGTM but a few nits.

Sorry for taking too long to review this

@adrianreber
Contributor Author

LGTM but a few nits.

Sorry for taking too long to review this

Thanks. I tried to address all your review comments.

echo "$ctr_id"
rm -f "$RESTORE_JSON" "$RESTORE_POD_JSON"
echo -n "--> Start container from checkpoint: "
crictl start "$ctr_id"


I have a question. Without testing whether the files and process status are the same as before, it may be impossible to determine whether the restore was successful. It is recommended to start a process in the original container that increments a counter by 1 per second, modify some files within it, and then compare the restored container with the original to see if they are identical.

Copy link
Contributor


I'm not sure what the question is - we check the logs of the container after restore:

	lines_after=$(crictl logs "$ctr_id" | wc -l)
	if [ "$lines_before" -ge "$lines_after" ]; then
		echo "number of lines after checkpointing ($lines_after) " \
			"should be larger than before checkpointing ($lines_before)"
		false
	fi
