Conversation
Can one of the admins verify that this patch is reasonable to test? If so, please reply "ok to test". This message may repeat a few times in short succession due to jenkinsci/ghprb-plugin#292. Sorry. Otherwise, if this message is too spammy, please complain to ixdy.
(force-pushed 1ab1f98 to 918de2b)
This is wrong. Volumes in the Released phase must be recycled or deleted before they can be bound to another claim, even if the new claim has the same namespace+name!
We got bitten by this in several demos: you create a PVC and start a pod with, say, MySQL, show that your app is running, and add some data to your database. Now, when you delete your pod and PVC and create them again (from the same yaml files), you must get a fresh MySQL with no data, not the old one.
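The rule described above can be sketched in a few lines of Go. This is a minimal illustration with simplified stand-in types (`Phase`, `Volume`, `canBind` are invented for this sketch and are not the actual Kubernetes controller types or functions):

```go
package main

import "fmt"

// Phase is a simplified stand-in for the PV phase in the Kubernetes API.
type Phase string

const (
	Available Phase = "Available"
	Bound     Phase = "Bound"
	Released  Phase = "Released"
)

// Volume is an illustrative stand-in for a PersistentVolume.
type Volume struct {
	Phase Phase
}

// canBind encodes the rule from the comment above: only an Available
// volume may be bound to a claim. A Released volume must first be
// recycled or deleted, even if the new claim has the same namespace+name
// as the old one, because the old data is still on the volume.
func canBind(v Volume) bool {
	return v.Phase == Available
}

func main() {
	fmt.Println(canBind(Volume{Phase: Available})) // true
	fmt.Println(canBind(Volume{Phase: Released}))  // false: recycle/delete first
}
```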
What if the PV's persistentVolumeReclaimPolicy = "Retain" and the PV has a pre-defined claimRef? There is no recycling or deleting in this case. Does it seem reasonable to add this code above:

```go
if volume.Spec.PersistentVolumeReclaimPolicy == api.PersistentVolumeReclaimRetain {
	_, err := binderClient.GetPersistentVolumeClaim(volume.Spec.ClaimRef.Namespace, volume.Spec.ClaimRef.Name)
	...
}
```
Volumes with the Retain policy must stay retained forever and never get bound again. See the example above: if I create a new PVC (and pod), I must not see any old data there.
That's a feature, not a bug. See the line comment above for an example.
@jsafrane I wrote ". fixes PV status not being changed from Released -> Bound for a pv with a pre-binding claimRef.", and you replied:
(force-pushed 918de2b to 02eedda)
Yes, volumes with Retain policy should stay Released forever and never get bound again.
@jsafrane you replied: "Yes, volumes with Retain policy should stay Released forever and never get bound again."

There is some old data on the volume. Create claim3 (volumeName="") and create PV3 (claimRef -> "claim3" and ReclaimPolicy=Retain). 20 minutes later, the PV remains Released, the claim remains Pending, and no pod can use the claim. If I change PV3's ReclaimPolicy from Retain to Recycle, then PV3 quickly binds. This seems like a bug related to Retain?
(force-pushed 02eedda to 23ce76a)
Volumes get to …
@jsafrane you wrote:
Yes, several days ago the volume was bound. But since then the volume was deleted, the claim was deleted, k8s was restarted, etc. Are you saying that if a Retained pv is ever bound in its life then it can never be claimed again? And, if so, is there a mechanism to reset/release this pv so that it can be a candidate for being claimed again?

Exactly: the admin wants to Retain the volume, so Kubernetes does exactly that. If the admin wants the PV to be available for binding again, he/she can use the Recycle policy or use dynamic provisioning. The only mechanism to get away from … There were some ideas to have a timer on …
@jsafrane you wrote:
It looks like rm -rf is issued as part of the default recycling code, and, unless I've missed something, this would be a disaster for shared (nfs, glusterfs) storage. So Recycle (as I understand it to be implemented) is not an option because the data must be preserved. Retain is also not an option because the storage admin wants the pv to be re-claimable at some point. [That point could be triggered by deleting the pv, or by a new command (eg …

Jan, I feel that the use case for 1) preserving data in a shared pv, and 2) being able to re-claim a pv that was previously claimed but has since been deleted or somehow released, is important in the world of statically provisioned, legacy, shared storage. Do you think this would be a worthwhile topic for the next k8s storage community SIG meeting? Also, I really appreciate your comments/feedback. I know how busy you are and your comments have been helpful!
For the old controller, you set …
Sure, I'd be interested if this is a useful use case. We can implement it, however I am not sure we should, or what the priority of this request is. I still did not get an answer how it should work in reality: what objects should a user in project A create/modify/delete to "pass" a PV to project B, and what objects should a user in project B create? And how would we prevent a user in project C from stealing this PV with potentially sensitive data?
@jsafrane you wrote:
Note: I don't see pv.StatusClaimRef (or pv.Status.ClaimRef) in the …

I did the above edit to a live PV with a pre-bound claimRef and reclaimPolicy=Retain. The matching claim can either be running (status=Pending), or the claim can be deleted and re-created after the pv edit. In both cases, the pv Phase immediately went back to Released but the claim's status is Bound. So, for me, this edit did not work to cause the PV to be re-claimable.
@jsafrane you asked:
Today, if I 1) create a PV which is not pre-bound and has a ReclaimPolicy of Retain, 2) create a matching claim (both become Bound), and 3) delete both, then I can re-create the pv and its status.Phase is Available. So, for the non-prebinding case the …

If I do the same except that the pv is pre-bound to the matching claim, the pv is stuck in Released and cannot be claimed again. The only difference is pre-binding vs controller binding.

You've been discussing the concern of making Retained pvs re-claimable (who has access to the data, etc), yet this is exactly what is supported today for controller-bound pvs. My question really is: why does pre-binding affect the ability to re-claim a deleted pv? But to answer your question quoted above: I'd say we need to do exactly what is done today for the controller-binding case. Is that reasonable?
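The asymmetry described above comes down to whether the claimRef carries the claim's UID. Here is a hedged sketch of that matching difference with simplified illustrative types (`ClaimRef`, `Claim`, `refMatches` are invented for this sketch, not the real Kubernetes structs or controller code):

```go
package main

import "fmt"

// ClaimRef is an illustrative stand-in for pv.Spec.ClaimRef.
// UID may be empty when an admin pre-binds the PV by hand.
type ClaimRef struct {
	Namespace, Name, UID string
}

// Claim is an illustrative stand-in for a PersistentVolumeClaim.
type Claim struct {
	Namespace, Name, UID string
}

// refMatches sketches the difference discussed above: a claimRef that
// carries a UID only matches the exact claim object it was bound to, so
// a deleted-and-recreated claim (which gets a new UID) no longer
// matches. A pre-bound claimRef written without a UID matches any claim
// with the same namespace+name.
func refMatches(ref ClaimRef, c Claim) bool {
	if ref.Namespace != c.Namespace || ref.Name != c.Name {
		return false
	}
	return ref.UID == "" || ref.UID == c.UID
}

func main() {
	// Same namespace+name as before, but a fresh UID after re-creation.
	recreated := Claim{"demo", "claim3", "uid-2"}
	fmt.Println(refMatches(ClaimRef{"demo", "claim3", "uid-1"}, recreated)) // false
	fmt.Println(refMatches(ClaimRef{"demo", "claim3", ""}, recreated))      // true
}
```

Under this sketch, requiring the UID in a pre-bound claimRef is exactly what makes the PV unmatchable once the original claim object is gone.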
(force-pushed 4e1909d to 1bb8533)
@jsafrane I think my arguments above are not good. I'm coming from the perspective that a PV's claimRef should not be required to include the target claim's UID -- and this pr tries to address that. If you do include the claim's UID in the pv spec then there is no difference between controller-bound vs. pre-bound PVs in terms of them being re-claimable after being deleted.
@jsafrane given the refactor, should we just close this?
@thockin one aspect of this pr is not requiring the pv to have the uid of the claim in its claimRef. I'm not sure if this idea is supported by the community, or if @jsafrane's pr implements this. From reviewing his code it looks like claim matching (findBestMatchForClaim) will bind even if the pv does not define the claim's uid, but there are uid checks in other parts of the code so I don't know for certain.
Jan's PR implements the "correct" binding. No UID needed if the user specified the binding. I'll close this for now. Once his PR is in, if this doesn't work, please reopen! |
Addresses issues #21498 and #23615.

- fixes PV status not being changed from Released -> Bound for a pv with a pre-binding claimRef.
- improves pvc->pv binding time for cases where the claim is created before the matching pv, and the pv has a pre-binding claimRef.
- adds claim.ClaimRef (to get uid) to pv in syncClaim()'s claimPending case.
- supports volumeName matching in findByClaim(). Note: a pv with a claimRef has precedence over a claim with volumeName.
Note: I realize that there is a major controller consolidation effort and that the pv-pvc binding code is being rewritten. Still, I'd appreciate comments/suggestions based on this pr (to learn from), and perhaps a chance that this pr can be merged if it looks reasonable.
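The precedence rule noted above (a pv with a claimRef wins over a claim with volumeName) can be sketched as a two-pass match. This is an illustrative sketch with invented simplified types (`PV`, `PVC`, `findBestMatch` are stand-ins, not the actual findByClaim implementation):

```go
package main

import "fmt"

// PV is an illustrative stand-in for a PersistentVolume.
type PV struct {
	Name         string
	ClaimRefName string // "" if the PV is not pre-bound to a claim
}

// PVC is an illustrative stand-in for a PersistentVolumeClaim.
type PVC struct {
	Name       string
	VolumeName string // "" if the claim does not request a specific PV
}

// findBestMatch sketches the precedence rule: a PV pre-bound to this
// claim (via claimRef) is chosen first; only then is the claim's own
// volumeName request honored. Returns "" if nothing matches.
func findBestMatch(claim PVC, pvs []PV) string {
	// First pass: a PV whose claimRef names this claim has precedence.
	for _, pv := range pvs {
		if pv.ClaimRefName == claim.Name {
			return pv.Name
		}
	}
	// Second pass: honor the claim's volumeName request, if any.
	for _, pv := range pvs {
		if claim.VolumeName != "" && pv.Name == claim.VolumeName {
			return pv.Name
		}
	}
	return ""
}

func main() {
	pvs := []PV{{Name: "pv-a"}, {Name: "pv-b", ClaimRefName: "claim1"}}
	// claim1 asks for pv-a via volumeName, but pv-b is pre-bound to
	// claim1, so pv-b wins.
	fmt.Println(findBestMatch(PVC{Name: "claim1", VolumeName: "pv-a"}, pvs)) // pv-b
}
```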