
@yosshy
Created December 30, 2022 08:06
# How to reproduce running a pod on another node after a worker node down
## Version Information
- trident_version: v21.04.0
- OpenShift(K8s) version
<pre>
$ oc version -o yaml
clientVersion:
  buildDate: "2022-08-02T07:42:48Z"
  compiler: gc
  gitCommit: 70750898e45ff4a349995b08e1d64a359e4c4880
  gitTreeState: clean
  gitVersion: 4.11.0-202208020706.p0.g7075089.assembly.stream-7075089
  goVersion: go1.18.4
  major: ""
  minor: ""
  platform: linux/amd64
kustomizeVersion: v4.5.4
openshiftVersion: 4.11.12
releaseClientVersion: 4.11.0
serverVersion:
  buildDate: "2022-10-11T13:02:03Z"
  compiler: gc
  gitCommit: eddac29feb4bb46b99fb570999324e582d761a66
  gitTreeState: clean
  gitVersion: v1.24.6+5157800  <- based on Kubernetes v1.24.6
  goVersion: go1.18.4
  major: "1"
  minor: "24"
  platform: linux/amd64
</pre>
## How to reproduce running a pod on another node after a worker node down
1. Created a pod with an attached RWO PV provisioned by Trident iSCSI.
[Tue Dec 20 04:50:59 root@hawk2p7 user]# oc get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mysql-1-deploy 0/1 Completed 0 64s 10.131.0.10 worker0.hawk2p7-1.example.com <none> <none>
mysql-1-sxf2z 1/1 Running 0 61s 10.131.0.11 worker0.hawk2p7-1.example.com <none> <none>
[Tue Dec 20 04:51:01 root@hawk2p7 user]# oc get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mysql Bound pvc-e38bc1dd-c073-411b-8ff6-fd6325595648 1Gi RWO csi-ontap-san 73s
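For reference, a PVC equivalent to the one above can be written as a manifest along these lines (a minimal sketch; the name, namespace `testpv`, size, access mode, and storage class are taken from the outputs and events in this gist, everything else is standard boilerplate):
<pre>
# Minimal sketch of the PVC used in this reproduction.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql
  namespace: testpv
spec:
  accessModes:
    - ReadWriteOnce      # RWO: single-node attach; this is what later triggers the Multi-Attach error
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-ontap-san
</pre>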
2. Confirmed that the VolumeAttachment for the PVC above was associated with worker0.
[Tue Dec 20 04:51:09 root@hawk2p7 user]# oc get volumeattachment | grep pvc-e38bc1dd-c073-411b-8ff6-fd6325595648
csi-29ef7ca8f875644d6fb78af5135c3b0c72126d92b4fb846bba6c92fbb97209e6 csi.trident.netapp.io pvc-e38bc1dd-c073-411b-8ff6-fd6325595648 worker0.hawk2p7-1.example.com true 84s
3. Stopped worker0, the node the pod was running on.
[Tue Dec 20 04:51:38 root@hawk2p7 user]# ssh [email protected] sudo poweroff -fd
Powering off.
^CKilled by signal 2.
4. Ran oc commands periodically in a 'while' loop.
About 5 minutes later, the original pod became Terminating and a new pod was scheduled on worker4, but it stayed stuck in ContainerCreating (a Multi-Attach error, because the RWO volume was still attached to worker0).
[Tue Dec 20 04:54:20 root@hawk2p7 user]# set -x
[Tue Dec 20 04:54:21 root@hawk2p7 user]# while :; do date; oc get nodes; oc get pods -o wide; oc get volumeattachment | grep pvc-e38bc1dd-c073-411b-8ff6-fd6325595648; echo; sleep 30; done
+ :
+ date
Tue Dec 20 04:54:22 UTC 2022
+ oc get nodes
NAME STATUS ROLES AGE VERSION
master0.hawk2p7-1.example.com Ready master 2y25d v1.24.6+5157800
master1.hawk2p7-1.example.com Ready master 2y25d v1.24.6+5157800
master2.hawk2p7-1.example.com Ready master 2y25d v1.24.6+5157800
worker0.hawk2p7-1.example.com NotReady worker 2y25d v1.24.6+5157800
worker1.hawk2p7-1.example.com Ready worker 2y25d v1.24.6+5157800
worker2.hawk2p7-1.example.com Ready infra,worker 2y25d v1.24.6+5157800
worker3.hawk2p7-1.example.com Ready infra,worker 2y25d v1.24.6+5157800
worker4.hawk2p7-1.example.com Ready infra,worker 2y25d v1.24.6+5157800
+ oc get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mysql-1-deploy 0/1 Completed 0 4m25s 10.131.0.10 worker0.hawk2p7-1.example.com <none> <none>
mysql-1-sxf2z 1/1 Running 0 4m22s 10.131.0.11 worker0.hawk2p7-1.example.com <none> <none>
+ oc get volumeattachment
+ grep --color=auto pvc-e38bc1dd-c073-411b-8ff6-fd6325595648
csi-29ef7ca8f875644d6fb78af5135c3b0c72126d92b4fb846bba6c92fbb97209e6 csi.trident.netapp.io pvc-e38bc1dd-c073-411b-8ff6-fd6325595648 worker0.hawk2p7-1.example.com true 4m23s
+ echo
...
+ date
Tue Dec 20 04:58:25 UTC 2022
+ oc get nodes
NAME STATUS ROLES AGE VERSION
master0.hawk2p7-1.example.com Ready master 2y25d v1.24.6+5157800
master1.hawk2p7-1.example.com Ready master 2y25d v1.24.6+5157800
master2.hawk2p7-1.example.com Ready master 2y25d v1.24.6+5157800
worker0.hawk2p7-1.example.com NotReady worker 2y25d v1.24.6+5157800
worker1.hawk2p7-1.example.com Ready worker 2y25d v1.24.6+5157800
worker2.hawk2p7-1.example.com Ready infra,worker 2y25d v1.24.6+5157800
worker3.hawk2p7-1.example.com Ready infra,worker 2y25d v1.24.6+5157800
worker4.hawk2p7-1.example.com Ready infra,worker 2y25d v1.24.6+5157800
+ oc get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mysql-1-lwbmx 0/1 ContainerCreating 0 4s <none> worker4.hawk2p7-1.example.com <none> <none>
mysql-1-sxf2z 1/1 Terminating 0 8m25s 10.131.0.11 worker0.hawk2p7-1.example.com <none> <none>
+ grep --color=auto pvc-e38bc1dd-c073-411b-8ff6-fd6325595648
+ oc get volumeattachment
csi-29ef7ca8f875644d6fb78af5135c3b0c72126d92b4fb846bba6c92fbb97209e6 csi.trident.netapp.io pvc-e38bc1dd-c073-411b-8ff6-fd6325595648 worker0.hawk2p7-1.example.com true 8m25s
+ echo
...
+ :
+ date
Tue Dec 20 05:09:02 UTC 2022
+ oc get nodes
NAME STATUS ROLES AGE VERSION
master0.hawk2p7-1.example.com Ready master 2y25d v1.24.6+5157800
master1.hawk2p7-1.example.com Ready master 2y25d v1.24.6+5157800
master2.hawk2p7-1.example.com Ready master 2y25d v1.24.6+5157800
worker0.hawk2p7-1.example.com NotReady worker 2y25d v1.24.6+5157800
worker1.hawk2p7-1.example.com Ready worker 2y25d v1.24.6+5157800
worker2.hawk2p7-1.example.com Ready infra,worker 2y25d v1.24.6+5157800
worker3.hawk2p7-1.example.com Ready infra,worker 2y25d v1.24.6+5157800
worker4.hawk2p7-1.example.com Ready infra,worker 2y25d v1.24.6+5157800
+ oc get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mysql-1-lwbmx 0/1 ContainerCreating 0 10m <none> worker4.hawk2p7-1.example.com <none> <none>
mysql-1-sxf2z 1/1 Terminating 0 19m 10.131.0.11 worker0.hawk2p7-1.example.com <none> <none>
+ oc get volumeattachment
+ grep --color=auto pvc-e38bc1dd-c073-411b-8ff6-fd6325595648
csi-29ef7ca8f875644d6fb78af5135c3b0c72126d92b4fb846bba6c92fbb97209e6 csi.trident.netapp.io pvc-e38bc1dd-c073-411b-8ff6-fd6325595648 worker0.hawk2p7-1.example.com true 19m
+ echo
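The delay before rescheduling is consistent with the default taint-based eviction settings: the node controller marks the node NotReady after node-monitor-grace-period (40s by default), and the pod's default `node.kubernetes.io/not-ready` toleration carries `tolerationSeconds: 300`. The observed gap can be checked against the timestamps above (poweroff issued at 04:51:38, first event for the replacement pod at 04:58:21 per the kube-controller-manager log below):

```shell
# Delta between the poweroff and the first event for the replacement pod
# (timestamps taken from the outputs in this gist; GNU date assumed).
start=$(date -ud '2022-12-20T04:51:38Z' +%s)
end=$(date -ud '2022-12-20T04:58:21Z' +%s)
echo $(( end - start ))   # 403s ~= 40s detection + 300s toleration + scheduling slack
```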
5. Nothing changed after that, so I force-deleted the Terminating pod.
[Tue Dec 20 05:09:06 root@hawk2p7 user]# oc delete pods mysql-1-sxf2z --force
+ oc delete pods mysql-1-sxf2z --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "mysql-1-sxf2z" force deleted
6. 6 minutes later, deletion of the VolumeAttachment started.
[Tue Dec 20 05:09:17 root@hawk2p7 user]# while :; do date; oc get nodes; oc get pods -o wide; oc get volumeattachment | grep pvc-e38bc1dd-c073-411b-8ff6-fd6325595648; echo; sleep 30; done
+ :
+ date
Tue Dec 20 05:09:18 UTC 2022
+ oc get nodes
NAME STATUS ROLES AGE VERSION
master0.hawk2p7-1.example.com Ready master 2y25d v1.24.6+5157800
master1.hawk2p7-1.example.com Ready master 2y25d v1.24.6+5157800
master2.hawk2p7-1.example.com Ready master 2y25d v1.24.6+5157800
worker0.hawk2p7-1.example.com NotReady worker 2y25d v1.24.6+5157800
worker1.hawk2p7-1.example.com Ready worker 2y25d v1.24.6+5157800
worker2.hawk2p7-1.example.com Ready infra,worker 2y25d v1.24.6+5157800
worker3.hawk2p7-1.example.com Ready infra,worker 2y25d v1.24.6+5157800
worker4.hawk2p7-1.example.com Ready infra,worker 2y25d v1.24.6+5157800
+ oc get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mysql-1-lwbmx 0/1 ContainerCreating 0 10m <none> worker4.hawk2p7-1.example.com <none> <none>
+ grep --color=auto pvc-e38bc1dd-c073-411b-8ff6-fd6325595648
+ oc get volumeattachment
csi-29ef7ca8f875644d6fb78af5135c3b0c72126d92b4fb846bba6c92fbb97209e6 csi.trident.netapp.io pvc-e38bc1dd-c073-411b-8ff6-fd6325595648 worker0.hawk2p7-1.example.com true 19m
+ echo
...
+ sleep 30
+ :
+ date
Tue Dec 20 05:15:53 UTC 2022
+ oc get nodes
NAME STATUS ROLES AGE VERSION
master0.hawk2p7-1.example.com Ready master 2y25d v1.24.6+5157800
master1.hawk2p7-1.example.com Ready master 2y25d v1.24.6+5157800
master2.hawk2p7-1.example.com Ready master 2y25d v1.24.6+5157800
worker0.hawk2p7-1.example.com NotReady worker 2y25d v1.24.6+5157800
worker1.hawk2p7-1.example.com Ready worker 2y25d v1.24.6+5157800
worker2.hawk2p7-1.example.com Ready infra,worker 2y25d v1.24.6+5157800
worker3.hawk2p7-1.example.com Ready infra,worker 2y25d v1.24.6+5157800
worker4.hawk2p7-1.example.com Ready infra,worker 2y25d v1.24.6+5157800
+ oc get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mysql-1-lwbmx 1/1 Running 0 17m 10.131.2.24 worker4.hawk2p7-1.example.com <none> <none>
+ oc get volumeattachment
+ grep --color=auto pvc-e38bc1dd-c073-411b-8ff6-fd6325595648
csi-07aaff6373221b1121898d32599ce575bde6db9cb6ae9e66615acc37330a595e csi.trident.netapp.io pvc-e38bc1dd-c073-411b-8ff6-fd6325595648 worker4.hawk2p7-1.example.com true 39s
+ echo
## Logs of the kube-controller-manager
Forced PV detach started at 2022-12-20T05:15:14Z, 6 minutes after the forced pod deletion, because of maxWaitForUnmountDuration (6m0s).
[Tue Dec 20 14:38:59 root@hawk2p7 user]# oc logs -n openshift-kube-controller-manager kube-controller-manager-master2.hawk2p7-1.example.com --all-containers | grep pvc-e38bc1dd-c073-411b-8ff6-fd6325595648 | grep -i -e attach -e detach | grep -v MultiAttachErrorReported
I1220 04:50:01.975114 1 operation_generator.go:398] AttachVolume.Attach succeeded for volume "pvc-e38bc1dd-c073-411b-8ff6-fd6325595648" (UniqueName: "kubernetes.io/csi/csi.trident.netapp.io^pvc-e38bc1dd-c073-411b-8ff6-fd6325595648") from node "worker0.hawk2p7-1.example.com"
I1220 04:50:01.975479 1 event.go:294] "Event occurred" object="testpv/mysql-1-sxf2z" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-e38bc1dd-c073-411b-8ff6-fd6325595648\" "
I1220 04:58:21.505039 1 event.go:294] "Event occurred" object="testpv/mysql-1-lwbmx" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"pvc-e38bc1dd-c073-411b-8ff6-fd6325595648\" Volume is already used by pod(s) mysql-1-sxf2z"
I1220 05:15:14.371928 1 reconciler.go:256] "attacherDetacher.DetachVolume started: this volume is not safe to detach, but maxWaitForUnmountDuration expired, force detaching" duration="6m0s" volume={AttachedVolume:{VolumeName:kubernetes.io/csi/csi.trident.netapp.io^pvc-e38bc1dd-c073-411b-8ff6-fd6325595648 VolumeSpec:0xc0277657b8 NodeName:worker0.hawk2p7-1.example.com PluginIsAttachable:true DevicePath: DeviceMountPath: PluginName:} MountedByNode:true DetachRequestedTime:2022-12-20 05:09:14.260209418 +0000 UTC m=+7596.636983947}
I1220 05:15:14.949106 1 operation_generator.go:513] DetachVolume.Detach succeeded for volume "pvc-e38bc1dd-c073-411b-8ff6-fd6325595648" (UniqueName: "kubernetes.io/csi/csi.trident.netapp.io^pvc-e38bc1dd-c073-411b-8ff6-fd6325595648") on node "worker0.hawk2p7-1.example.com"
I1220 05:15:16.053958 1 operation_generator.go:398] AttachVolume.Attach succeeded for volume "pvc-e38bc1dd-c073-411b-8ff6-fd6325595648" (UniqueName: "kubernetes.io/csi/csi.trident.netapp.io^pvc-e38bc1dd-c073-411b-8ff6-fd6325595648") from node "worker4.hawk2p7-1.example.com"
I1220 05:15:16.054093 1 event.go:294] "Event occurred" object="testpv/mysql-1-lwbmx" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-e38bc1dd-c073-411b-8ff6-fd6325595648\"
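The 6-minute gap can be verified from the two log lines above: DetachRequestedTime was 05:09:14 (the forced pod deletion) and the force detach fired at 05:15:14, exactly maxWaitForUnmountDuration later:

```shell
# Verify the maxWaitForUnmountDuration gap from the kube-controller-manager
# timestamps above (GNU date assumed).
requested=$(date -ud '2022-12-20T05:09:14Z' +%s)
detached=$(date -ud '2022-12-20T05:15:14Z' +%s)
echo $(( detached - requested ))   # 360 seconds = 6m0s
```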
## Logs of trident-controller
ControllerPublishVolume was called at 2022-12-20T04:50:00Z (when the pod was created on the original node); ControllerUnpublishVolume (to unpublish from worker0) and ControllerPublishVolume (to publish to worker4) were called at 2022-12-20T05:15:14Z and 05:15:15Z, 6 minutes after the forced pod deletion.
[Tue Dec 20 05:20:07 root@hawk2p7 user]# oc logs -n trident trident-csi-5885d4fcb4-grqrc --all-containers | grep -e "20[T|.]04:[5][0-9]" -e "20[T|.]05:[01][0-9]" | grep -A 1 -E -e "call: .*(Node|Controller)(Unp|P)ublishVolume" -e "call: .*Node(Uns|S)tageVolume" | grep -B 1 pvc-e38bc1dd-c073-411b-8ff6-fd6325595648
time="2022-12-20T04:50:00Z" level=debug msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" requestID=2074f5db-a9b0-4f7f-b43b-ab87d96d6ec0 requestSource=CSI
time="2022-12-20T04:50:00Z" level=debug msg="GRPC request: volume_id:\"pvc-e38bc1dd-c073-411b-8ff6-fd6325595648\" node_id:\"worker0.hawk2p7-1.example.com\" volume_capability:<mount:<fs_type:\"ext4\" > access_mode:<mode:SINGLE_NODE_WRITER > > volume_context:<key:\"backendUUID\" value:\"791ba80e-d067-4efd-95cd-d4ea7e4a98c4\" > volume_context:<key:\"internalName\" value:\"trident_pvc_e38bc1dd_c073_411b_8ff6_fd6325595648\" > volume_context:<key:\"name\" value:\"pvc-e38bc1dd-c073-411b-8ff6-fd6325595648\" > volume_context:<key:\"protocol\" value:\"block\" > volume_context:<key:\"storage.kubernetes.io/csiProvisionerIdentity\" value:\"1671151654191-8081-csi.trident.netapp.io\" > " requestID=2074f5db-a9b0-4f7f-b43b-ab87d96d6ec0 requestSource=CSI
--
time="2022-12-20T04:50:01Z" level=debug msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" requestID=dbbf3655-a959-45ac-ac76-c69db4b52b38 requestSource=CSI
time="2022-12-20T04:50:01Z" level=debug msg="GRPC request: volume_id:\"pvc-e38bc1dd-c073-411b-8ff6-fd6325595648\" node_id:\"worker0.hawk2p7-1.example.com\" volume_capability:<mount:<fs_type:\"ext4\" > access_mode:<mode:SINGLE_NODE_WRITER > > volume_context:<key:\"backendUUID\" value:\"791ba80e-d067-4efd-95cd-d4ea7e4a98c4\" > volume_context:<key:\"internalName\" value:\"trident_pvc_e38bc1dd_c073_411b_8ff6_fd6325595648\" > volume_context:<key:\"name\" value:\"pvc-e38bc1dd-c073-411b-8ff6-fd6325595648\" > volume_context:<key:\"protocol\" value:\"block\" > volume_context:<key:\"storage.kubernetes.io/csiProvisionerIdentity\" value:\"1671151654191-8081-csi.trident.netapp.io\" > " requestID=dbbf3655-a959-45ac-ac76-c69db4b52b38 requestSource=CSI
--
time="2022-12-20T05:15:14Z" level=debug msg="GRPC call: /csi.v1.Controller/ControllerUnpublishVolume" requestID=a52bd026-98b2-499b-a953-9ae56716c2e5 requestSource=CSI
time="2022-12-20T05:15:14Z" level=debug msg="GRPC request: volume_id:\"pvc-e38bc1dd-c073-411b-8ff6-fd6325595648\" node_id:\"worker0.hawk2p7-1.example.com\" " requestID=a52bd026-98b2-499b-a953-9ae56716c2e5 requestSource=CSI
--
time="2022-12-20T05:15:15Z" level=debug msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" requestID=156b3db0-cdcc-4948-a765-39ff2bd6901d requestSource=CSI
time="2022-12-20T05:15:15Z" level=debug msg="GRPC request: volume_id:\"pvc-e38bc1dd-c073-411b-8ff6-fd6325595648\" node_id:\"worker4.hawk2p7-1.example.com\" volume_capability:<mount:<fs_type:\"ext4\" > access_mode:<mode:SINGLE_NODE_WRITER > > volume_context:<key:\"backendUUID\" value:\"791ba80e-d067-4efd-95cd-d4ea7e4a98c4\" > volume_context:<key:\"internalName\" value:\"trident_pvc_e38bc1dd_c073_411b_8ff6_fd6325595648\" > volume_context:<key:\"name\" value:\"pvc-e38bc1dd-c073-411b-8ff6-fd6325595648\" > volume_context:<key:\"protocol\" value:\"block\" > volume_context:<key:\"storage.kubernetes.io/csiProvisionerIdentity\" value:\"1671151654191-8081-csi.trident.netapp.io\" > " requestID=156b3db0-cdcc-4948-a765-39ff2bd6901d requestSource=CSI
--
time="2022-12-20T05:15:15Z" level=debug msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" requestID=3deaf9e2-8e73-4a1c-a0cc-5552e4bf6e5c requestSource=CSI
time="2022-12-20T05:15:15Z" level=debug msg="GRPC request: volume_id:\"pvc-e38bc1dd-c073-411b-8ff6-fd6325595648\" node_id:\"worker4.hawk2p7-1.example.com\" volume_capability:<mount:<fs_type:\"ext4\" > access_mode:<mode:SINGLE_NODE_WRITER > > volume_context:<key:\"backendUUID\" value:\"791ba80e-d067-4efd-95cd-d4ea7e4a98c4\" > volume_context:<key:\"internalName\" value:\"trident_pvc_e38bc1dd_c073_411b_8ff6_fd6325595648\" > volume_context:<key:\"name\" value:\"pvc-e38bc1dd-c073-411b-8ff6-fd6325595648\" > volume_context:<key:\"protocol\" value:\"block\" > volume_context:<key:\"storage.kubernetes.io/csiProvisionerIdentity\" value:\"1671151654191-8081-csi.trident.netapp.io\" > " requestID=3deaf9e2-8e73-4a1c-a0cc-5552e4bf6e5c requestSource=CSI
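The grep pipeline used above pairs each "GRPC call" line with the request line that follows it (`-A 1`), then keeps only the pairs whose request mentions the target volume (`-B 1`). A self-contained illustration on hypothetical sample lines mimicking the Trident log format:

```shell
# Hypothetical two-line entries in the same call/request shape as the logs above.
cat <<'EOF' > /tmp/sample.log
time=T1 msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume"
time=T1 msg="GRPC request: volume_id:pvc-aaaa node_id:worker0"
time=T2 msg="GRPC call: /csi.v1.Controller/ControllerUnpublishVolume"
time=T2 msg="GRPC request: volume_id:pvc-bbbb node_id:worker0"
EOF
# -A 1 keeps each call line plus the request line after it; -B 1 then keeps
# only the pairs whose request line names the volume of interest.
grep -A 1 "GRPC call:" /tmp/sample.log | grep -B 1 "pvc-bbbb"
# -> prints only the T2 call/request pair
```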
## Logs of the Trident daemonset on worker4, where the evacuated pod ran
NodeStageVolume and NodePublishVolume were called at 2022-12-20T05:15:16Z and 05:15:17Z, 6 minutes after the forced pod deletion.
[Tue Dec 20 05:20:50 root@hawk2p7 user]# oc logs -n trident trident-csi-tss2z --all-containers | grep -e "20[T|.]04:[5][0-9]" -e "20[T|.]05:[01][0-9]" | grep -A 1 -E -e "call: .*(Node|Controller)(Unp|P)ublishVolume" -e "call: .*Node(Uns|S)tageVolume" | grep -B 1 pvc-e38bc1dd-c073-411b-8ff6-fd6325595648
time="2022-12-20T05:15:16Z" level=debug msg="GRPC call: /csi.v1.Node/NodeStageVolume" requestID=33a4270a-066c-4dc4-987b-c48d3b830b6e requestSource=CSI
time="2022-12-20T05:15:16Z" level=debug msg="GRPC request: volume_id:\"pvc-e38bc1dd-c073-411b-8ff6-fd6325595648\" publish_context:<key:\"filesystemType\" value:\"ext4\" > publish_context:<key:\"iscsiIgroup\" value:\"trident-hawk2p7-1\" > publish_context:<key:\"iscsiInterface\" value:\"\" > publish_context:<key:\"iscsiLunNumber\" value:\"0\" > publish_context:<key:\"iscsiLunSerial\" value:\"81E4g]PtsAz/\" > publish_context:<key:\"iscsiTargetIqn\" value:\"iqn.1992-08.com.netapp:sn.d95a51f5ee8311eab3bfd039ea207552:vs.9\" > publish_context:<key:\"iscsiTargetPortalCount\" value:\"4\" > publish_context:<key:\"mountOptions\" value:\"\" > publish_context:<key:\"p1\" value:\"172.16.64.76\" > publish_context:<key:\"p2\" value:\"172.16.64.77\" > publish_context:<key:\"p3\" value:\"172.16.64.74\" > publish_context:<key:\"p4\" value:\"172.16.64.75\" > publish_context:<key:\"protocol\" value:\"block\" > publish_context:<key:\"sharedTarget\" value:\"true\" > publish_context:<key:\"useCHAP\" value:\"false\" > staging_target_path:\"/var/lib/kubelet/plugins/kubernetes.io/csi/csi.trident.netapp.io/8afaf9e972706d4c3d2cb4091a832b42397a9dbc252268db590a552a5b1f39ef/globalmount\" volume_capability:<mount:<fs_type:\"ext4\" > access_mode:<mode:SINGLE_NODE_WRITER > > volume_context:<key:\"backendUUID\" value:\"791ba80e-d067-4efd-95cd-d4ea7e4a98c4\" > volume_context:<key:\"internalName\" value:\"trident_pvc_e38bc1dd_c073_411b_8ff6_fd6325595648\" > volume_context:<key:\"name\" value:\"pvc-e38bc1dd-c073-411b-8ff6-fd6325595648\" > volume_context:<key:\"protocol\" value:\"block\" > volume_context:<key:\"storage.kubernetes.io/csiProvisionerIdentity\" value:\"1671151654191-8081-csi.trident.netapp.io\" > " requestID=33a4270a-066c-4dc4-987b-c48d3b830b6e requestSource=CSI
--
time="2022-12-20T05:15:17Z" level=debug msg="GRPC call: /csi.v1.Node/NodePublishVolume" requestID=29a4e0b2-f402-401c-90ac-de1bc3ca91e2 requestSource=CSI
time="2022-12-20T05:15:17Z" level=debug msg="GRPC request: volume_id:\"pvc-e38bc1dd-c073-411b-8ff6-fd6325595648\" publish_context:<key:\"filesystemType\" value:\"ext4\" > publish_context:<key:\"iscsiIgroup\" value:\"trident-hawk2p7-1\" > publish_context:<key:\"iscsiInterface\" value:\"\" > publish_context:<key:\"iscsiLunNumber\" value:\"0\" > publish_context:<key:\"iscsiLunSerial\" value:\"81E4g]PtsAz/\" > publish_context:<key:\"iscsiTargetIqn\" value:\"iqn.1992-08.com.netapp:sn.d95a51f5ee8311eab3bfd039ea207552:vs.9\" > publish_context:<key:\"iscsiTargetPortalCount\" value:\"4\" > publish_context:<key:\"mountOptions\" value:\"\" > publish_context:<key:\"p1\" value:\"172.16.64.76\" > publish_context:<key:\"p2\" value:\"172.16.64.77\" > publish_context:<key:\"p3\" value:\"172.16.64.74\" > publish_context:<key:\"p4\" value:\"172.16.64.75\" > publish_context:<key:\"protocol\" value:\"block\" > publish_context:<key:\"sharedTarget\" value:\"true\" > publish_context:<key:\"useCHAP\" value:\"false\" > staging_target_path:\"/var/lib/kubelet/plugins/kubernetes.io/csi/csi.trident.netapp.io/8afaf9e972706d4c3d2cb4091a832b42397a9dbc252268db590a552a5b1f39ef/globalmount\" target_path:\"/var/lib/kubelet/pods/727c8d06-4f1b-4e1f-b5d5-1c467f480400/volumes/kubernetes.io~csi/pvc-e38bc1dd-c073-411b-8ff6-fd6325595648/mount\" volume_capability:<mount:<fs_type:\"ext4\" > access_mode:<mode:SINGLE_NODE_WRITER > > volume_context:<key:\"backendUUID\" value:\"791ba80e-d067-4efd-95cd-d4ea7e4a98c4\" > volume_context:<key:\"internalName\" value:\"trident_pvc_e38bc1dd_c073_411b_8ff6_fd6325595648\" > volume_context:<key:\"name\" value:\"pvc-e38bc1dd-c073-411b-8ff6-fd6325595648\" > volume_context:<key:\"protocol\" value:\"block\" > volume_context:<key:\"storage.kubernetes.io/csiProvisionerIdentity\" value:\"1671151654191-8081-csi.trident.netapp.io\" > " requestID=29a4e0b2-f402-401c-90ac-de1bc3ca91e2 requestSource=CSI
## Logs of the Trident daemonset on worker0, the original node that ran the pod
NodeStageVolume and NodePublishVolume were called at 2022-12-20T04:50:09Z and 04:50:11Z, when the original pod with the attached PV was created.
No NodeUnstageVolume or NodeUnpublishVolume log entries were found (the node was powered off, so unstaging never ran).
[root@worker0 trident-main]# pwd
/var/log/pods/trident_trident-csi-jxgww_27257bb5-6d02-4d8a-a026-57655668113c/trident-main
[root@worker0 trident-main]# grep -A 1 -E -e "call: .*(Node|Controller)(Unp|P)ublishVolume" -e "call: .*Node(Uns|S)tageVolume" *.log | grep -B 1 pvc-e38bc1dd-c073-411b-8ff6-fd6325595648
33.log:2022-12-20T04:50:09.015554547+00:00 stderr F time="2022-12-20T04:50:09Z" level=debug msg="GRPC call: /csi.v1.Node/NodeStageVolume" requestID=d09b7d7c-b307-4786-aa14-e5835bbf55db requestSource=CSI
33.log-2022-12-20T04:50:09.015554547+00:00 stderr F time="2022-12-20T04:50:09Z" level=debug msg="GRPC request: volume_id:\"pvc-e38bc1dd-c073-411b-8ff6-fd6325595648\" publish_context:<key:\"filesystemType\" value:\"ext4\" > publish_context:<key:\"iscsiIgroup\" value:\"trident-hawk2p7-1\" > publish_context:<key:\"iscsiInterface\" value:\"\" > publish_context:<key:\"iscsiLunNumber\" value:\"0\" > publish_context:<key:\"iscsiLunSerial\" value:\"81E4g]PtsAz/\" > publish_context:<key:\"iscsiTargetIqn\" value:\"iqn.1992-08.com.netapp:sn.d95a51f5ee8311eab3bfd039ea207552:vs.9\" > publish_context:<key:\"iscsiTargetPortalCount\" value:\"4\" > publish_context:<key:\"mountOptions\" value:\"\" > publish_context:<key:\"p1\" value:\"172.16.64.76\" > publish_context:<key:\"p2\" value:\"172.16.64.77\" > publish_context:<key:\"p3\" value:\"172.16.64.74\" > publish_context:<key:\"p4\" value:\"172.16.64.75\" > publish_context:<key:\"protocol\" value:\"block\" > publish_context:<key:\"sharedTarget\" value:\"true\" > publish_context:<key:\"useCHAP\" value:\"false\" > staging_target_path:\"/var/lib/kubelet/plugins/kubernetes.io/csi/csi.trident.netapp.io/8afaf9e972706d4c3d2cb4091a832b42397a9dbc252268db590a552a5b1f39ef/globalmount\" volume_capability:<mount:<fs_type:\"ext4\" > access_mode:<mode:SINGLE_NODE_WRITER > > volume_context:<key:\"backendUUID\" value:\"791ba80e-d067-4efd-95cd-d4ea7e4a98c4\" > volume_context:<key:\"internalName\" value:\"trident_pvc_e38bc1dd_c073_411b_8ff6_fd6325595648\" > volume_context:<key:\"name\" value:\"pvc-e38bc1dd-c073-411b-8ff6-fd6325595648\" > volume_context:<key:\"protocol\" value:\"block\" > volume_context:<key:\"storage.kubernetes.io/csiProvisionerIdentity\" value:\"1671151654191-8081-csi.trident.netapp.io\" > " requestID=d09b7d7c-b307-4786-aa14-e5835bbf55db requestSource=CSI
--
33.log:2022-12-20T04:50:11.712159189+00:00 stderr F time="2022-12-20T04:50:11Z" level=debug msg="GRPC call: /csi.v1.Node/NodePublishVolume" requestID=77dced93-eede-4aee-8b26-e70c8047c82e requestSource=CSI
33.log-2022-12-20T04:50:11.712159189+00:00 stderr F time="2022-12-20T04:50:11Z" level=debug msg="GRPC request: volume_id:\"pvc-e38bc1dd-c073-411b-8ff6-fd6325595648\" publish_context:<key:\"filesystemType\" value:\"ext4\" > publish_context:<key:\"iscsiIgroup\" value:\"trident-hawk2p7-1\" > publish_context:<key:\"iscsiInterface\" value:\"\" > publish_context:<key:\"iscsiLunNumber\" value:\"0\" > publish_context:<key:\"iscsiLunSerial\" value:\"81E4g]PtsAz/\" > publish_context:<key:\"iscsiTargetIqn\" value:\"iqn.1992-08.com.netapp:sn.d95a51f5ee8311eab3bfd039ea207552:vs.9\" > publish_context:<key:\"iscsiTargetPortalCount\" value:\"4\" > publish_context:<key:\"mountOptions\" value:\"\" > publish_context:<key:\"p1\" value:\"172.16.64.76\" > publish_context:<key:\"p2\" value:\"172.16.64.77\" > publish_context:<key:\"p3\" value:\"172.16.64.74\" > publish_context:<key:\"p4\" value:\"172.16.64.75\" > publish_context:<key:\"protocol\" value:\"block\" > publish_context:<key:\"sharedTarget\" value:\"true\" > publish_context:<key:\"useCHAP\" value:\"false\" > staging_target_path:\"/var/lib/kubelet/plugins/kubernetes.io/csi/csi.trident.netapp.io/8afaf9e972706d4c3d2cb4091a832b42397a9dbc252268db590a552a5b1f39ef/globalmount\" target_path:\"/var/lib/kubelet/pods/6f411457-0e5a-445e-9c78-a4464dae4bc6/volumes/kubernetes.io~csi/pvc-e38bc1dd-c073-411b-8ff6-fd6325595648/mount\" volume_capability:<mount:<fs_type:\"ext4\" > access_mode:<mode:SINGLE_NODE_WRITER > > volume_context:<key:\"backendUUID\" value:\"791ba80e-d067-4efd-95cd-d4ea7e4a98c4\" > volume_context:<key:\"internalName\" value:\"trident_pvc_e38bc1dd_c073_411b_8ff6_fd6325595648\" > volume_context:<key:\"name\" value:\"pvc-e38bc1dd-c073-411b-8ff6-fd6325595648\" > volume_context:<key:\"protocol\" value:\"block\" > volume_context:<key:\"storage.kubernetes.io/csiProvisionerIdentity\" value:\"1671151654191-8081-csi.trident.netapp.io\" > " requestID=77dced93-eede-4aee-8b26-e70c8047c82e requestSource=CSI