Timeouts on calls to the apiserver
In the deployed configuration, the agents hit frequent timeouts when calling the API server:
E1101 15:11:30.266267 1 kubelet_node_status.go:442] Error updating node status, will retry: failed to patch status "{\"status\":{\"$setElementOrder/conditions\":[{\"type\":\"NetworkUnavailable\"},{\"type\":\"MemoryPressure\"},{\"type\":\"DiskPressure\"},{\"type\":\"PIDPressure\"},{\"type\":\"Ready\"}],\"conditions\":[{\"lastHeartbeatTime\":\"2020-11-01T15:11:20Z\",\"type\":\"MemoryPressure\"},{\"lastHeartbeatTime\":\"2020-11-01T15:11:20Z\",\"type\":\"DiskPressure\"},{\"lastHeartbeatTime\":\"2020-11-01T15:11:20Z\",\"type\":\"PIDPressure\"},{\"lastHeartbeatTime\":\"2020-11-01T15:11:20Z\",\"type\":\"Ready\"}],\"images\":[{\"names\":[\"docker.io/rancher/coredns-coredns@sha256:e70c936deab8efed89db66f04847fec137dbb81d5b456e8068b6e71cb770f6c0\",\"docker.io/rancher/coredns-coredns:1.6.9\"],\"sizeBytes\":13403638},{\"names\":[\"docker.io/rancher/local-path-provisioner@sha256:40cb8c984c1759f1860eee088035040f47051c959a6d07cdb126e132c6f43b45\",\"docker.io/rancher/local-path-provisioner:v0.0.14\"],\"sizeBytes\":13367922},{\"names\":[\"docker.io/library/alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a\",\"docker.io/library/alpine:latest\"],\"sizeBytes\":2800534},{\"names\":[\"docker.io/rancher/pause@sha256:d22591b61e9c2b52aecbf07106d5db313c4f178e404d660b32517b18fcbf0144\",\"docker.io/rancher/pause:3.1\"],\"sizeBytes\":326597}]}}" for node "nitrique": Patch "https://127.0.0.1:40089/api/v1/nodes/nitrique/status?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
W1101 15:11:40.407483 1 status_manager.go:566] Failed to update status for pod "coredns-66c464876b-gdw54_kube-system(a836631f-1079-4f44-8fe8-6e2074c5b785)": failed to patch status "{\"metadata\":{\"uid\":\"a836631f-1079-4f44-8fe8-6e2074c5b785\"},\"status\":{\"$setElementOrder/conditions\":[{\"type\":\"Initialized\"},{\"type\":\"Ready\"},{\"type\":\"ContainersReady\"},{\"type\":\"PodScheduled\"}],\"$setElementOrder/podIPs\":[{\"ip\":\"10.21.1.4\"}],\"conditions\":[{\"lastTransitionTime\":\"2020-11-01T12:21:53Z\",\"type\":\"Ready\"},{\"lastTransitionTime\":\"2020-11-01T12:21:53Z\",\"type\":\"ContainersReady\"}],\"containerStatuses\":[{\"containerID\":\"containerd://3653932a73c5610aa62d9273628e82b56d766fb1b890c288012a253b0ca8b837\",\"image\":\"docker.io/rancher/coredns-coredns:1.6.9\",\"imageID\":\"docker.io/rancher/coredns-coredns@sha256:e70c936deab8efed89db66f04847fec137dbb81d5b456e8068b6e71cb770f6c0\",\"lastState\":{\"terminated\":{\"containerID\":\"containerd://2613732e97c847247896b28f53b9f8f786b0f73efe5a21cf44e8cedac1bf790e\",\"exitCode\":255,\"finishedAt\":\"2020-11-01T12:21:45Z\",\"reason\":\"Unknown\",\"startedAt\":\"2020-10-31T14:13:56Z\"}},\"name\":\"coredns\",\"ready\":true,\"restartCount\":1,\"started\":true,\"state\":{\"running\":{\"startedAt\":\"2020-11-01T12:21:52Z\"}}}],\"podIP\":\"10.21.1.4\",\"podIPs\":[{\"ip\":\"10.21.1.4\"},{\"$patch\":\"delete\",\"ip\":\"10.21.1.3\"}]}}" for pod "kube-system"/"coredns-66c464876b-gdw54": Patch "https://127.0.0.1:40089/api/v1/namespaces/kube-system/pods/coredns-66c464876b-gdw54/status": read tcp 127.0.0.1:51762->127.0.0.1:40089: use of closed network connection
E1101 15:11:40.408928 1 kubelet_node_status.go:442] Error updating node status, will retry: failed to patch status "{\"status\":{\"$setElementOrder/conditions\":[{\"type\":\"NetworkUnavailable\"},{\"type\":\"MemoryPressure\"},{\"type\":\"DiskPressure\"},{\"type\":\"PIDPressure\"},{\"type\":\"Ready\"}],\"conditions\":[{\"lastHeartbeatTime\":\"2020-11-01T15:11:30Z\",\"type\":\"MemoryPressure\"},{\"lastHeartbeatTime\":\"2020-11-01T15:11:30Z\",\"type\":\"DiskPressure\"},{\"lastHeartbeatTime\":\"2020-11-01T15:11:30Z\",\"type\":\"PIDPressure\"},{\"lastHeartbeatTime\":\"2020-11-01T15:11:30Z\",\"type\":\"Ready\"}],\"images\":[{\"names\":[\"docker.io/rancher/coredns-coredns@sha256:e70c936deab8efed89db66f04847fec137dbb81d5b456e8068b6e71cb770f6c0\",\"docker.io/rancher/coredns-coredns:1.6.9\"],\"sizeBytes\":13403638},{\"names\":[\"docker.io/rancher/local-path-provisioner@sha256:40cb8c984c1759f1860eee088035040f47051c959a6d07cdb126e132c6f43b45\",\"docker.io/rancher/local-path-provisioner:v0.0.14\"],\"sizeBytes\":13367922},{\"names\":[\"docker.io/library/alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a\",\"docker.io/library/alpine:latest\"],\"sizeBytes\":2800534},{\"names\":[\"docker.io/rancher/pause@sha256:d22591b61e9c2b52aecbf07106d5db313c4f178e404d660b32517b18fcbf0144\",\"docker.io/rancher/pause:3.1\"],\"sizeBytes\":326597}]}}" for node "nitrique": Patch "https://127.0.0.1:40089/api/v1/nodes/nitrique/status?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E1101 15:11:50.412849 1 controller.go:178] failed to update node lease, error: Put "https://127.0.0.1:40089/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/nitrique?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
W1101 15:11:50.561433 1 status_manager.go:566] Failed to update status for pod "test-7c6599864d-9dtlg_tests-kaiyou(6d4b8c7c-73a3-40cb-88df-df394dbb1d9b)": failed to patch status "{\"metadata\":{\"uid\":\"6d4b8c7c-73a3-40cb-88df-df394dbb1d9b\"},\"status\":{\"$setElementOrder/conditions\":[{\"type\":\"Initialized\"},{\"type\":\"Ready\"},{\"type\":\"ContainersReady\"},{\"type\":\"PodScheduled\"}],\"conditions\":[{\"lastProbeTime\":null,\"lastTransitionTime\":\"2020-11-01T15:08:40Z\",\"status\":\"True\",\"type\":\"Initialized\"},{\"lastProbeTime\":null,\"lastTransitionTime\":\"2020-11-01T15:08:42Z\",\"status\":\"True\",\"type\":\"Ready\"},{\"lastProbeTime\":null,\"lastTransitionTime\":\"2020-11-01T15:08:42Z\",\"status\":\"True\",\"type\":\"ContainersReady\"}],\"containerStatuses\":[{\"containerID\":\"containerd://e3eb71a10baaa7a16d54bd42c7fa751cfc166fc9c6a8f226a698f96f1163b1bf\",\"image\":\"docker.io/library/alpine:latest\",\"imageID\":\"docker.io/library/alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a\",\"lastState\":{},\"name\":\"test\",\"ready\":true,\"restartCount\":0,\"started\":true,\"state\":{\"running\":{\"startedAt\":\"2020-11-01T15:08:42Z\"}}}],\"hostIP\":\"10.30.151.137\",\"phase\":\"Running\",\"podIP\":\"10.21.1.6\",\"podIPs\":[{\"ip\":\"10.21.1.6\"}],\"startTime\":\"2020-11-01T15:08:40Z\"}}" for pod "tests-kaiyou"/"test-7c6599864d-9dtlg": Patch "https://127.0.0.1:40089/api/v1/namespaces/tests-kaiyou/pods/test-7c6599864d-9dtlg/status": read tcp 127.0.0.1:51828->127.0.0.1:40089: use of closed network connection
E1101 15:11:50.563446 1 kubelet_node_status.go:442] Error updating node status, will retry: failed to patch status "{\"status\":{\"$setElementOrder/conditions\":[{\"type\":\"NetworkUnavailable\"},{\"type\":\"MemoryPressure\"},{\"type\":\"DiskPressure\"},{\"type\":\"PIDPressure\"},{\"type\":\"Ready\"}],\"conditions\":[{\"lastHeartbeatTime\":\"2020-11-01T15:11:40Z\",\"type\":\"MemoryPressure\"},{\"lastHeartbeatTime\":\"2020-11-01T15:11:40Z\",\"type\":\"DiskPressure\"},{\"lastHeartbeatTime\":\"2020-11-01T15:11:40Z\",\"type\":\"PIDPressure\"},{\"lastHeartbeatTime\":\"2020-11-01T15:11:40Z\",\"type\":\"Ready\"}],\"images\":[{\"names\":[\"docker.io/rancher/coredns-coredns@sha256:e70c936deab8efed89db66f04847fec137dbb81d5b456e8068b6e71cb770f6c0\",\"docker.io/rancher/coredns-coredns:1.6.9\"],\"sizeBytes\":13403638},{\"names\":[\"docker.io/rancher/local-path-provisioner@sha256:40cb8c984c1759f1860eee088035040f47051c959a6d07cdb126e132c6f43b45\",\"docker.io/rancher/local-path-provisioner:v0.0.14\"],\"sizeBytes\":13367922},{\"names\":[\"docker.io/library/alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a\",\"docker.io/library/alpine:latest\"],\"sizeBytes\":2800534},{\"names\":[\"docker.io/rancher/pause@sha256:d22591b61e9c2b52aecbf07106d5db313c4f178e404d660b32517b18fcbf0144\",\"docker.io/rancher/pause:3.1\"],\"sizeBytes\":326597}]}}" for node "nitrique": Patch "https://127.0.0.1:40089/api/v1/nodes/nitrique/status?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E1101 15:11:50.563583 1 kubelet_node_status.go:429] Unable to update node status: update node status exceeds retry count
W1101 15:11:51.126799 1 info.go:53] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id"
E1101 15:12:10.699733 1 kubelet_node_status.go:442] Error updating node status, will retry: failed to patch status "{\"status\":{\"$setElementOrder/conditions\":[{\"type\":\"NetworkUnavailable\"},{\"type\":\"MemoryPressure\"},{\"type\":\"DiskPressure\"},{\"type\":\"PIDPressure\"},{\"type\":\"Ready\"}],\"conditions\":[{\"lastHeartbeatTime\":\"2020-11-01T15:12:00Z\",\"type\":\"MemoryPressure\"},{\"lastHeartbeatTime\":\"2020-11-01T15:12:00Z\",\"type\":\"DiskPressure\"},{\"lastHeartbeatTime\":\"2020-11-01T15:12:00Z\",\"type\":\"PIDPressure\"},{\"lastHeartbeatTime\":\"2020-11-01T15:12:00Z\",\"type\":\"Ready\"}],\"images\":[{\"names\":[\"docker.io/rancher/coredns-coredns@sha256:e70c936deab8efed89db66f04847fec137dbb81d5b456e8068b6e71cb770f6c0\",\"docker.io/rancher/coredns-coredns:1.6.9\"],\"sizeBytes\":13403638},{\"names\":[\"docker.io/rancher/local-path-provisioner@sha256:40cb8c984c1759f1860eee088035040f47051c959a6d07cdb126e132c6f43b45\",\"docker.io/rancher/local-path-provisioner:v0.0.14\"],\"sizeBytes\":13367922},{\"names\":[\"docker.io/library/alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a\",\"docker.io/library/alpine:latest\"],\"sizeBytes\":2800534},{\"names\":[\"docker.io/rancher/pause@sha256:d22591b61e9c2b52aecbf07106d5db313c4f178e404d660b32517b18fcbf0144\",\"docker.io/rancher/pause:3.1\"],\"sizeBytes\":326597}]}}" for node "nitrique": Patch "https://127.0.0.1:40089/api/v1/nodes/nitrique/status?timeout=10s": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
These timeouts do not stop the cluster from working, but they slow it down considerably; worse, the state reported by the apiserver can diverge significantly from reality, lagging by up to several minutes.