
kubelet: Make service environment variables optional #68754

Merged

Conversation

bradhoekstra
Contributor

@bradhoekstra commented Sep 17, 2018

What this PR does / why we need it:
Services should be accessed by DNS, not by environment variables, which are associated with the deprecated --link option in Docker. With this change, those variables can optionally be disabled by setting enableServiceLinks to false in the PodSpec.
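
For illustration only, here is a minimal sketch (not taken from this PR) of how a pod could opt out once the field exists, using the core/v1 Go types; the pod name, container, and image are placeholders:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Opt this pod out of the kubelet-injected *_SERVICE_HOST / *_SERVICE_PORT
	// variables; the field defaults to true, so existing pods are unaffected.
	enableServiceLinks := false

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "no-service-env"},
		Spec: corev1.PodSpec{
			EnableServiceLinks: &enableServiceLinks,
			Containers: []corev1.Container{
				{Name: "app", Image: "busybox", Command: []string{"env"}},
			},
		},
	}
	fmt.Printf("enableServiceLinks=%v\n", *pod.Spec.EnableServiceLinks)
}
```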

Which issue(s) this PR fixes
Fixes #60099

Special notes for your reviewer:

Release note:

Make service environment variables optional

@k8s-ci-robot added the labels size/XL, do-not-merge/release-note-label-needed, cncf-cla: yes, needs-kind, needs-sig, area/kubectl, area/kubelet, kind/api-change, sig/apps, sig/architecture, needs-ok-to-test, sig/cli, and sig/node, and removed the labels needs-sig and needs-kind on Sep 17, 2018
@k8s-ci-robot added the label release-note and removed the label do-not-merge/release-note-label-needed on Sep 17, 2018
@bradhoekstra
Contributor Author

/cc thockin

@bradhoekstra
Contributor Author

/assign thockin

@bradhoekstra changed the title from "Optional service env variables" to "kubelet: Make service environment variables optional" on Sep 21, 2018
@neolit123
Member

The revert PR is on hold here:
#69016

@bradhoekstra
Contributor Author

Apologies for the breakage. I don't understand these tests well enough to know how I could have broken them, and a quick look at the testgrid logs didn't yield any insights.

Let me know if there is anything I can do to help clean this up.

@neolit123
Member

neolit123 commented Sep 25, 2018

@bradhoekstra

Outside of e2e, the breakage is also reproducible locally on a single control-plane node cluster if you apply flannel or weave as the CNI, against a kubelet built from current master.

The kube-proxy and CNI pods stay in a ContainerCreating state, and there is no other relevant information.

Can you test the change locally?

@bradhoekstra
Contributor Author

Ok, I will try to reproduce locally tomorrow.

@thockin
Member

thockin commented Sep 25, 2018

My guess is that those two need some FOO_SERVICE_HOST variable and are not getting it.

@bboreham
Contributor

Weave Net calls InClusterConfig(), which requires the environment variables KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT. I expect kube-proxy is the same.
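
For context, the check below is a rough paraphrase (from memory, not a verbatim copy) of what client-go's rest.InClusterConfig() does with those variables before anything else:

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

var errNotInCluster = errors.New("unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined")

// inClusterHostPort mimics the first thing InClusterConfig checks: without the
// kubelet-injected service environment variables it bails out immediately.
func inClusterHostPort() (string, error) {
	host, port := os.Getenv("KUBERNETES_SERVICE_HOST"), os.Getenv("KUBERNETES_SERVICE_PORT")
	if len(host) == 0 || len(port) == 0 {
		return "", errNotInCluster
	}
	return "https://" + host + ":" + port, nil
}

func main() {
	if addr, err := inClusterHostPort(); err != nil {
		fmt.Println("not in cluster:", err)
	} else {
		fmt.Println("apiserver at", addr)
	}
}
```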

@bboreham
Contributor

Except, since they

> stay in a ContainerCreating state

they won't have run any code yet. So the problem must be inside the kubelet.

You're noticing it on kube-proxy and Weave Net pods because they run in the host namespace; everything else waits on the pod network to be available.

@neolit123
Member

The e2e test failure that I originally reported has turned green, which suggests a flake there:
https://k8s-testgrid.appspot.com/sig-cluster-lifecycle-all#kubeadm-gce-master

At the same time, I'm positive that this change breaks CNI (e.g. weave) on my end.
I tried multiple times by reverting and un-reverting this PR and rebuilding the kubelet.

Known differences between my setup and the e2e setup are:

  • ubuntu 17.10 vs ubuntu 16.04.3
  • docker 17.09 vs 1.12.0
  • weave 2.4.1 vs 2.4.0

I will do some more testing later today to try to narrow this down.

@neolit123
Member

So, testing different CNIs on the setup where the problem can be reproduced:

  • weave 2.4.1 vs 2.4.0 does not seem to matter: both break after this PR but work before it.
  • calico is the same as weave: it breaks after this PR but works before it.
  • flannel seems broken both before and after this PR, so I'm excluding it from the equation for now.

Here are more detailed kubelet logs.


commit: 170dcc2

Log with calico (failing). I see a panic here:

сеп 26 00:39:12 luboitvbox kubelet[7452]: E0926 00:39:12.789657    7452 runtime.go:66] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
сеп 26 00:39:12 luboitvbox kubelet[7452]: /home/lubo-it/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:72
сеп 26 00:39:12 luboitvbox kubelet[7452]: /home/lubo-it/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
сеп 26 00:39:12 luboitvbox kubelet[7452]: /home/lubo-it/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
сеп 26 00:39:12 luboitvbox kubelet[7452]: /usr/local/go/src/runtime/asm_amd64.s:573
сеп 26 00:39:12 luboitvbox kubelet[7452]: /usr/local/go/src/runtime/panic.go:502
сеп 26 00:39:12 luboitvbox kubelet[7452]: /usr/local/go/src/runtime/panic.go:63
сеп 26 00:39:12 luboitvbox kubelet[7452]: /usr/local/go/src/runtime/signal_unix.go:388
сеп 26 00:39:12 luboitvbox kubelet[7452]: /home/lubo-it/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet_pods.go:553
сеп 26 00:39:12 luboitvbox kubelet[7452]: /home/lubo-it/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet_pods.go:456
сеп 26 00:39:12 luboitvbox kubelet[7452]: /home/lubo-it/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/kuberuntime/kuberuntime_container.go:188
сеп 26 00:39:12 luboitvbox kubelet[7452]: /home/lubo-it/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/kuberuntime/kuberuntime_container.go:111
сеп 26 00:39:12 luboitvbox kubelet[7452]: /home/lubo-it/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/kuberuntime/kuberuntime_manager.go:736
сеп 26 00:39:12 luboitvbox kubelet[7452]: /home/lubo-it/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:1664
сеп 26 00:39:12 luboitvbox kubelet[7452]: /home/lubo-it/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:818
сеп 26 00:39:12 luboitvbox kubelet[7452]: /home/lubo-it/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/pod_workers.go:170
сеп 26 00:39:12 luboitvbox kubelet[7452]: /home/lubo-it/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/pod_workers.go:179
сеп 26 00:39:12 luboitvbox kubelet[7452]: /home/lubo-it/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/pod_workers.go:217
сеп 26 00:39:12 luboitvbox kubelet[7452]: /usr/local/go/src/runtime/asm_amd64.s:2361

That would be:

serviceEnv, err := kl.getServiceEnvVarMap(pod.Namespace, *pod.Spec.EnableServiceLinks)
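
Presumably the pointer is nil for pod objects that never went through apiserver defaulting of the new field (e.g. pre-existing pods or static pods). A hypothetical nil-safe guard for that call site could look like the sketch below; the helper name is illustrative, not from the PR:

```go
// v1 here is k8s.io/api/core/v1. Assumes the fallback for an unset pointer
// should be the API default, i.e. service links enabled.
func effectiveEnableServiceLinks(pod *v1.Pod) bool {
	if pod.Spec.EnableServiceLinks != nil {
		return *pod.Spec.EnableServiceLinks
	}
	return true // unset pointer: keep the historical behaviour
}

// serviceEnv, err := kl.getServiceEnvVarMap(pod.Namespace, effectiveEnableServiceLinks(pod))
```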

Full log:

-- 
-- Unit kubelet.service has finished starting up.
-- 
-- The start-up result is done.
сеп 26 00:39:11 luboitvbox kubelet[7452]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
сеп 26 00:39:11 luboitvbox kubelet[7452]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
сеп 26 00:39:11 luboitvbox kubelet[7452]: Flag --resolv-conf has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
сеп 26 00:39:11 luboitvbox kubelet[7452]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
сеп 26 00:39:11 luboitvbox kubelet[7452]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
сеп 26 00:39:11 luboitvbox kubelet[7452]: Flag --resolv-conf has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
сеп 26 00:39:11 luboitvbox kubelet[7452]: I0926 00:39:11.455390    7452 server.go:408] Version: v1.13.0-alpha.0
сеп 26 00:39:11 luboitvbox kubelet[7452]: I0926 00:39:11.455620    7452 plugins.go:99] No cloud provider specified.
сеп 26 00:39:11 luboitvbox kubelet[7452]: I0926 00:39:11.459443    7452 certificate_store.go:131] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
сеп 26 00:39:11 luboitvbox kubelet[7452]: I0926 00:39:11.607140    7452 server.go:667] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
сеп 26 00:39:11 luboitvbox kubelet[7452]: I0926 00:39:11.607549    7452 container_manager_linux.go:247] container manager verified user specified cgroup-root exists: []
сеп 26 00:39:11 luboitvbox kubelet[7452]: I0926 00:39:11.607562    7452 container_manager_linux.go:252] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms}
сеп 26 00:39:11 luboitvbox kubelet[7452]: I0926 00:39:11.607701    7452 container_manager_linux.go:271] Creating device plugin manager: true
сеп 26 00:39:11 luboitvbox kubelet[7452]: I0926 00:39:11.607724    7452 state_mem.go:36] [cpumanager] initializing new in-memory state store
сеп 26 00:39:11 luboitvbox kubelet[7452]: I0926 00:39:11.607805    7452 state_mem.go:84] [cpumanager] updated default cpuset: ""
сеп 26 00:39:11 luboitvbox kubelet[7452]: I0926 00:39:11.607816    7452 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
сеп 26 00:39:11 luboitvbox kubelet[7452]: I0926 00:39:11.607926    7452 kubelet.go:279] Adding pod path: /etc/kubernetes/manifests
сеп 26 00:39:11 luboitvbox kubelet[7452]: I0926 00:39:11.607954    7452 kubelet.go:304] Watching apiserver
сеп 26 00:39:11 luboitvbox kubelet[7452]: I0926 00:39:11.623328    7452 client.go:75] Connecting to docker on unix:///var/run/docker.sock
сеп 26 00:39:11 luboitvbox kubelet[7452]: I0926 00:39:11.623351    7452 client.go:104] Start docker client with request timeout=2m0s
сеп 26 00:39:11 luboitvbox kubelet[7452]: W0926 00:39:11.624620    7452 docker_service.go:540] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
сеп 26 00:39:11 luboitvbox kubelet[7452]: I0926 00:39:11.624639    7452 docker_service.go:236] Hairpin mode set to "hairpin-veth"
сеп 26 00:39:11 luboitvbox kubelet[7452]: W0926 00:39:11.624712    7452 cni.go:188] Unable to update cni config: No networks found in /etc/cni/net.d
сеп 26 00:39:11 luboitvbox kubelet[7452]: W0926 00:39:11.627861    7452 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
сеп 26 00:39:11 luboitvbox kubelet[7452]: W0926 00:39:11.627912    7452 cni.go:188] Unable to update cni config: No networks found in /etc/cni/net.d
сеп 26 00:39:11 luboitvbox kubelet[7452]: I0926 00:39:11.627936    7452 docker_service.go:251] Docker cri networking managed by cni
сеп 26 00:39:11 luboitvbox kubelet[7452]: I0926 00:39:11.686011    7452 docker_service.go:256] Docker Info: &{ID:K2H6:2I6N:FSBZ:S77V:R5CQ:X22B:VYTF:WZ4R:UIKC:HGOT:UCHD:GCR2 Containers:17 ContainersRunning:10 ContainersPaused:0 ContainersStopped:7 Images:110 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:69 OomKillDisable:true NGoroutines:67 SystemTime:2018-09-26T00:39:11.675821507+03:00 LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.13.0-41-generic OperatingSystem:Ubuntu 17.10 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc42095b9d0 NCPU:4 MemTotal:16819949568 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:luboitvbox Labels:[] ExperimentalBuild:false ServerVersion:17.09.0-ce ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:docker-runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:06b9cb35161009dcb7123345749fef02f7cea8e0 Expected:06b9cb35161009dcb7123345749fef02f7cea8e0} RuncCommit:{ID:3f2f8b84a77f73d38244dd690525642a72156c64 Expected:3f2f8b84a77f73d38244dd690525642a72156c64} InitCommit:{ID:949e6fa Expected:949e6fa} SecurityOptions:[name=apparmor name=seccomp,profile=default]}
сеп 26 00:39:11 luboitvbox kubelet[7452]: I0926 00:39:11.686242    7452 docker_service.go:269] Setting cgroupDriver to cgroupfs
сеп 26 00:39:11 luboitvbox kubelet[7452]: I0926 00:39:11.747721    7452 kuberuntime_manager.go:197] Container runtime docker initialized, version: 17.09.0-ce, apiVersion: 1.32.0
сеп 26 00:39:11 luboitvbox kubelet[7452]: I0926 00:39:11.748825    7452 server.go:1013] Started kubelet
сеп 26 00:39:11 luboitvbox kubelet[7452]: I0926 00:39:11.749463    7452 server.go:133] Starting to listen on 0.0.0.0:10250
сеп 26 00:39:11 luboitvbox kubelet[7452]: E0926 00:39:11.749591    7452 kubelet.go:1287] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
сеп 26 00:39:11 luboitvbox kubelet[7452]: I0926 00:39:11.750025    7452 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
сеп 26 00:39:11 luboitvbox kubelet[7452]: I0926 00:39:11.750044    7452 status_manager.go:152] Starting to sync pod status with apiserver
сеп 26 00:39:11 luboitvbox kubelet[7452]: I0926 00:39:11.750094    7452 kubelet.go:1804] Starting kubelet main sync loop.
сеп 26 00:39:11 luboitvbox kubelet[7452]: I0926 00:39:11.750111    7452 kubelet.go:1821] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
сеп 26 00:39:11 luboitvbox kubelet[7452]: I0926 00:39:11.750381    7452 server.go:318] Adding debug handlers to kubelet server.
сеп 26 00:39:11 luboitvbox kubelet[7452]: I0926 00:39:11.750451    7452 volume_manager.go:248] Starting Kubelet Volume Manager
сеп 26 00:39:11 luboitvbox kubelet[7452]: I0926 00:39:11.750569    7452 desired_state_of_world_populator.go:130] Desired state populator starts to run
сеп 26 00:39:11 luboitvbox kubelet[7452]: W0926 00:39:11.751311    7452 cni.go:188] Unable to update cni config: No networks found in /etc/cni/net.d
сеп 26 00:39:11 luboitvbox kubelet[7452]: E0926 00:39:11.751393    7452 kubelet.go:2167] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
сеп 26 00:39:11 luboitvbox kubelet[7452]: I0926 00:39:11.852067    7452 kubelet.go:1821] skipping pod synchronization - [container runtime is down]
сеп 26 00:39:11 luboitvbox kubelet[7452]: I0926 00:39:11.852103    7452 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach
сеп 26 00:39:11 luboitvbox kubelet[7452]: I0926 00:39:11.852125    7452 kuberuntime_manager.go:910] updating runtime config through cri with podcidr 192.168.0.0/24
сеп 26 00:39:11 luboitvbox kubelet[7452]: I0926 00:39:11.852272    7452 docker_service.go:345] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:192.168.0.0/24,},}
сеп 26 00:39:11 luboitvbox kubelet[7452]: I0926 00:39:11.852391    7452 kubelet_network.go:75] Setting Pod CIDR:  -> 192.168.0.0/24
сеп 26 00:39:11 luboitvbox kubelet[7452]: I0926 00:39:11.853986    7452 kubelet_node_status.go:70] Attempting to register node luboitvbox
сеп 26 00:39:11 luboitvbox kubelet[7452]: I0926 00:39:11.867098    7452 kubelet_node_status.go:112] Node luboitvbox was previously registered
сеп 26 00:39:11 luboitvbox kubelet[7452]: I0926 00:39:11.867121    7452 kubelet_node_status.go:73] Successfully registered node luboitvbox
сеп 26 00:39:12 luboitvbox kubelet[7452]: I0926 00:39:12.052172    7452 kubelet.go:1821] skipping pod synchronization - [container runtime is down]
сеп 26 00:39:12 luboitvbox kubelet[7452]: I0926 00:39:12.112761    7452 cpu_manager.go:155] [cpumanager] starting with none policy
сеп 26 00:39:12 luboitvbox kubelet[7452]: I0926 00:39:12.112780    7452 cpu_manager.go:156] [cpumanager] reconciling every 10s
сеп 26 00:39:12 luboitvbox kubelet[7452]: I0926 00:39:12.112789    7452 policy_none.go:42] [cpumanager] none policy: Start
сеп 26 00:39:12 luboitvbox kubelet[7452]: W0926 00:39:12.114656    7452 cni.go:188] Unable to update cni config: No networks found in /etc/cni/net.d
сеп 26 00:39:12 luboitvbox kubelet[7452]: E0926 00:39:12.115162    7452 kubelet.go:2167] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
сеп 26 00:39:12 luboitvbox kubelet[7452]: I0926 00:39:12.555410    7452 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/181fa9b2700f358f4d673b528b0fe278-usr-share-ca-certificates") pod "kube-controller-manager-luboitvbox" (UID: "181fa9b2700f358f4d673b528b0fe278")
сеп 26 00:39:12 luboitvbox kubelet[7452]: I0926 00:39:12.555530    7452 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/95becca1-c10a-11e8-b4d9-080027a4c94f-kube-proxy") pod "kube-proxy-p6b9v" (UID: "95becca1-c10a-11e8-b4d9-080027a4c94f")
сеп 26 00:39:12 luboitvbox kubelet[7452]: I0926 00:39:12.555745    7452 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/0d9e352cf05fa6461000e1e2cb8663c0-usr-share-ca-certificates") pod "kube-apiserver-luboitvbox" (UID: "0d9e352cf05fa6461000e1e2cb8663c0")
сеп 26 00:39:12 luboitvbox kubelet[7452]: I0926 00:39:12.555826    7452 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/d97636b5dd42118dd47d9392325372b2-etcd-data") pod "etcd-luboitvbox" (UID: "d97636b5dd42118dd47d9392325372b2")
сеп 26 00:39:12 luboitvbox kubelet[7452]: I0926 00:39:12.555956    7452 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/181fa9b2700f358f4d673b528b0fe278-k8s-certs") pod "kube-controller-manager-luboitvbox" (UID: "181fa9b2700f358f4d673b528b0fe278")
сеп 26 00:39:12 luboitvbox kubelet[7452]: I0926 00:39:12.556041    7452 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-9rk4n" (UniqueName: "kubernetes.io/secret/95becca1-c10a-11e8-b4d9-080027a4c94f-kube-proxy-token-9rk4n") pod "kube-proxy-p6b9v" (UID: "95becca1-c10a-11e8-b4d9-080027a4c94f")
сеп 26 00:39:12 luboitvbox kubelet[7452]: I0926 00:39:12.556118    7452 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-bin-dir" (UniqueName: "kubernetes.io/host-path/95beca9b-c10a-11e8-b4d9-080027a4c94f-cni-bin-dir") pod "calico-node-crh9n" (UID: "95beca9b-c10a-11e8-b4d9-080027a4c94f")
сеп 26 00:39:12 luboitvbox kubelet[7452]: I0926 00:39:12.556365    7452 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/181fa9b2700f358f4d673b528b0fe278-usr-local-share-ca-certificates") pod "kube-controller-manager-luboitvbox" (UID: "181fa9b2700f358f4d673b528b0fe278")
сеп 26 00:39:12 luboitvbox kubelet[7452]: I0926 00:39:12.556518    7452 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/95becca1-c10a-11e8-b4d9-080027a4c94f-lib-modules") pod "kube-proxy-p6b9v" (UID: "95becca1-c10a-11e8-b4d9-080027a4c94f")
сеп 26 00:39:12 luboitvbox kubelet[7452]: I0926 00:39:12.556704    7452 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "var-lib-calico" (UniqueName: "kubernetes.io/host-path/95beca9b-c10a-11e8-b4d9-080027a4c94f-var-lib-calico") pod "calico-node-crh9n" (UID: "95beca9b-c10a-11e8-b4d9-080027a4c94f")
сеп 26 00:39:12 luboitvbox kubelet[7452]: I0926 00:39:12.556788    7452 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-net-dir" (UniqueName: "kubernetes.io/host-path/95beca9b-c10a-11e8-b4d9-080027a4c94f-cni-net-dir") pod "calico-node-crh9n" (UID: "95beca9b-c10a-11e8-b4d9-080027a4c94f")
сеп 26 00:39:12 luboitvbox kubelet[7452]: I0926 00:39:12.557876    7452 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/181fa9b2700f358f4d673b528b0fe278-kubeconfig") pod "kube-controller-manager-luboitvbox" (UID: "181fa9b2700f358f4d673b528b0fe278")
сеп 26 00:39:12 luboitvbox kubelet[7452]: I0926 00:39:12.558020    7452 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/181fa9b2700f358f4d673b528b0fe278-flexvolume-dir") pod "kube-controller-manager-luboitvbox" (UID: "181fa9b2700f358f4d673b528b0fe278")
сеп 26 00:39:12 luboitvbox kubelet[7452]: I0926 00:39:12.558637    7452 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/86937d9046f8de99802ba8e457c51e8b-kubeconfig") pod "kube-scheduler-luboitvbox" (UID: "86937d9046f8de99802ba8e457c51e8b")
сеп 26 00:39:12 luboitvbox kubelet[7452]: I0926 00:39:12.558730    7452 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/0d9e352cf05fa6461000e1e2cb8663c0-ca-certs") pod "kube-apiserver-luboitvbox" (UID: "0d9e352cf05fa6461000e1e2cb8663c0")
сеп 26 00:39:12 luboitvbox kubelet[7452]: I0926 00:39:12.559215    7452 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/0d9e352cf05fa6461000e1e2cb8663c0-etc-pki") pod "kube-apiserver-luboitvbox" (UID: "0d9e352cf05fa6461000e1e2cb8663c0")
сеп 26 00:39:12 luboitvbox kubelet[7452]: I0926 00:39:12.559455    7452 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/95beca9b-c10a-11e8-b4d9-080027a4c94f-lib-modules") pod "calico-node-crh9n" (UID: "95beca9b-c10a-11e8-b4d9-080027a4c94f")
сеп 26 00:39:12 luboitvbox kubelet[7452]: I0926 00:39:12.559665    7452 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "calico-node-token-w4zds" (UniqueName: "kubernetes.io/secret/95beca9b-c10a-11e8-b4d9-080027a4c94f-calico-node-token-w4zds") pod "calico-node-crh9n" (UID: "95beca9b-c10a-11e8-b4d9-080027a4c94f")
сеп 26 00:39:12 luboitvbox kubelet[7452]: I0926 00:39:12.559896    7452 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/181fa9b2700f358f4d673b528b0fe278-ca-certs") pod "kube-controller-manager-luboitvbox" (UID: "181fa9b2700f358f4d673b528b0fe278")
сеп 26 00:39:12 luboitvbox kubelet[7452]: I0926 00:39:12.560080    7452 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/0d9e352cf05fa6461000e1e2cb8663c0-usr-local-share-ca-certificates") pod "kube-apiserver-luboitvbox" (UID: "0d9e352cf05fa6461000e1e2cb8663c0")
сеп 26 00:39:12 luboitvbox kubelet[7452]: I0926 00:39:12.560167    7452 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/0d9e352cf05fa6461000e1e2cb8663c0-etc-ca-certificates") pod "kube-apiserver-luboitvbox" (UID: "0d9e352cf05fa6461000e1e2cb8663c0")
сеп 26 00:39:12 luboitvbox kubelet[7452]: I0926 00:39:12.560621    7452 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "var-run-calico" (UniqueName: "kubernetes.io/host-path/95beca9b-c10a-11e8-b4d9-080027a4c94f-var-run-calico") pod "calico-node-crh9n" (UID: "95beca9b-c10a-11e8-b4d9-080027a4c94f")
сеп 26 00:39:12 luboitvbox kubelet[7452]: I0926 00:39:12.560717    7452 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/181fa9b2700f358f4d673b528b0fe278-etc-pki") pod "kube-controller-manager-luboitvbox" (UID: "181fa9b2700f358f4d673b528b0fe278")
сеп 26 00:39:12 luboitvbox kubelet[7452]: I0926 00:39:12.560842    7452 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/95becca1-c10a-11e8-b4d9-080027a4c94f-xtables-lock") pod "kube-proxy-p6b9v" (UID: "95becca1-c10a-11e8-b4d9-080027a4c94f")
сеп 26 00:39:12 luboitvbox kubelet[7452]: I0926 00:39:12.561128    7452 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/181fa9b2700f358f4d673b528b0fe278-etc-ca-certificates") pod "kube-controller-manager-luboitvbox" (UID: "181fa9b2700f358f4d673b528b0fe278")
сеп 26 00:39:12 luboitvbox kubelet[7452]: I0926 00:39:12.561530    7452 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/d97636b5dd42118dd47d9392325372b2-etcd-certs") pod "etcd-luboitvbox" (UID: "d97636b5dd42118dd47d9392325372b2")
сеп 26 00:39:12 luboitvbox kubelet[7452]: I0926 00:39:12.561900    7452 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/0d9e352cf05fa6461000e1e2cb8663c0-k8s-certs") pod "kube-apiserver-luboitvbox" (UID: "0d9e352cf05fa6461000e1e2cb8663c0")
сеп 26 00:39:12 luboitvbox kubelet[7452]: I0926 00:39:12.561969    7452 reconciler.go:154] Reconciler: start to sync state
сеп 26 00:39:12 luboitvbox kubelet[7452]: E0926 00:39:12.789657    7452 runtime.go:66] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
сеп 26 00:39:12 luboitvbox kubelet[7452]: /home/lubo-it/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:72
сеп 26 00:39:12 luboitvbox kubelet[7452]: /home/lubo-it/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
сеп 26 00:39:12 luboitvbox kubelet[7452]: /home/lubo-it/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
сеп 26 00:39:12 luboitvbox kubelet[7452]: /usr/local/go/src/runtime/asm_amd64.s:573
сеп 26 00:39:12 luboitvbox kubelet[7452]: /usr/local/go/src/runtime/panic.go:502
сеп 26 00:39:12 luboitvbox kubelet[7452]: /usr/local/go/src/runtime/panic.go:63
сеп 26 00:39:12 luboitvbox kubelet[7452]: /usr/local/go/src/runtime/signal_unix.go:388
сеп 26 00:39:12 luboitvbox kubelet[7452]: /home/lubo-it/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet_pods.go:553
сеп 26 00:39:12 luboitvbox kubelet[7452]: /home/lubo-it/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet_pods.go:456
сеп 26 00:39:12 luboitvbox kubelet[7452]: /home/lubo-it/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/kuberuntime/kuberuntime_container.go:188
сеп 26 00:39:12 luboitvbox kubelet[7452]: /home/lubo-it/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/kuberuntime/kuberuntime_container.go:111
сеп 26 00:39:12 luboitvbox kubelet[7452]: /home/lubo-it/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/kuberuntime/kuberuntime_manager.go:736
сеп 26 00:39:12 luboitvbox kubelet[7452]: /home/lubo-it/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:1664
сеп 26 00:39:12 luboitvbox kubelet[7452]: /home/lubo-it/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:818
сеп 26 00:39:12 luboitvbox kubelet[7452]: /home/lubo-it/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/pod_workers.go:170
сеп 26 00:39:12 luboitvbox kubelet[7452]: /home/lubo-it/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/pod_workers.go:179
сеп 26 00:39:12 luboitvbox kubelet[7452]: /home/lubo-it/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/pod_workers.go:217
сеп 26 00:39:12 luboitvbox kubelet[7452]: /usr/local/go/src/runtime/asm_amd64.s:2361
сеп 26 00:39:12 luboitvbox kubelet[7452]: E0926 00:39:12.804102    7452 runtime.go:66] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
сеп 26 00:39:12 luboitvbox kubelet[7452]: /home/lubo-it/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:72
сеп 26 00:39:12 luboitvbox kubelet[7452]: /home/lubo-it/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
сеп 26 00:39:12 luboitvbox kubelet[7452]: /home/lubo-it/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
сеп 26 00:39:12 luboitvbox kubelet[7452]: /usr/local/go/src/runtime/asm_amd64.s:573
сеп 26 00:39:12 luboitvbox kubelet[7452]: /usr/local/go/src/runtime/panic.go:502
сеп 26 00:39:12 luboitvbox kubelet[7452]: /usr/local/go/src/runtime/panic.go:63
сеп 26 00:39:12 luboitvbox kubelet[7452]: /usr/local/go/src/runtime/signal_unix.go:388
сеп 26 00:39:12 luboitvbox kubelet[7452]: /home/lubo-it/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet_pods.go:553
сеп 26 00:39:12 luboitvbox kubelet[7452]: /home/lubo-it/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet_pods.go:456
сеп 26 00:39:12 luboitvbox kubelet[7452]: /home/lubo-it/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/kuberuntime/kuberuntime_container.go:188
сеп 26 00:39:12 luboitvbox kubelet[7452]: /home/lubo-it/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/kuberuntime/kuberuntime_container.go:111
сеп 26 00:39:12 luboitvbox kubelet[7452]: /home/lubo-it/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/kuberuntime/kuberuntime_manager.go:736
сеп 26 00:39:12 luboitvbox kubelet[7452]: /home/lubo-it/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:1664
сеп 26 00:39:12 luboitvbox kubelet[7452]: /home/lubo-it/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:818
сеп 26 00:39:12 luboitvbox kubelet[7452]: /home/lubo-it/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/pod_workers.go:170
сеп 26 00:39:12 luboitvbox kubelet[7452]: /home/lubo-it/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/pod_workers.go:179
сеп 26 00:39:12 luboitvbox kubelet[7452]: /home/lubo-it/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/pod_workers.go:217
сеп 26 00:39:12 luboitvbox kubelet[7452]: /usr/local/go/src/runtime/asm_amd64.s:2361
сеп 26 00:39:17 luboitvbox kubelet[7452]: W0926 00:39:17.116905    7452 cni.go:188] Unable to update cni config: No networks found in /etc/cni/net.d
сеп 26 00:39:17 luboitvbox kubelet[7452]: E0926 00:39:17.119195    7452 kubelet.go:2167] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
сеп 26 00:39:22 luboitvbox kubelet[7452]: W0926 00:39:22.120703    7452 cni.go:188] Unable to update cni config: No networks found in /etc/cni/net.d
сеп 26 00:39:22 luboitvbox kubelet[7452]: E0926 00:39:22.120986    7452 kubelet.go:2167] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
сеп 26 00:39:27 luboitvbox kubelet[7452]: W0926 00:39:27.124741    7452 cni.go:188] Unable to update cni config: No networks found in /etc/cni/net.d
сеп 26 00:39:27 luboitvbox kubelet[7452]: E0926 00:39:27.126066    7452 kubelet.go:2167] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
сеп 26 00:39:32 luboitvbox kubelet[7452]: W0926 00:39:32.129124    7452 cni.go:188] Unable to update cni config: No networks found in /etc/cni/net.d
сеп 26 00:39:32 luboitvbox kubelet[7452]: E0926 00:39:32.129677    7452 kubelet.go:2167] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
сеп 26 00:39:37 luboitvbox kubelet[7452]: W0926 00:39:37.130971    7452 cni.go:188] Unable to update cni config: No networks found in /etc/cni/net.d
сеп 26 00:39:37 luboitvbox kubelet[7452]: E0926 00:39:37.131061    7452 kubelet.go:2167] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
сеп 26 00:39:42 luboitvbox kubelet[7452]: W0926 00:39:42.134241    7452 cni.go:188] Unable to update cni config: No networks found in /etc/cni/net.d
сеп 26 00:39:42 luboitvbox kubelet[7452]: E0926 00:39:42.134748    7452 kubelet.go:2167] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

commit: 7ffaa2f

Log with calico (working). No panic here:

``` -- Unit kubelet.service has finished starting up. -- -- The start-up result is done. сеп 26 00:48:01 luboitvbox kubelet[13875]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. сеп 26 00:48:01 luboitvbox kubelet[13875]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. сеп 26 00:48:01 luboitvbox kubelet[13875]: Flag --resolv-conf has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. сеп 26 00:48:01 luboitvbox kubelet[13875]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. сеп 26 00:48:01 luboitvbox kubelet[13875]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. сеп 26 00:48:01 luboitvbox kubelet[13875]: Flag --resolv-conf has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. сеп 26 00:48:01 luboitvbox kubelet[13875]: I0926 00:48:01.598346 13875 server.go:408] Version: v1.13.0-alpha.0 сеп 26 00:48:01 luboitvbox kubelet[13875]: I0926 00:48:01.598594 13875 plugins.go:99] No cloud provider specified. сеп 26 00:48:01 luboitvbox kubelet[13875]: I0926 00:48:01.609452 13875 certificate_store.go:131] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". сеп 26 00:48:01 luboitvbox kubelet[13875]: I0926 00:48:01.777090 13875 server.go:667] --cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to / сеп 26 00:48:01 luboitvbox kubelet[13875]: I0926 00:48:01.777502 13875 container_manager_linux.go:247] container manager verified user specified cgroup-root exists: [] сеп 26 00:48:01 luboitvbox kubelet[13875]: I0926 00:48:01.777515 13875 container_manager_linux.go:252] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms} сеп 26 00:48:01 luboitvbox kubelet[13875]: I0926 00:48:01.777654 13875 container_manager_linux.go:271] Creating device plugin manager: true сеп 26 00:48:01 luboitvbox kubelet[13875]: I0926 00:48:01.777703 13875 state_mem.go:36] [cpumanager] initializing new in-memory state store сеп 26 00:48:01 luboitvbox kubelet[13875]: I0926 00:48:01.777783 13875 state_mem.go:84] [cpumanager] updated default cpuset: "" сеп 26 00:48:01 luboitvbox kubelet[13875]: I0926 00:48:01.777794 13875 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]" сеп 26 00:48:01 luboitvbox kubelet[13875]: I0926 00:48:01.777931 13875 kubelet.go:279] Adding pod path: /etc/kubernetes/manifests сеп 26 00:48:01 luboitvbox kubelet[13875]: I0926 00:48:01.777954 13875 kubelet.go:304] Watching apiserver сеп 26 00:48:01 luboitvbox kubelet[13875]: I0926 00:48:01.791209 13875 client.go:75] Connecting to docker on unix:///var/run/docker.sock сеп 26 00:48:01 luboitvbox kubelet[13875]: I0926 00:48:01.791235 13875 client.go:104] Start docker client with request timeout=2m0s сеп 26 00:48:01 luboitvbox kubelet[13875]: W0926 00:48:01.793366 13875 docker_service.go:540] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth" сеп 26 00:48:01 luboitvbox kubelet[13875]: I0926 00:48:01.793414 13875 docker_service.go:236] Hairpin mode set to "hairpin-veth" сеп 26 00:48:01 luboitvbox kubelet[13875]: W0926 00:48:01.793541 13875 cni.go:188] Unable to update cni config: No networks found in /etc/cni/net.d сеп 26 00:48:01 luboitvbox kubelet[13875]: W0926 00:48:01.795739 13875 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup. 
сеп 26 00:48:01 luboitvbox kubelet[13875]: W0926 00:48:01.795810 13875 cni.go:188] Unable to update cni config: No networks found in /etc/cni/net.d сеп 26 00:48:01 luboitvbox kubelet[13875]: I0926 00:48:01.795837 13875 docker_service.go:251] Docker cri networking managed by cni сеп 26 00:48:01 luboitvbox kubelet[13875]: I0926 00:48:01.854913 13875 docker_service.go:256] Docker Info: &{ID:K2H6:2I6N:FSBZ:S77V:R5CQ:X22B:VYTF:WZ4R:UIKC:HGOT:UCHD:GCR2 Containers:20 ContainersRunning:12 ContainersPaused:0 ContainersStopped:8 Images:110 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:79 OomKillDisable:true NGoroutines:75 SystemTime:2018-09-26T00:48:01.844004242+03:00 LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.13.0-41-generic OperatingSystem:Ubuntu 17.10 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc4208563f0 NCPU:4 MemTotal:16819949568 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:luboitvbox Labels:[] ExperimentalBuild:false ServerVersion:17.09.0-ce ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:docker-runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:06b9cb35161009dcb7123345749fef02f7cea8e0 Expected:06b9cb35161009dcb7123345749fef02f7cea8e0} RuncCommit:{ID:3f2f8b84a77f73d38244dd690525642a72156c64 Expected:3f2f8b84a77f73d38244dd690525642a72156c64} InitCommit:{ID:949e6fa Expected:949e6fa} SecurityOptions:[name=apparmor name=seccomp,profile=default]} сеп 26 00:48:01 luboitvbox kubelet[13875]: I0926 00:48:01.855183 13875 docker_service.go:269] Setting cgroupDriver to cgroupfs сеп 26 00:48:01 luboitvbox kubelet[13875]: I0926 00:48:01.917757 13875 kuberuntime_manager.go:197] Container runtime docker initialized, version: 17.09.0-ce, apiVersion: 1.32.0 сеп 26 00:48:01 luboitvbox kubelet[13875]: I0926 00:48:01.919453 13875 server.go:1013] Started kubelet сеп 26 00:48:01 luboitvbox kubelet[13875]: I0926 00:48:01.922754 13875 server.go:133] Starting to listen on 0.0.0.0:10250 сеп 26 00:48:01 luboitvbox kubelet[13875]: I0926 00:48:01.923557 13875 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer сеп 26 00:48:01 luboitvbox kubelet[13875]: I0926 00:48:01.923581 13875 status_manager.go:152] Starting to sync pod status with apiserver сеп 26 00:48:01 luboitvbox kubelet[13875]: I0926 00:48:01.923642 13875 kubelet.go:1804] Starting kubelet main sync loop. сеп 26 00:48:01 luboitvbox kubelet[13875]: I0926 00:48:01.923667 13875 kubelet.go:1821] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s] сеп 26 00:48:01 luboitvbox kubelet[13875]: E0926 00:48:01.923916 13875 kubelet.go:1287] Image garbage collection failed once. 
Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache сеп 26 00:48:01 luboitvbox kubelet[13875]: I0926 00:48:01.924095 13875 volume_manager.go:248] Starting Kubelet Volume Manager сеп 26 00:48:01 luboitvbox kubelet[13875]: I0926 00:48:01.925150 13875 server.go:318] Adding debug handlers to kubelet server. сеп 26 00:48:01 luboitvbox kubelet[13875]: I0926 00:48:01.926397 13875 desired_state_of_world_populator.go:130] Desired state populator starts to run сеп 26 00:48:01 luboitvbox kubelet[13875]: W0926 00:48:01.937413 13875 cni.go:188] Unable to update cni config: No networks found in /etc/cni/net.d сеп 26 00:48:01 luboitvbox kubelet[13875]: E0926 00:48:01.937676 13875 kubelet.go:2167] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.023812 13875 kubelet.go:1821] skipping pod synchronization - [container runtime is down] сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.024365 13875 kuberuntime_manager.go:910] updating runtime config through cri with podcidr 192.168.0.0/24 сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.024802 13875 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.025523 13875 docker_service.go:345] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:192.168.0.0/24,},} сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.025691 13875 kubelet_network.go:75] Setting Pod CIDR: -> 192.168.0.0/24 сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.027468 13875 kubelet_node_status.go:70] Attempting to register node luboitvbox сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.039409 13875 kubelet_node_status.go:112] Node luboitvbox was previously registered сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.039432 13875 kubelet_node_status.go:73] Successfully registered node luboitvbox сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.225075 13875 kubelet.go:1821] skipping pod synchronization - [container runtime is down] сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.316517 13875 cpu_manager.go:155] [cpumanager] starting with none policy сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.316539 13875 cpu_manager.go:156] [cpumanager] reconciling every 10s сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.316548 13875 policy_none.go:42] [cpumanager] none policy: Start сеп 26 00:48:02 luboitvbox kubelet[13875]: W0926 00:48:02.318552 13875 cni.go:188] Unable to update cni config: No networks found in /etc/cni/net.d сеп 26 00:48:02 luboitvbox kubelet[13875]: E0926 00:48:02.319134 13875 kubelet.go:2167] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.636133 13875 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/ab7c73ac-c10c-11e8-a09e-080027a4c94f-kube-proxy") pod "kube-proxy-wj5z2" (UID: "ab7c73ac-c10c-11e8-a09e-080027a4c94f") сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.636730 13875 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" 
(UniqueName: "kubernetes.io/host-path/ab7c73ac-c10c-11e8-a09e-080027a4c94f-xtables-lock") pod "kube-proxy-wj5z2" (UID: "ab7c73ac-c10c-11e8-a09e-080027a4c94f") сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.637083 13875 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/ab7c73ac-c10c-11e8-a09e-080027a4c94f-lib-modules") pod "kube-proxy-wj5z2" (UID: "ab7c73ac-c10c-11e8-a09e-080027a4c94f") сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.637545 13875 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-4cgch" (UniqueName: "kubernetes.io/secret/ab7c73ac-c10c-11e8-a09e-080027a4c94f-kube-proxy-token-4cgch") pod "kube-proxy-wj5z2" (UID: "ab7c73ac-c10c-11e8-a09e-080027a4c94f") сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.741124 13875 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/dc3420acd7ef5015a7395f4bbd2038aa-usr-local-share-ca-certificates") pod "kube-apiserver-luboitvbox" (UID: "dc3420acd7ef5015a7395f4bbd2038aa") сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.741202 13875 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "calico-node-token-z9nfg" (UniqueName: "kubernetes.io/secret/ab7c859b-c10c-11e8-a09e-080027a4c94f-calico-node-token-z9nfg") pod "calico-node-wvc7w" (UID: "ab7c859b-c10c-11e8-a09e-080027a4c94f") сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.741273 13875 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/95124b90405de89db08ea74b39704039-etcd-certs") pod "etcd-luboitvbox" (UID: "95124b90405de89db08ea74b39704039") сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.741453 13875 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/dc3420acd7ef5015a7395f4bbd2038aa-etc-ca-certificates") pod "kube-apiserver-luboitvbox" (UID: "dc3420acd7ef5015a7395f4bbd2038aa") сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.741510 13875 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/3f8b0b3ceb127aed8767a6f3f6eb06e2-usr-share-ca-certificates") pod "kube-controller-manager-luboitvbox" (UID: "3f8b0b3ceb127aed8767a6f3f6eb06e2") сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.741553 13875 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/3f8b0b3ceb127aed8767a6f3f6eb06e2-etc-pki") pod "kube-controller-manager-luboitvbox" (UID: "3f8b0b3ceb127aed8767a6f3f6eb06e2") сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.741589 13875 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/dc3420acd7ef5015a7395f4bbd2038aa-etc-pki") pod "kube-apiserver-luboitvbox" (UID: "dc3420acd7ef5015a7395f4bbd2038aa") сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.741625 13875 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-net-dir" (UniqueName: "kubernetes.io/host-path/ab7c859b-c10c-11e8-a09e-080027a4c94f-cni-net-dir") pod "calico-node-wvc7w" (UID: 
"ab7c859b-c10c-11e8-a09e-080027a4c94f") сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.741668 13875 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/3f8b0b3ceb127aed8767a6f3f6eb06e2-usr-local-share-ca-certificates") pod "kube-controller-manager-luboitvbox" (UID: "3f8b0b3ceb127aed8767a6f3f6eb06e2") сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.741707 13875 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/dc3420acd7ef5015a7395f4bbd2038aa-ca-certs") pod "kube-apiserver-luboitvbox" (UID: "dc3420acd7ef5015a7395f4bbd2038aa") сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.741743 13875 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/ab7c859b-c10c-11e8-a09e-080027a4c94f-lib-modules") pod "calico-node-wvc7w" (UID: "ab7c859b-c10c-11e8-a09e-080027a4c94f") сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.741782 13875 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-bin-dir" (UniqueName: "kubernetes.io/host-path/ab7c859b-c10c-11e8-a09e-080027a4c94f-cni-bin-dir") pod "calico-node-wvc7w" (UID: "ab7c859b-c10c-11e8-a09e-080027a4c94f") сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.741822 13875 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/3f8b0b3ceb127aed8767a6f3f6eb06e2-k8s-certs") pod "kube-controller-manager-luboitvbox" (UID: "3f8b0b3ceb127aed8767a6f3f6eb06e2") сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.741860 13875 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "var-lib-calico" (UniqueName: "kubernetes.io/host-path/ab7c859b-c10c-11e8-a09e-080027a4c94f-var-lib-calico") pod "calico-node-wvc7w" (UID: "ab7c859b-c10c-11e8-a09e-080027a4c94f") сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.741901 13875 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/dc3420acd7ef5015a7395f4bbd2038aa-k8s-certs") pod "kube-apiserver-luboitvbox" (UID: "dc3420acd7ef5015a7395f4bbd2038aa") сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.741939 13875 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/3f8b0b3ceb127aed8767a6f3f6eb06e2-ca-certs") pod "kube-controller-manager-luboitvbox" (UID: "3f8b0b3ceb127aed8767a6f3f6eb06e2") сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.741975 13875 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/90525db8cbedba5b53c290a1512bd8ae-kubeconfig") pod "kube-scheduler-luboitvbox" (UID: "90525db8cbedba5b53c290a1512bd8ae") сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.742009 13875 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/95124b90405de89db08ea74b39704039-etcd-data") pod "etcd-luboitvbox" (UID: "95124b90405de89db08ea74b39704039") сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.742136 13875 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume 
"var-run-calico" (UniqueName: "kubernetes.io/host-path/ab7c859b-c10c-11e8-a09e-080027a4c94f-var-run-calico") pod "calico-node-wvc7w" (UID: "ab7c859b-c10c-11e8-a09e-080027a4c94f") сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.742179 13875 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/3f8b0b3ceb127aed8767a6f3f6eb06e2-etc-ca-certificates") pod "kube-controller-manager-luboitvbox" (UID: "3f8b0b3ceb127aed8767a6f3f6eb06e2") сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.742217 13875 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/3f8b0b3ceb127aed8767a6f3f6eb06e2-flexvolume-dir") pod "kube-controller-manager-luboitvbox" (UID: "3f8b0b3ceb127aed8767a6f3f6eb06e2") сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.742398 13875 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/dc3420acd7ef5015a7395f4bbd2038aa-usr-share-ca-certificates") pod "kube-apiserver-luboitvbox" (UID: "dc3420acd7ef5015a7395f4bbd2038aa") сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.742457 13875 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/3f8b0b3ceb127aed8767a6f3f6eb06e2-kubeconfig") pod "kube-controller-manager-luboitvbox" (UID: "3f8b0b3ceb127aed8767a6f3f6eb06e2") сеп 26 00:48:02 luboitvbox kubelet[13875]: I0926 00:48:02.742495 13875 reconciler.go:154] Reconciler: start to sync state сеп 26 00:48:09 luboitvbox kubelet[13875]: W0926 00:48:09.567929 13875 prober.go:103] No ref for container "docker://9450637eefec99521b5017dc14558cfc0102e25b67d192914dfa5f3df998fc69" (calico-node-wvc7w_kube-system(ab7c859b-c10c-11e8-a09e-080027a4c94f):calico-node) сеп 26 00:48:12 luboitvbox kubelet[13875]: I0926 00:48:12.229843 13875 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ab934deb-c10c-11e8-a09e-080027a4c94f-config-volume") pod "coredns-576cbf47c7-lpmtc" (UID: "ab934deb-c10c-11e8-a09e-080027a4c94f") сеп 26 00:48:12 luboitvbox kubelet[13875]: I0926 00:48:12.230185 13875 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ab946e46-c10c-11e8-a09e-080027a4c94f-config-volume") pod "coredns-576cbf47c7-b7cw8" (UID: "ab946e46-c10c-11e8-a09e-080027a4c94f") сеп 26 00:48:12 luboitvbox kubelet[13875]: I0926 00:48:12.230504 13875 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-zf6k4" (UniqueName: "kubernetes.io/secret/ab934deb-c10c-11e8-a09e-080027a4c94f-coredns-token-zf6k4") pod "coredns-576cbf47c7-lpmtc" (UID: "ab934deb-c10c-11e8-a09e-080027a4c94f") сеп 26 00:48:12 luboitvbox kubelet[13875]: I0926 00:48:12.230826 13875 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-zf6k4" (UniqueName: "kubernetes.io/secret/ab946e46-c10c-11e8-a09e-080027a4c94f-coredns-token-zf6k4") pod "coredns-576cbf47c7-b7cw8" (UID: "ab946e46-c10c-11e8-a09e-080027a4c94f") сеп 26 00:48:12 luboitvbox kubelet[13875]: W0926 00:48:12.379444 13875 raw.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-r73f15d8c374c4d46bb5be18c352c3916.scope": 
0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-r73f15d8c374c4d46bb5be18c352c3916.scope: no such file or directory
сеп 26 00:48:12 luboitvbox kubelet[13875]: 2018-09-26 00:48:12.864 [INFO][14375] calico.go 75: Extracted identifiers EndpointIDs=&utils.WEPIdentifiers{Namespace:"kube-system", WEPName:"", WorkloadEndpointIdentifiers:names.WorkloadEndpointIdentifiers{Node:"luboitvbox", Orchestrator:"k8s", Endpoint:"eth0", Workload:"", Pod:"coredns-576cbf47c7-lpmtc", ContainerID:"6545810bb4051dcd732d735fa7acc2070c5eb1a81e65f94a475b851ce0199a0a"}}
сеп 26 00:48:12 luboitvbox kubelet[13875]: 2018-09-26 00:48:12.878 [INFO][14375] customresource.go 217: Error getting resource Key=ClusterInformation(default) Name="default" Resource="ClusterInformations" Revision="" error=clusterinformations.crd.projectcalico.org "default" not found
сеп 26 00:48:12 luboitvbox kubelet[13875]: E0926 00:48:12.880658 13875 cni.go:310] Error adding network: error getting ClusterInformation: resource does not exist: ClusterInformation(default)
сеп 26 00:48:12 luboitvbox kubelet[13875]: E0926 00:48:12.880681 13875 cni.go:278] Error while adding to cni network: error getting ClusterInformation: resource does not exist: ClusterInformation(default)
сеп 26 00:48:12 luboitvbox kubelet[13875]: E0926 00:48:12.906845 13875 cni.go:310] Error adding network: error getting ClusterInformation: resource does not exist: ClusterInformation(default)
сеп 26 00:48:12 luboitvbox kubelet[13875]: E0926 00:48:12.906874 13875 cni.go:278] Error while adding to cni network: error getting ClusterInformation: resource does not exist: ClusterInformation(default)
сеп 26 00:48:13 luboitvbox kubelet[13875]: E0926 00:48:13.037157 13875 cni.go:330] Error deleting network: error getting ClusterInformation: resource does not exist: ClusterInformation(default)
сеп 26 00:48:13 luboitvbox kubelet[13875]: E0926 00:48:13.196723 13875 remote_runtime.go:96] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = [failed to set up sandbox container "6545810bb4051dcd732d735fa7acc2070c5eb1a81e65f94a475b851ce0199a0a" network for pod "coredns-576cbf47c7-lpmtc": NetworkPlugin cni failed to set up pod "coredns-576cbf47c7-lpmtc_kube-system" network: error getting ClusterInformation: resource does not exist: ClusterInformation(default), failed to clean up sandbox container "6545810bb4051dcd732d735fa7acc2070c5eb1a81e65f94a475b851ce0199a0a" network for pod "coredns-576cbf47c7-lpmtc": NetworkPlugin cni failed to teardown pod "coredns-576cbf47c7-lpmtc_kube-system" network: error getting ClusterInformation: resource does not exist: ClusterInformation(default)]
сеп 26 00:48:13 luboitvbox kubelet[13875]: W0926 00:48:13.312440 13875 docker_sandbox.go:375] failed to read pod IP from plugin/docker: NetworkPlugin cni failed on the status hook for pod "coredns-576cbf47c7-lpmtc_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "6545810bb4051dcd732d735fa7acc2070c5eb1a81e65f94a475b851ce0199a0a"
сеп 26 00:48:13 luboitvbox kubelet[13875]: W0926 00:48:13.314495 13875 pod_container_deletor.go:75] Container "6545810bb4051dcd732d735fa7acc2070c5eb1a81e65f94a475b851ce0199a0a" not found in pod's containers
сеп 26 00:48:13 luboitvbox kubelet[13875]: W0926 00:48:13.318100 13875 cni.go:293] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "6545810bb4051dcd732d735fa7acc2070c5eb1a81e65f94a475b851ce0199a0a"
сеп 26 00:48:13 luboitvbox kubelet[13875]: E0926 00:48:13.466405 13875 remote_runtime.go:119] StopPodSandbox "6545810bb4051dcd732d735fa7acc2070c5eb1a81e65f94a475b851ce0199a0a" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod "coredns-576cbf47c7-lpmtc_kube-system" network: error getting ClusterInformation: resource does not exist: ClusterInformation(default)
сеп 26 00:48:13 luboitvbox kubelet[13875]: E0926 00:48:13.466870 13875 kuberuntime_manager.go:810] Failed to stop sandbox {"docker" "6545810bb4051dcd732d735fa7acc2070c5eb1a81e65f94a475b851ce0199a0a"}
сеп 26 00:48:15 luboitvbox kubelet[13875]: E0926 00:48:15.567823 13875 remote_runtime.go:119] StopPodSandbox "6ae1edf62a9043a8e03b7b5cbf9cb99570c0d89b5d82e7d23464b38b2c0bb74c" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod "coredns-576cbf47c7-b7cw8_kube-system" network: error getting ClusterInformation: resource does not exist: ClusterInformation(default)
сеп 26 00:48:40 luboitvbox kubelet[13875]: 2018-09-26 00:48:40.086 [INFO][14915] calico.go 431: Extracted identifiers ContainerID="6545810bb4051dcd732d735fa7acc2070c5eb1a81e65f94a475b851ce0199a0a" Node="luboitvbox" Orchestrator="k8s" WorkloadEndpoint="luboitvbox-k8s-coredns--576cbf47c7--lpmtc-eth0"
сеп 26 00:48:40 luboitvbox kubelet[13875]: 2018-09-26 00:48:40.135 [WARNING][14915] workloadendpoint.go 72: Operation Delete is not supported on WorkloadEndpoint type
сеп 26 00:48:40 luboitvbox kubelet[13875]: 2018-09-26 00:48:40.135 [INFO][14915] k8s.go 347: Endpoint deletion will be handled by Kubernetes deletion of the Pod.
ContainerID="6545810bb4051dcd732d735fa7acc2070c5eb1a81e65f94a475b851ce0199a0a"
сеп 26 00:48:40 luboitvbox kubelet[13875]: 2018-09-26 00:48:40.135 [INFO][14915] k8s.go 354: Releasing IP address(es) ContainerID="6545810bb4051dcd732d735fa7acc2070c5eb1a81e65f94a475b851ce0199a0a"
сеп 26 00:48:40 luboitvbox kubelet[13875]: Calico CNI releasing IP address
сеп 26 00:48:40 luboitvbox kubelet[13875]: 2018-09-26 00:48:40.136 [INFO][14915] utils.go 149: Using a dummy podCidr to release the IP ContainerID="6545810bb4051dcd732d735fa7acc2070c5eb1a81e65f94a475b851ce0199a0a" podCidr="0.0.0.0/0"
сеп 26 00:48:40 luboitvbox kubelet[13875]: 2018-09-26 00:48:40.154 [INFO][14915] k8s.go 358: Cleaning up netns ContainerID="6545810bb4051dcd732d735fa7acc2070c5eb1a81e65f94a475b851ce0199a0a"
сеп 26 00:48:40 luboitvbox kubelet[13875]: 2018-09-26 00:48:40.154 [INFO][14915] k8s.go 370: Teardown processing complete. ContainerID="6545810bb4051dcd732d735fa7acc2070c5eb1a81e65f94a475b851ce0199a0a"
сеп 26 00:48:40 luboitvbox kubelet[13875]: 2018-09-26 00:48:40.560 [INFO][14999] calico.go 75: Extracted identifiers EndpointIDs=&utils.WEPIdentifiers{Namespace:"kube-system", WEPName:"", WorkloadEndpointIdentifiers:names.WorkloadEndpointIdentifiers{Node:"luboitvbox", Orchestrator:"k8s", Endpoint:"eth0", Workload:"", Pod:"coredns-576cbf47c7-lpmtc", ContainerID:"cd7f58c5a0db834a561efa86fa2b82c1f9924a28c2b5f48aeba58606122066f5"}}
сеп 26 00:48:40 luboitvbox kubelet[13875]: 2018-09-26 00:48:40.620 [INFO][14999] k8s.go 60: Extracted identifiers for CmdAddK8s ContainerID="cd7f58c5a0db834a561efa86fa2b82c1f9924a28c2b5f48aeba58606122066f5" Namespace="kube-system" Pod="coredns-576cbf47c7-lpmtc" WorkloadEndpoint="luboitvbox-k8s-coredns--576cbf47c7--lpmtc-eth0"
сеп 26 00:48:40 luboitvbox kubelet[13875]: Calico CNI fetching podCidr from Kubernetes
сеп 26 00:48:40 luboitvbox kubelet[13875]: 2018-09-26 00:48:40.635 [INFO][14999] k8s.go 83: Fetched podCidr ContainerID="cd7f58c5a0db834a561efa86fa2b82c1f9924a28c2b5f48aeba58606122066f5" Namespace="kube-system" Pod="coredns-576cbf47c7-lpmtc" WorkloadEndpoint="luboitvbox-k8s-coredns--576cbf47c7--lpmtc-eth0" podCidr="192.168.0.0/24"
сеп 26 00:48:40 luboitvbox kubelet[13875]: Calico CNI passing podCidr to host-local IPAM: 192.168.0.0/24
сеп 26 00:48:40 luboitvbox kubelet[13875]: Calico CNI using IPs: [192.168.0.4/32]
сеп 26 00:48:40 luboitvbox kubelet[13875]: 2018-09-26 00:48:40.639 [INFO][14999] network.go 31: Setting the host side veth name to calic64d0c27a36 ContainerID="cd7f58c5a0db834a561efa86fa2b82c1f9924a28c2b5f48aeba58606122066f5" Namespace="kube-system" Pod="coredns-576cbf47c7-lpmtc" WorkloadEndpoint="luboitvbox-k8s-coredns--576cbf47c7--lpmtc-eth0"
сеп 26 00:48:40 luboitvbox kubelet[13875]: 2018-09-26 00:48:40.659 [INFO][14999] network.go 326: Disabling IPv4 forwarding ContainerID="cd7f58c5a0db834a561efa86fa2b82c1f9924a28c2b5f48aeba58606122066f5" Namespace="kube-system" Pod="coredns-576cbf47c7-lpmtc" WorkloadEndpoint="luboitvbox-k8s-coredns--576cbf47c7--lpmtc-eth0"
сеп 26 00:48:40 luboitvbox kubelet[13875]: 2018-09-26 00:48:40.729 [INFO][14999] k8s.go 302: Wrote updated endpoint to datastore ContainerID="cd7f58c5a0db834a561efa86fa2b82c1f9924a28c2b5f48aeba58606122066f5" Namespace="kube-system" Pod="coredns-576cbf47c7-lpmtc" WorkloadEndpoint="luboitvbox-k8s-coredns--576cbf47c7--lpmtc-eth0"
сеп 26 00:48:40 luboitvbox kubelet[13875]: W0926 00:48:40.928201 13875 cni.go:293] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "6ae1edf62a9043a8e03b7b5cbf9cb99570c0d89b5d82e7d23464b38b2c0bb74c"
сеп 26 00:48:41 luboitvbox kubelet[13875]: 2018-09-26 00:48:41.114 [INFO][15096] calico.go 431: Extracted identifiers ContainerID="6ae1edf62a9043a8e03b7b5cbf9cb99570c0d89b5d82e7d23464b38b2c0bb74c" Node="luboitvbox" Orchestrator="k8s" WorkloadEndpoint="luboitvbox-k8s-coredns--576cbf47c7--b7cw8-eth0"
сеп 26 00:48:41 luboitvbox kubelet[13875]: 2018-09-26 00:48:41.188 [WARNING][15096] workloadendpoint.go 72: Operation Delete is not supported on WorkloadEndpoint type
сеп 26 00:48:41 luboitvbox kubelet[13875]: 2018-09-26 00:48:41.189 [INFO][15096] k8s.go 347: Endpoint deletion will be handled by Kubernetes deletion of the Pod. ContainerID="6ae1edf62a9043a8e03b7b5cbf9cb99570c0d89b5d82e7d23464b38b2c0bb74c"
сеп 26 00:48:41 luboitvbox kubelet[13875]: 2018-09-26 00:48:41.189 [INFO][15096] k8s.go 354: Releasing IP address(es) ContainerID="6ae1edf62a9043a8e03b7b5cbf9cb99570c0d89b5d82e7d23464b38b2c0bb74c"
сеп 26 00:48:41 luboitvbox kubelet[13875]: Calico CNI releasing IP address
сеп 26 00:48:41 luboitvbox kubelet[13875]: 2018-09-26 00:48:41.190 [INFO][15096] utils.go 149: Using a dummy podCidr to release the IP ContainerID="6ae1edf62a9043a8e03b7b5cbf9cb99570c0d89b5d82e7d23464b38b2c0bb74c" podCidr="0.0.0.0/0"
сеп 26 00:48:41 luboitvbox kubelet[13875]: 2018-09-26 00:48:41.192
[INFO][15096] k8s.go 358: Cleaning up netns ContainerID="6ae1edf62a9043a8e03b7b5cbf9cb99570c0d89b5d82e7d23464b38b2c0bb74c" сеп 26 00:48:41 luboitvbox kubelet[13875]: 2018-09-26 00:48:41.192 [INFO][15096] k8s.go 370: Teardown processing complete. ContainerID="6ae1edf62a9043a8e03b7b5cbf9cb99570c0d89b5d82e7d23464b38b2c0bb74c" сеп 26 00:48:41 luboitvbox kubelet[13875]: 2018-09-26 00:48:41.576 [INFO][15201] calico.go 75: Extracted identifiers EndpointIDs=&utils.WEPIdentifiers{Namespace:"kube-system", WEPName:"", WorkloadEndpointIdentifiers:names.WorkloadEndpointIdentifiers{Node:"luboitvbox", Orchestrator:"k8s", Endpoint:"eth0", Workload:"", Pod:"coredns-576cbf47c7-b7cw8", ContainerID:"5fd4ffcfa89a287feb80d1f40f781442e6a1cc5078b016cde9ecf562759af146"}} сеп 26 00:48:41 luboitvbox kubelet[13875]: 2018-09-26 00:48:41.576 [INFO][15201] calico.go 76: Loaded CNI NetConf NetConfg=types.NetConf{CNIVersion:"0.3.0", Name:"k8s-pod-network", Type:"calico", IPAM:struct { Name string; Type string "json:\"type\""; Subnet string "json:\"subnet\""; AssignIpv4 *string "json:\"assign_ipv4\""; AssignIpv6 *string "json:\"assign_ipv6\""; IPv4Pools []string "json:\"ipv4_pools,omitempty\""; IPv6Pools []string "json:\"ipv6_pools,omitempty\"" }{Name:"", Type:"host-local", Subnet:"usePodCidr", AssignIpv4:(*string)(nil), AssignIpv6:(*string)(nil), IPv4Pools:[]string(nil), IPv6Pools:[]string(nil)}, Args:types.Args{Mesos:types.Mesos{NetworkInfo:types.NetworkInfo{Name:"", Labels:struct { Labels []struct { Key string "json:\"key\""; Value string "json:\"value\"" } "json:\"labels,omitempty\"" }{Labels:[]struct { Key string "json:\"key\""; Value string "json:\"value\"" }(nil)}}}}, MTU:1500, Nodename:"luboitvbox", NodenameFileOptional:false, DatastoreType:"kubernetes", EtcdEndpoints:"", LogLevel:"info", Policy:types.Policy{PolicyType:"k8s", K8sAPIRoot:"", K8sAuthToken:"", K8sClientCertificate:"", K8sClientKey:"", K8sCertificateAuthority:""}, Kubernetes:types.Kubernetes{K8sAPIRoot:"", Kubeconfig:"/etc/cni/net.d/calico-kubeconfig", NodeName:""}, EtcdScheme:"", EtcdKeyFile:"", EtcdCertFile:"", EtcdCaCertFile:"", ContainerSettings:types.ContainerSettings{AllowIPForwarding:false}, EtcdAuthority:"", Hostname:""} сеп 26 00:48:41 luboitvbox kubelet[13875]: 2018-09-26 00:48:41.577 [INFO][15201] utils.go 379: Configured environment: [CNI_COMMAND=ADD CNI_CONTAINERID=5fd4ffcfa89a287feb80d1f40f781442e6a1cc5078b016cde9ecf562759af146 CNI_NETNS=/proc/15144/ns/net CNI_ARGS=IgnoreUnknown=1;IgnoreUnknown=1;K8S_POD_NAMESPACE=kube-system;K8S_POD_NAME=coredns-576cbf47c7-b7cw8;K8S_POD_INFRA_CONTAINER_ID=5fd4ffcfa89a287feb80d1f40f781442e6a1cc5078b016cde9ecf562759af146 CNI_IFNAME=eth0 CNI_PATH=/opt/cni/bin LANG=en_US.UTF-8 LC_ADDRESS=bg_BG.UTF-8 LC_IDENTIFICATION=bg_BG.UTF-8 LC_MEASUREMENT=bg_BG.UTF-8 LC_MONETARY=bg_BG.UTF-8 LC_NAME=bg_BG.UTF-8 LC_NUMERIC=bg_BG.UTF-8 LC_PAPER=bg_BG.UTF-8 LC_TELEPHONE=bg_BG.UTF-8 LC_TIME=bg_BG.UTF-8 PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin INVOCATION_ID=e8c98fe731db4e1aa7157bb112abece7 JOURNAL_STREAM=9:105263 KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml --fail-swap-on=false KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --network-plugin=cni --resolv-conf=/run/systemd/resolve/resolv.conf DATASTORE_TYPE=kubernetes KUBECONFIG=/etc/cni/net.d/calico-kubeconfig] сеп 26 00:48:41 luboitvbox kubelet[13875]: 2018-09-26 00:48:41.643 [INFO][15201] calico.go 167: Calico 
CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {luboitvbox-k8s-coredns--576cbf47c7--b7cw8-eth0 kube-system ab946e46-c10c-11e8-a09e-080027a4c94f 478 0 2018-09-26 00:47:59 +0300 EEST map[k8s-app:kube-dns pod-template-hash:576cbf47c7 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s] map[] [] nil [] } {k8s luboitvbox coredns-576cbf47c7-b7cw8 eth0 [] [] [kns.kube-system] cali977d95954c9 [{dns UDP 53} {dns-tcp TCP 53} {metrics TCP 9153}]}} ContainerID="5fd4ffcfa89a287feb80d1f40f781442e6a1cc5078b016cde9ecf562759af146" Namespace="kube-system" Pod="coredns-576cbf47c7-b7cw8" WorkloadEndpoint="luboitvbox-k8s-coredns--576cbf47c7--b7cw8-" сеп 26 00:48:41 luboitvbox kubelet[13875]: 2018-09-26 00:48:41.643 [INFO][15201] k8s.go 60: Extracted identifiers for CmdAddK8s ContainerID="5fd4ffcfa89a287feb80d1f40f781442e6a1cc5078b016cde9ecf562759af146" Namespace="kube-system" Pod="coredns-576cbf47c7-b7cw8" WorkloadEndpoint="luboitvbox-k8s-coredns--576cbf47c7--b7cw8-eth0" сеп 26 00:48:41 luboitvbox kubelet[13875]: Calico CNI fetching podCidr from Kubernetes сеп 26 00:48:41 luboitvbox kubelet[13875]: 2018-09-26 00:48:41.658 [INFO][15201] k8s.go 83: Fetched podCidr ContainerID="5fd4ffcfa89a287feb80d1f40f781442e6a1cc5078b016cde9ecf562759af146" Namespace="kube-system" Pod="coredns-576cbf47c7-b7cw8" WorkloadEndpoint="luboitvbox-k8s-coredns--576cbf47c7--b7cw8-eth0" podCidr="192.168.0.0/24" сеп 26 00:48:41 luboitvbox kubelet[13875]: Calico CNI passing podCidr to host-local IPAM: 192.168.0.0/24 сеп 26 00:48:41 luboitvbox kubelet[13875]: 2018-09-26 00:48:41.661 [INFO][15201] k8s.go 660: pod info &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:coredns-576cbf47c7-b7cw8,GenerateName:coredns-576cbf47c7-,Namespace:kube-system,SelfLink:/api/v1/namespaces/kube-system/pods/coredns-576cbf47c7-b7cw8,UID:ab946e46-c10c-11e8-a09e-080027a4c94f,ResourceVersion:478,Generation:0,CreationTimestamp:2018-09-26 00:47:59 +0300 EEST,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{k8s-app: kube-dns,pod-template-hash: 576cbf47c7,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet coredns-576cbf47c7 ab8569c0-c10c-11e8-a09e-080027a4c94f 0xc420403c07 0xc420403c08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{config-volume {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:coredns,},Items:[{Corefile Corefile }],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil}} {coredns-token-zf6k4 {nil nil nil nil nil &SecretVolumeSource{SecretName:coredns-token-zf6k4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{coredns k8s.gcr.io/coredns:1.2.2 [] [-conf /etc/coredns/Corefile] [{dns 0 53 UDP } {dns-tcp 0 53 TCP } {metrics 0 9153 TCP }] [] [] {map[memory:{{178257920 0} {} 170Mi BinarySI}] map[cpu:{{100 -3} {} 100m DecimalSI} memory:{{73400320 0} {} 70Mi BinarySI}]} [{config-volume true /etc/coredns } {coredns-token-zf6k4 true /var/run/secrets/kubernetes.io/serviceaccount }] Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:8080,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,} nil nil /dev/termination-log File IfNotPresent 
&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[all],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil сеп 26 00:48:41 luboitvbox kubelet[13875]: ,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:Default,NodeSelector:map[string]string{},ServiceAccountName:coredns,DeprecatedServiceAccount:coredns,NodeName:luboitvbox,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{CriticalAddonsOnly Exists } {node-role.kubernetes.io/master NoSchedule } {node.kubernetes.io/not-ready Exists NoExecute 0xc420293680} {node.kubernetes.io/unreachable Exists NoExecute 0xc420293900}],HostAliases:[],PriorityClassName:,Priority:*0,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-09-26 00:48:12 +0300 EEST } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-09-26 00:48:12 +0300 EEST ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2018-09-26 00:48:12 +0300 EEST ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-09-26 00:48:12 +0300 EEST }],Message:,Reason:,HostIP:192.168.0.102,PodIP:,StartTime:2018-09-26 00:48:12 +0300 EEST,ContainerStatuses:[{coredns {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 k8s.gcr.io/coredns:1.2.2 }],QOSClass:Burstable,InitContainerStatuses:[],},} сеп 26 00:48:41 luboitvbox kubelet[13875]: 2018-09-26 00:48:41.664 [INFO][15201] k8s.go 267: Populated endpoint ContainerID="5fd4ffcfa89a287feb80d1f40f781442e6a1cc5078b016cde9ecf562759af146" Namespace="kube-system" Pod="coredns-576cbf47c7-b7cw8" WorkloadEndpoint="luboitvbox-k8s-coredns--576cbf47c7--b7cw8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"luboitvbox-k8s-coredns--576cbf47c7--b7cw8-eth0", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"ab946e46-c10c-11e8-a09e-080027a4c94f", ResourceVersion:"478", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63673508879, loc:(*time.Location)(0x1ec6320)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"576cbf47c7", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"luboitvbox", ContainerID:"", Pod:"coredns-576cbf47c7-b7cw8", Endpoint:"eth0", IPNetworks:[]string{"192.168.0.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system"}, InterfaceName:"cali977d95954c9", MAC:"", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35}, v3.EndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35}, v3.EndpointPort{Name:"metrics", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1}}}} сеп 26 00:48:41 luboitvbox kubelet[13875]: Calico CNI using IPs: [192.168.0.5/32] сеп 26 00:48:41 luboitvbox kubelet[13875]: 2018-09-26 00:48:41.665 [INFO][15201] network.go 31: Setting the host side veth name to cali977d95954c9 ContainerID="5fd4ffcfa89a287feb80d1f40f781442e6a1cc5078b016cde9ecf562759af146" Namespace="kube-system" Pod="coredns-576cbf47c7-b7cw8" WorkloadEndpoint="luboitvbox-k8s-coredns--576cbf47c7--b7cw8-eth0" сеп 26 00:48:41 luboitvbox kubelet[13875]: 2018-09-26 00:48:41.666 [INFO][15201] network.go 326: Disabling IPv4 forwarding ContainerID="5fd4ffcfa89a287feb80d1f40f781442e6a1cc5078b016cde9ecf562759af146" Namespace="kube-system" Pod="coredns-576cbf47c7-b7cw8" WorkloadEndpoint="luboitvbox-k8s-coredns--576cbf47c7--b7cw8-eth0" сеп 26 00:48:41 luboitvbox kubelet[13875]: 2018-09-26 00:48:41.684 [INFO][15201] k8s.go 294: Added Mac, interface name, and active container ID to endpoint ContainerID="5fd4ffcfa89a287feb80d1f40f781442e6a1cc5078b016cde9ecf562759af146" Namespace="kube-system" Pod="coredns-576cbf47c7-b7cw8" WorkloadEndpoint="luboitvbox-k8s-coredns--576cbf47c7--b7cw8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"luboitvbox-k8s-coredns--576cbf47c7--b7cw8-eth0", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"ab946e46-c10c-11e8-a09e-080027a4c94f", ResourceVersion:"478", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63673508879, loc:(*time.Location)(0x1ec6320)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"576cbf47c7", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"luboitvbox", ContainerID:"5fd4ffcfa89a287feb80d1f40f781442e6a1cc5078b016cde9ecf562759af146", Pod:"coredns-576cbf47c7-b7cw8", Endpoint:"eth0", IPNetworks:[]string{"192.168.0.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system"}, InterfaceName:"cali977d95954c9", MAC:"76:15:ee:38:4b:2d", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35}, v3.EndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35}, v3.EndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1}}}} сеп 26 00:48:41 luboitvbox kubelet[13875]: 2018-09-26 00:48:41.695 [INFO][15201] k8s.go 302: Wrote updated endpoint to datastore ContainerID="5fd4ffcfa89a287feb80d1f40f781442e6a1cc5078b016cde9ecf562759af146" Namespace="kube-system" Pod="coredns-576cbf47c7-b7cw8" WorkloadEndpoint="luboitvbox-k8s-coredns--576cbf47c7--b7cw8-eth0" ```

@@ -553,7 +550,7 @@ func (kl *Kubelet) makeEnvironmentVariables(pod *v1.Pod, container *v1.Container
// To avoid this users can: (1) wait between starting a service and starting; or (2) detect
// missing service env var and exit and be restarted; or (3) use DNS instead of env vars
// and keep trying to resolve the DNS name of the service (recommended).
serviceEnv, err := kl.getServiceEnvVarMap(pod.Namespace)
serviceEnv, err := kl.getServiceEnvVarMap(pod.Namespace, *pod.Spec.EnableServiceLinks)
Copy link
Member

@neolit123 neolit123 Sep 25, 2018

@bradhoekstra @thockin
The panic I'm seeing is caused by this line.

Is the field guaranteed to be non-nil at this point?

Copy link
Member

@neolit123 neolit123 Sep 25, 2018

This POC patch fixes the kubelet panic:

diff --git a/pkg/kubelet/kubelet_pods.go b/pkg/kubelet/kubelet_pods.go
index 1854554824..0add92af22 100644
--- a/pkg/kubelet/kubelet_pods.go
+++ b/pkg/kubelet/kubelet_pods.go
@@ -550,7 +550,11 @@ func (kl *Kubelet) makeEnvironmentVariables(pod *v1.Pod, container *v1.Container
        // To avoid this users can: (1) wait between starting a service and starting; or (2) detect
        // missing service env var and exit and be restarted; or (3) use DNS instead of env vars
        // and keep trying to resolve the DNS name of the service (recommended).
-       serviceEnv, err := kl.getServiceEnvVarMap(pod.Namespace, *pod.Spec.EnableServiceLinks)
+       enableServiceLinks := v1.DefaultEnableServiceLinks
+       if pod.Spec.EnableServiceLinks != nil {
+               enableServiceLinks = *pod.Spec.EnableServiceLinks
+       }
+       serviceEnv, err := kl.getServiceEnvVarMap(pod.Namespace, enableServiceLinks)
        if err != nil {
                return result, err
        }

Copy link
Contributor Author

I was under the impression this change to defaults.go would guarantee that.

If not, I can copy the defaulting logic to this file.

Copy link
Contributor Author

Thanks for debugging this @neolit123

Copy link
Member

Thanks for debugging this @neolit123

@bradhoekstra
No problem!
Let me know if you want me to submit the PR, as I have it ready here.

I was under the impression this change to defaults.go would guarantee that.

I think it only does that when generating a default pod spec.

here is a similar case for Pod.Spec.SecurityContext:
https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/kuberuntime/kuberuntime_sandbox.go#L159

Copy link
Contributor Author

If you have the PR ready, go for it!

@neolit123
Copy link
Member

neolit123 commented Sep 25, 2018

EDIT: resolved as explained here:
#69061 (comment)

@ncdc
Copy link
Member

ncdc commented Oct 2, 2018

@bradhoekstra reading through this PR and what's currently in master, the service environment variables are still enabled by default, aren't they? Also, the annotation in this PR's description doesn't appear to be in the code base anywhere - I'm guessing it was replaced by the optional boolean field on the pod spec?

@bradhoekstra
Copy link
Contributor Author

@ncdc Yes, the service environment variables are still enabled by default. I've updated the PR description to reflect what was actually implemented, which is the optional boolean field in the PodSpec, not an annotation. Here is a link to the field:

EnableServiceLinks *bool `json:"enableServiceLinks,omitempty" protobuf:"varint,30,opt,name=enableServiceLinks"`
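For illustration, a pod can opt out by setting this field in its spec. A minimal sketch (pod name, container name, and image are placeholders):

```yaml
# Hypothetical pod that opts out of service links; only enableServiceLinks matters here.
apiVersion: v1
kind: Pod
metadata:
  name: no-service-links
spec:
  enableServiceLinks: false
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "env && sleep 3600"]
```

Running env inside such a container should show no NAME_SERVICE_HOST-style entries for other services in its namespace.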

@ncdc
Copy link
Member

ncdc commented Oct 2, 2018

Thanks, just wanted to make sure what I read in the code was correct.

@krmayankk
Copy link

@bradhoekstra what milestone is this going into, or is it already available in 1.12? It would be super helpful to tag these with milestones. Also, is there a doc update as well?

@bradhoekstra
Copy link
Contributor Author

/milestone v1.13

@k8s-ci-robot
Copy link
Contributor

@bradhoekstra: You must be a member of the kubernetes/kubernetes-milestone-maintainers github team to set the milestone.

In response to this:

/milestone v1.13

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@bradhoekstra
Copy link
Contributor Author

@krmayankk I can't add milestones, but this will become available in 1.13. You can see it in the alpha1 release notes here: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md/#v1130-alpha1

@arianitu
Copy link

arianitu commented Feb 7, 2021

I'm not sure what we are doing wrong: we have set enableServiceLinks: false, and I can see it when I run kubectl get deployment -o yaml.

But when I kubectl exec into a pod and run something like php -i, I still see Kubernetes environment variables. How can I remove these? None of these pods are running in the default or kubernetes namespaces.

I feel like this used to work, but now that we are on Kubernetes 1.17.x it's broken again. Was there something reverted/changed here?
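A couple of things are worth checking in a case like this. enableServiceLinks is a field of the pod spec, so in a Deployment it has to sit under spec.template.spec, and only pods created after the change pick it up. Also, as far as I can tell the kubelet still injects the variables for the kubernetes master service (KUBERNETES_SERVICE_HOST, KUBERNETES_PORT, and friends) regardless of this setting; the field only suppresses the variables generated for other services in the pod's own namespace. A minimal sketch of the placement (all names are placeholders):

```yaml
# Hypothetical Deployment fragment; the point is where enableServiceLinks goes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      enableServiceLinks: false   # must be on the pod template's spec, not the Deployment's top-level spec
      containers:
      - name: app
        image: example.com/app:latest
```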

Labels
approved Indicates a PR has been approved by an approver from all required OWNERS files. area/kubectl area/kubelet cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. kind/api-change Categorizes issue or PR as related to adding, removing, or otherwise changing an API lgtm "Looks good to me", indicates that a PR is ready to be merged. release-note Denotes a PR that will be considered when it comes time to generate release notes. sig/apps Categorizes an issue or PR as relevant to SIG Apps. sig/architecture Categorizes an issue or PR as relevant to SIG Architecture. sig/cli Categorizes an issue or PR as relevant to SIG CLI. sig/node Categorizes an issue or PR as relevant to SIG Node. size/XL Denotes a PR that changes 500-999 lines, ignoring generated files.
Development

Successfully merging this pull request may close these issues.

Service-environment variables should be optional
10 participants