Pods fail after upgrade to Ubuntu 20.04 #1775
Hi @antons1, MicroK8s has a service (called apiserver-kicker [1]) that keeps an eye on the interfaces on the system and triggers certificate refreshes and service restarts. The apiserver-kicker is used, for example, in the case where you have MicroK8s on your laptop and you switch networks. I see that the apiserver-kicker always detects a change and keeps restarting the apiserver. Here is what you could try. First, temporarily disable this service and see whether the cluster gets into a healthy state:
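A minimal sketch of that step, assuming the kicker runs under the standard snap-managed systemd unit name (`snap.microk8s.daemon-apiserver-kicker`; treat the exact unit name as an assumption):

```sh
# Temporarily stop the apiserver-kicker so it cannot keep restarting the apiserver
# (unit name assumed to be the standard snap-managed one)
sudo systemctl stop snap.microk8s.daemon-apiserver-kicker

# Let the cluster settle, then check whether pods come up
microk8s status --wait-ready
microk8s kubectl get pods --all-namespaces
```

If the pods recover, the service can be started again later with `sudo systemctl start snap.microk8s.daemon-apiserver-kicker`.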
If this is indeed the problem, you can stop the apiserver-kicker by editing
[1] https://github.com/ubuntu/microk8s/blob/master/microk8s-resources/wrappers/apiservice-kicker
Thank you, this solved the issue for me!
`microk8s inspect`: inspection-report-20201126_102305.tar.gz
OS: Ubuntu 20.04.1 LTS
MicroK8s version: 1.19/stable (but I have also tried installing 1.18 with the same results)
I have been running MicroK8s on my Ubuntu server for a few months, and everything has been working flawlessly. A few days ago, I upgraded the server from 18.04 to 20.04, and since then the cluster has been unable to start any pods. I don't really know which logs to check to find out more, but here are the symptoms:
- A bunch of virtual NICs are created and removed while the server is running. Right now I have tried running `microk8s reset` and then only enabled dns afterwards, so now one card is continuously created and removed. When I previously had several addons enabled, there would be several cards. The output of `ip a` gives me my physical interfaces, and then the one being added by Kubernetes, which is now number 33, and the number keeps rising.
- All pods are stuck on status `ContainerCreating` or `Unknown`. Sometimes a pod will show status `Running`, but the ready column will still say `0/1`.
- Sometimes, executing `kubectl get all --all-namespaces` will give me the normal output, but other times I just get a bunch of `The connection to the server 127.0.0.1:16443 was refused - did you specify the right host or port?` errors; after running `kubectl ...` once, the terminal prints the error message 5-15 times.
- `kubectl describe [dns-pod] -n kube-system` gives a loop of these messages.
- If I add more addons, the same behaviour is exhibited by all the created pods, and that was also the case with my own deployments that were still there when I performed the upgrade.
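A rough sketch of how this churn can be watched (the systemd unit names here are assumed from the standard MicroK8s snap daemons, not quoted from the report):

```sh
# Watch interfaces appear and disappear (the Kubernetes-created one keeps getting a higher index)
watch -n 1 ip -o link show

# Follow the apiserver and apiserver-kicker logs to see the restart loop
sudo journalctl -f -u snap.microk8s.daemon-apiserver -u snap.microk8s.daemon-apiserver-kicker
```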
I have tried to remove and reinstall MicroK8s (`sudo snap remove microk8s --purge`, then `sudo snap install microk8s --channel=1.19/stable --classic`), and before that `microk8s reset`.
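Taken together, the recovery attempts mentioned above amount roughly to this sequence (the ordering and the `microk8s enable dns` step are inferred from the report, so treat them as approximate):

```sh
microk8s reset                                               # tried before the reinstall
sudo snap remove microk8s --purge                            # full removal
sudo snap install microk8s --channel=1.19/stable --classic   # fresh 1.19 install
microk8s enable dns                                          # only dns enabled afterwards
```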