Microk8s - Ingress problem (port 80 is already in use) #2986

Closed · mwilberg opened this issue Mar 16, 2022 · 6 comments
@mwilberg

We are running microk8s in an airgapped environment, so getting hold of log files and inspection tarballs is somewhat problematic, but this is the issue we are experiencing.

We had a working 3-node cluster (microk8s v1.20.6 on Ubuntu 20.04) running rook-ceph storage which got corrupted for some reason... but that is beside the point. After some consultation we decided we would just zap the disks associated with rook/ceph and rebuild the cluster.

Did a microk8s leave, then a microk8s reset on all the nodes, and then removed the old microk8s snap using snap remove microk8s.

Checked /var/snap/ to see that the microk8s snap was gone - and it was

So far so good. As I said, the system is airgapped, so we had to import the snap/assert files manually and then installed the latest microk8s (v1.23.4) using snap ack microk8s_3021.assert and snap install microk8s_3021.snap --classic
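For reference, the .snap and .assert files for an airgapped transfer like this are typically fetched on an internet-connected machine with snap download first (a sketch; the channel shown here is only illustrative):

# On a machine with internet access: grab the snap plus its assertion file
snap download microk8s --channel=1.23/stable
# This produces e.g. microk8s_3021.snap and microk8s_3021.assert, which can then be
# copied to the airgapped nodes and installed with snap ack / snap install --classic as above.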

The required base images have also been imported

microk8s ctr image ls |awk '{print $1}'|grep -v sha256
REF
docker.io/calico/cni:v3.19.1
docker.io/calico/kube-controllers:v3.17.3
docker.io/calico/node:v3.19.1
docker.io/calico/pod2daemon-flexvol:v3.19.1
docker.io/coredns/coredns:1.8.0
k8s.gcr.io/ingress-nginx/controller:v1.1.0
k8s.gcr.io/pause:3.1

Rebuilding the cluster with the 3 nodes seems to work just fine, and enabling rbac/dns with microk8s enable rbac dns also works without a hitch. The problems start when I try to add the ingress using microk8s enable ingress.

The pods get fired up but end up in CrashLoopBackOff:

microk8s kubectl -n ingress logs nginx-ingress-microk8s-controller-t8n8p
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       v1.1.0
  Build:         cacbee86b6ccc45bde8ffc184521bed3022e7dee
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.19.9

-------------------------------------------------------------------------------

F0316 11:09:27.290937       8 main.go:67] port 80 is already in use. Please check the flag --http-port
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0x1)
        k8s.io/klog/[email protected]/klog.go:1026 +0x8a
k8s.io/klog/v2.(*loggingT).output(0x28a5ba0, 0x3, {0x0, 0x0}, 0xc0000ad1f0, 0x1, {0x1f83e62, 0x28a66e0}, 0xc0000ae940, 0x0)
        k8s.io/klog/[email protected]/klog.go:975 +0x63d
k8s.io/klog/v2.(*loggingT).printDepth(0x1, 0x1, {0x0, 0x0}, {0x0, 0x0}, 0x444ed1, {0xc0000ae940, 0x1, 0x1})
        k8s.io/klog/[email protected]/klog.go:735 +0x1ba
k8s.io/klog/v2.(*loggingT).print(...)
        k8s.io/klog/[email protected]/klog.go:717
k8s.io/klog/v2.Fatal(...)
        k8s.io/klog/[email protected]/klog.go:1494
main.main()
        k8s.io/ingress-nginx/cmd/nginx/main.go:67 +0x1d3

goroutine 34 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x0)
        k8s.io/klog/[email protected]/klog.go:1169 +0x6a
created by k8s.io/klog/v2.init.0
        k8s.io/klog/[email protected]/klog.go:420 +0xfb

There is no "server-side" usage of port 80 that I can see, and no services created in microk8s that should be a problem either.

netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.1:42979         0.0.0.0:*               LISTEN      15883/containerd    
tcp        0      0 127.0.0.1:40869         0.0.0.0:*               LISTEN      1085/containerd     
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      18774/kubelite      
tcp        0      0 0.0.0.0:25000           0.0.0.0:*               LISTEN      14796/python3       
tcp        0      0 127.0.0.1:5000          0.0.0.0:*               LISTEN      1159/python         
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      18774/kubelite      
tcp        0      0 127.0.0.1:9099          0.0.0.0:*               LISTEN      17155/calico-node   
tcp        0      0 127.0.0.1:10256         0.0.0.0:*               LISTEN      18774/kubelite      
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      1030/systemd-resolv 
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1270/sshd: /usr/sbi 
tcp        0      0 XXX.XXX.XXX.XXX:19001   0.0.0.0:*               LISTEN      14958/k8s-dqlite    
tcp        0      0 127.0.0.1:1338          0.0.0.0:*               LISTEN      15883/containerd    
tcp6       0      0 :::10250                :::*                    LISTEN      18774/kubelite      
tcp6       0      0 :::10255                :::*                    LISTEN      18774/kubelite      
tcp6       0      0 :::10257                :::*                    LISTEN      18774/kubelite      
tcp6       0      0 :::10259                :::*                    LISTEN      18774/kubelite      
tcp6       0      0 :::22                   :::*                    LISTEN      1270/sshd: /usr/sbi 
tcp6       0      0 :::16443                :::*                    LISTEN      18774/kubelite      
udp        0      0 127.0.0.53:53           0.0.0.0:*                           1030/systemd-resolv 
udp        0      0 0.0.0.0:4789            0.0.0.0:*                           -  
microk8s kubectl -n ingress describe pod nginx-ingress-microk8s-controller-mnphj
Name:         nginx-ingress-microk8s-controller-mnphj
Namespace:    ingress
Priority:     0
Node:         my-servername-n02/192.168.100.11
Start Time:   Wed, 16 Mar 2022 12:09:04 +0100
Labels:       controller-revision-hash=85d7cb8664
              name=nginx-ingress-microk8s
              pod-template-generation=1
Annotations:  cni.projectcalico.org/podIP: 10.1.26.1/32
              cni.projectcalico.org/podIPs: 10.1.26.1/32
Status:       Running
IP:           10.1.26.1
IPs:
  IP:           10.1.26.1
Controlled By:  DaemonSet/nginx-ingress-microk8s-controller
Containers:
  nginx-ingress-microk8s:
    Container ID:  containerd://cb8b9df98cb95dfe50703be7b54134cc2ea5693726292e5bc5187ca1085df9c2
    Image:         k8s.gcr.io/ingress-nginx/controller:v1.1.0
    Image ID:      sha256:ae1a7201ec9545194b2889da30face5f2a7a45e2ba8c7479ac68c9a45a73a7eb
    Ports:         80/TCP, 443/TCP, 10254/TCP
    Host Ports:    80/TCP, 443/TCP, 10254/TCP
    Args:
      /nginx-ingress-controller
      --configmap=$(POD_NAMESPACE)/nginx-load-balancer-microk8s-conf
      --tcp-services-configmap=$(POD_NAMESPACE)/nginx-ingress-tcp-microk8s-conf
      --udp-services-configmap=$(POD_NAMESPACE)/nginx-ingress-udp-microk8s-conf
      --ingress-class=public
       
      --publish-status-address=127.0.0.1
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Wed, 16 Mar 2022 12:35:25 +0100
      Finished:     Wed, 16 Mar 2022 12:35:25 +0100
    Ready:          False
    Restart Count:  10
    Liveness:       http-get http://:10254/healthz delay=10s timeout=5s period=10s #success=1 #failure=3
    Readiness:      http-get http://:10254/healthz delay=0s timeout=5s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       nginx-ingress-microk8s-controller-mnphj (v1:metadata.name)
      POD_NAMESPACE:  ingress (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t6z6w (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-t6z6w:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason   Age                   From     Message
  ----     ------   ----                  ----     -------
  Warning  BackOff  4m2s (x132 over 29m)  kubelet  Back-off restarting failed container

Would appreciate it greatly if someone has some insight or ways to figure out what is going on here. I have tried curl 0.0.0.0:80 -vvv but that fails, so I really don't understand why it insists that port 80 is in use...
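A couple of extra checks that might help narrow this down (just a sketch, none of this is confirmed to be the cause): the describe output above shows the controller uses hostPort 80/443 rather than hostNetwork, so it may be worth looking for leftover hostport/CNI state from the previous install rather than an actual listener:

# Double-check nothing is listening on 80/443 (alternative to netstat)
sudo ss -tlnp | grep -E ':80 |:443 '
# Look for stale hostport/portmap or kube-proxy rules that mention port 80
sudo iptables-save | grep -iE 'hostport|cni|dpt:80'
# Leftover CNI result cache from the old install could also be worth a glance (path may vary)
ls /var/lib/cni/ 2>/dev/null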

@mwilberg
Author

Will try getting the inspection-report tarball uploaded later - if I'm allowed to upload it. I unpacked it and changed the hostnames/IP addresses of all the servers... so it should be "washed" enough.

@neoaggelos
Contributor

Hi @mwilberg, thank you for reporting the issue.

In the past, we have observed that the ingress addon runs into issues when reinstalling MicroK8s.

A known workaround is rebooting the server. Is doing a (rolling) reboot of your servers possible? It should most likely resolve your issue.

@mwilberg
Author

mwilberg commented Mar 17, 2022

> In the past, we have observed that the ingress addon runs into issues when reinstalling MicroK8s.
>
> A known workaround is rebooting the server. Is doing a (rolling) reboot of your servers possible? It should most likely resolve your issue.

Thanks for the reply @neoaggelos,

The order in which I did the "upgrade" made it more like a reinstall than an upgrade, really. All servers have been restarted multiple times, yet something seems to be preventing the addition of ingress.

## First removed all nodes from the current cluster
> microk8s leave 
## Reset the current microk8s settings on every node
> microk8s reset 
## Restarted the nodes
> shutdown -r now
## Removed microk8s on the nodes
> snap remove microk8s 
## Restarted the nodes again for good measure
> shutdown -r now 
## Reinstalled microk8s (newer version)
> snap ack microk8s_3021.assert
> snap install microk8s_3021.snap
## Imported the images I needed to get the base operation up and running
> microk8s ctr image import microk8s_3021_base_images.tar
> microk8s start
> microk8s enable rbac dns
> microk8s enable ingress  

This was basically the full procedure for cleaning up the cluster, except that I also removed everything from /var/lib/rook/ and did a sgdisk --zap-all /dev/sdc to prepare my disk for a new rook-ceph install... but I haven't really gotten back to that yet :)
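For completeness, a few checks that could confirm the old install really left nothing behind after the snap remove / reboot cycle (just a sketch, this is not taken from the report):

# Old snap data directories should be gone
ls -d /var/snap/microk8s /var/lib/cni 2>/dev/null
# No leftover Kubernetes/Calico network interfaces should survive the reboots
ip link show | grep -E 'cali|vxlan|cni'
# Any surviving kube-proxy / CNI iptables rules should belong to the new install only
sudo iptables-save | grep -E 'KUBE|CNI' | head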

Included a "modified" version of the inspection-report - so you will get a better idea what is working and what isn't.

inspection-report-20220316_140025.tar.gz

@balchua
Collaborator

balchua commented Mar 21, 2022

@mwilberg I couldn't find anything fishy.
If you run a custom app using port 80, does it get the same error?
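For example, a minimal way to test that would be a pod that requests hostPort 80, the same way the ingress controller does (a sketch; the pod name is made up and the image is just a placeholder, any image already imported on the airgapped nodes that serves on port 80 would do):

microk8s kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostport-test            # hypothetical name, only for this experiment
spec:
  containers:
  - name: web
    image: nginx:1.21            # placeholder; substitute an image available on the airgapped nodes
    ports:
    - containerPort: 80
      hostPort: 80               # same host port the ingress controller is failing on
EOF

# Then check whether it starts cleanly or hits a similar port-80 conflict
microk8s kubectl get pod hostport-test -o wide
microk8s kubectl describe pod hostport-test
microk8s kubectl delete pod hostport-test    # clean up afterwards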

@mwilberg
Author

@balchua Will give it a try later today - I haven't really tried much in terms of anything custom yet, as I wanted to get the basic functionality up and running first.

@stale

stale bot commented Feb 15, 2023

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
