
panic: runtime error: invalid memory address or nil pointer dereference #208

Open
yeruisen opened this issue Nov 10, 2016 · 6 comments

yeruisen commented Nov 10, 2016

docker info

[root@manager ~]# docker info
Containers: 14
 Running: 11
 Paused: 0
 Stopped: 3
Images: 43
Server Version: swarm/1.2.5
Role: primary
Strategy: spread
Filters: health, port, containerslots, dependency, affinity, constraint
Nodes: 2
 node-1: 192.168.22.36:2375
  └ ID: CFDK:NXXT:XMIU:OTBX:2JDZ:L5WW:IOHF:KFNP:FA57:N25K:VZWE:O2JW
  └ Status: Healthy
  └ Containers: 8 (8 Running, 0 Paused, 0 Stopped)
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 7.894 GiB
  └ Labels: kernelversion=3.10.0-327.36.2.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), storagedriver=devicemapper
  └ UpdatedAt: 2016-11-11T08:20:09Z
  └ ServerVersion: 1.12.2
 node-2: 192.168.1.23:2375
  └ ID: MVS5:YUPK:UWOK:JFWK:GG4X:JSJF:AAUN:574K:AZKA:6CVB:EGFQ:BCAA
  └ Status: Healthy
  └ Containers: 6 (3 Running, 0 Paused, 3 Stopped)
  └ Reserved CPUs: 0 / 33
  └ Reserved Memory: 0 B / 65.85 GiB
  └ Labels: kernelversion=3.10.0-327.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), storagedriver=devicemapper
  └ UpdatedAt: 2016-11-11T08:19:47Z
  └ ServerVersion: 1.12.2
Plugins:
 Volume:
 Network:
Swarm:
 NodeID:
 Is Manager: false
 Node Address:
Security Options:
Kernel Version: 3.10.0-327.36.2.el7.x86_64
Operating System: linux
Architecture: amd64
CPUs: 37
Total Memory: 73.74 GiB
Name: 66b63f955834
Docker Root Dir:
Debug Mode (client): false
Debug Mode (server): false
WARNING: No kernel memory limit support

docker-compose.yml

    interlock:
        image: ehazlett/interlock:master
        command: run -c /etc/interlock/config.toml
        ports:
            - 8080
        volumes:
            - ./config.toml:/etc/interlock/config.toml

    nginx:
        image: nginx:latest
        entrypoint: nginx
        command: -g "daemon off;" -c /etc/nginx/nginx.conf
        ports:
            - 80:80
        labels:
            - "interlock.ext.name=nginx"
        links:
            - interlock:interlock

config.toml

ListenAddr = ":8080"
DockerURL = "tcp://192.168.22.20:4000"

[[Extensions]]
Name = "nginx"
ConfigPath = "/etc/nginx/nginx.conf"
PidPath = "/var/run/nginx.pid"
TemplatePath = ""
MaxConn = 1024
Port = 80

In my Swarm cluster, when I run docker-compose up to start my app, interlock throws an exception:

Creating interlock_interlock_1
Creating interlock_app_1
Creating interlock_nginx_1
Attaching to interlock_interlock_1, interlock_app_1, interlock_nginx_1
interlock_1  | time="2016-11-10T08:01:55Z" level=info msg="interlock 1.1.0 (8a68c99)"
interlock_1  | time="2016-11-10T08:01:55Z" level=info msg="interlock node: id=9ea0a3eba92b445b3165928c5664a57e676f3e2f25c5ae11b056bc12d7015723" ext=lb
app_1        | 2016/11/10 08:01:56 instance: 9206e0dfb108
app_1        | 2016/11/10 08:01:56 close connections: false
app_1        | 2016/11/10 08:01:56 listening on :8080
interlock_1  | time="2016-11-10T08:01:57Z" level=info msg="test.local: upstream=192.168.1.23:32978" ext=nginx
interlock_1  | time="2016-11-10T08:01:57Z" level=info msg="account.aaa.com: upstream=192.168.1.23:32958" ext=nginx
interlock_1  | time="2016-11-10T08:01:57Z" level=info msg="credit.aaa.com: upstream=192.168.22.36:32948" ext=nginx
interlock_1  | time="2016-11-10T08:01:57Z" level=info msg="legends.aaa.com: upstream=192.168.22.36:32946" ext=nginx
interlock_1  | time="2016-11-10T08:01:57Z" level=info msg="mdp.aaa.com: upstream=192.168.22.36:32940" ext=nginx
interlock_1  | time="2016-11-10T08:01:57Z" level=info msg="sso.aaa.com: upstream=192.168.22.36:32937" ext=nginx
interlock_1  | time="2016-11-10T08:01:57Z" level=info msg="restarted proxy container: id=d6e0dc15c55b name=/node-2/interlock_nginx_1" ext=nginx
interlock_1  | time="2016-11-10T08:01:57Z" level=info msg="reload duration: 189.84ms" ext=lb
interlock_1  | time="2016-11-10T08:02:06Z" level=info msg="test.local: upstream=192.168.1.23:32978" ext=nginx
interlock_1  | time="2016-11-10T08:02:06Z" level=info msg="account.tongbanjie.com: upstream=192.168.1.23:32958" ext=nginx
interlock_1  | time="2016-11-10T08:02:06Z" level=info msg="credit.tongbanjie.com: upstream=192.168.22.36:32948" ext=nginx
interlock_1  | time="2016-11-10T08:02:06Z" level=info msg="legends.tongbanjie.com: upstream=192.168.22.36:32946" ext=nginx
interlock_1  | time="2016-11-10T08:02:06Z" level=info msg="mdp.tongbanjie.com: upstream=192.168.22.36:32940" ext=nginx
interlock_1  | time="2016-11-10T08:02:06Z" level=info msg="sso.tongbanjie.com: upstream=192.168.22.36:32937" ext=nginx
interlock_1  | time="2016-11-10T08:02:06Z" level=info msg="restarted proxy container: id=d6e0dc15c55b name=/node-2/interlock_nginx_1" ext=nginx
interlock_1  | time="2016-11-10T08:02:06Z" level=info msg="reload duration: 215.02ms" ext=lb
interlock_1  | time="2016-11-10T08:02:07Z" level=info msg="test.local: upstream=192.168.1.23:32978" ext=nginx
interlock_1  | time="2016-11-10T08:02:07Z" level=info msg="account.tongbanjie.com: upstream=192.168.1.23:32958" ext=nginx
interlock_1  | time="2016-11-10T08:02:07Z" level=info msg="diamond.tongbanjie.com: upstream=192.168.22.36:32966" ext=nginx
interlock_1  | time="2016-11-10T08:02:07Z" level=info msg="credit.tongbanjie.com: upstream=192.168.22.36:32948" ext=nginx
interlock_1  | time="2016-11-10T08:02:07Z" level=info msg="legends.tongbanjie.com: upstream=192.168.22.36:32946" ext=nginx
interlock_1  | time="2016-11-10T08:02:07Z" level=info msg="mdp.tongbanjie.com: upstream=192.168.22.36:32940" ext=nginx
interlock_1  | time="2016-11-10T08:02:07Z" level=info msg="sso.tongbanjie.com: upstream=192.168.22.36:32937" ext=nginx
interlock_1  | time="2016-11-10T08:02:07Z" level=info msg="restarted proxy container: id=d6e0dc15c55b name=/node-2/interlock_nginx_1" ext=nginx
interlock_1  | time="2016-11-10T08:02:07Z" level=info msg="reload duration: 201.71ms" ext=lb
interlock_1  | panic: runtime error: invalid memory address or nil pointer dereference
interlock_1  | [signal 0xb code=0x1 addr=0xe8 pc=0x6b7cc7]
interlock_1  |
interlock_1  | goroutine 46 [running]:
interlock_1  | panic(0xba2240, 0xc820012060)
interlock_1  | 	/usr/local/go/src/runtime/panic.go:464 +0x3e6
interlock_1  | github.com/ehazlett/interlock/ext/lb.(*LoadBalancer).isExposedContainer(0xc820165ec0, 0xc82000a300, 0x40, 0x9)
interlock_1  | 	/go/src/github.com/ehazlett/interlock/ext/lb/lb.go:466 +0x477
interlock_1  | github.com/ehazlett/interlock/ext/lb.(*LoadBalancer).HandleEvent(0xc820165ec0, 0xc820210090, 0x0, 0x0)
interlock_1  | 	/go/src/github.com/ehazlett/interlock/ext/lb/lb.go:410 +0x291
interlock_1  | github.com/ehazlett/interlock/server.NewServer.func4(0xc8201b9b90)
interlock_1  | 	/go/src/github.com/ehazlett/interlock/server/server.go:125 +0x4c3
interlock_1  | created by github.com/ehazlett/interlock/server.NewServer
interlock_1  | 	/go/src/github.com/ehazlett/interlock/server/server.go:134 +0x2a2
interlock_interlock_1 exited with code 2
@nixelsolutions

+1, we're affected by this issue too:

{"log":"panic: runtime error: invalid memory address or nil pointer dereference\n","stream":"stderr","time":"2016-12-02T11:28:04.594206106Z"}
{"log":"[signal 0xb code=0x1 addr=0x0 pc=0x95927e]\n","stream":"stderr","time":"2016-12-02T11:28:04.594234748Z"}
{"log":"\n","stream":"stderr","time":"2016-12-02T11:28:04.594241712Z"}
{"log":"goroutine 34 [running]:\n","stream":"stderr","time":"2016-12-02T11:28:04.594246934Z"}
{"log":"panic(0xba6720, 0xc820012060)\n","stream":"stderr","time":"2016-12-02T11:28:04.594251848Z"}
{"log":"\u0009/usr/local/go/src/runtime/panic.go:464 +0x3e6\n","stream":"stderr","time":"2016-12-02T11:28:04.594256711Z"}
{"log":"github.com/ehazlett/interlock/ext/lb/nginx.(*NginxLoadBalancer).GenerateProxyConfig(0xc82012ff80, 0xc8208ee000, 0x34b, 0x42a, 0x0, 0x0, 0x0, 0x0)\n","stream":"stderr","time":"2016-12-02T11:28:04.594262345Z"}
{"log":"\u0009/go/src/github.com/ehazlett/interlock/ext/lb/nginx/generate.go:35 +0x3f2e\n","stream":"stderr","time":"2016-12-02T11:28:04.594267543Z"}
{"log":"github.com/ehazlett/interlock/ext/lb.NewLoadBalancer.func4(0xc820133680, 0xc8201374a0, 0xc82004fa51, 0x40)\n","stream":"stderr","time":"2016-12-02T11:28:04.5942728Z"}
{"log":"\u0009/go/src/github.com/ehazlett/interlock/ext/lb/lb.go:207 +0x6b3\n","stream":"stderr","time":"2016-12-02T11:28:04.59427816Z"}
{"log":"created by github.com/ehazlett/interlock/ext/lb.NewLoadBalancer\n","stream":"stderr","time":"2016-12-02T11:28:04.594283246Z"}
{"log":"\u0009/go/src/github.com/ehazlett/interlock/ext/lb/lb.go:311 +0xbd5\n","stream":"stderr","time":"2016-12-02T11:28:04.594288131Z"}


yeruisen commented Dec 5, 2016

@nixelsolutions Did you solve this problem?


ehazlett commented Dec 5, 2016 via email

@nixelsolutions

@yeruisen Nope, still suffering from it.

@ehazlett this is the docker info output on the swarm cluster:

# docker info
Containers: 1012
 Running: 968
 Paused: 0
 Stopped: 44
Images: 276
Server Version: swarm/1.2.3
Role: primary
Strategy: spread
Filters: health, port, containerslots, dependency, affinity, constraint
Nodes: 13
 frontend-i-6ecbf986: 172.31.22.108:2375
  └ ID: THFI:R3F2:ZCRL:23WO:LNYP:QYKY:7VVS:2TOY:3YHC:VLXV:CQHV:CK4Z
  └ Status: Healthy
  └ Containers: 90
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 31.46 GiB
  └ Labels: executiondriver=, frontend.cloudprovider=amazon, frontend.datacenter=a, frontend.location=west1, frontend.region=eu, kernelversion=3.16.0-77-generic, operatingsystem=Ubuntu 14.04.2 LTS, storagedriver=aufs, target.service=frontend
  └ UpdatedAt: 2016-12-05T10:34:51Z
  └ ServerVersion: 1.12.3
 frontend-i-27c8facf: 172.31.24.88:2375
  └ ID: WQJM:OQNX:5JI6:MLIT:VHHM:D45Q:BKM4:4SXD:HE6W:WXKP:GK47:YMGO
  └ Status: Healthy
  └ Containers: 91
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 31.46 GiB
  └ Labels: executiondriver=, frontend.cloudprovider=amazon, frontend.datacenter=a, frontend.location=west1, frontend.region=eu, kernelversion=3.16.0-77-generic, operatingsystem=Ubuntu 14.04.2 LTS, storagedriver=aufs, target.service=frontend
  └ UpdatedAt: 2016-12-05T10:34:58Z
  └ ServerVersion: 1.12.3
 frontend-i-029e44c2: 172.31.32.82:2375
  └ ID: L6YN:47QC:2ZAB:3ZCB:UNVC:M357:GGWK:4OMC:5OGI:IZPF:R4M4:3PXA
  └ Status: Healthy
  └ Containers: 90
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 31.46 GiB
  └ Labels: executiondriver=, frontend.cloudprovider=amazon, frontend.datacenter=b, frontend.location=west1, frontend.region=eu, kernelversion=3.16.0-77-generic, operatingsystem=Ubuntu 14.04.2 LTS, storagedriver=aufs, target.service=frontend
  └ UpdatedAt: 2016-12-05T10:34:36Z
  └ ServerVersion: 1.12.3
 frontend-i-084fbd9e: 172.31.0.107:2375
  └ ID: QBIV:AQVI:NGPZ:OOXS:6EGC:KOJX:WPJG:S4DZ:MWYJ:LPVK:I3JV:RE4I
  └ Status: Healthy
  └ Containers: 103
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 31.46 GiB
  └ Labels: executiondriver=, frontend.cloudprovider=amazon, frontend.datacenter=c, frontend.location=west1, frontend.region=eu, kernelversion=3.16.0-77-generic, operatingsystem=Ubuntu 14.04.2 LTS, storagedriver=aufs, target.service=frontend
  └ UpdatedAt: 2016-12-05T10:34:49Z
  └ ServerVersion: 1.12.3
 frontend-i-414bb9d7: 172.31.0.157:2375
  └ ID: 2HDU:YZIA:WWOP:6RGI:B6NK:3LID:PVZ7:VT4D:4FE5:HCEK:FOGJ:ZXRN
  └ Status: Healthy
  └ Containers: 108
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 31.46 GiB
  └ Labels: executiondriver=, frontend.cloudprovider=amazon, frontend.datacenter=c, frontend.location=west1, frontend.region=eu, kernelversion=3.16.0-77-generic, operatingsystem=Ubuntu 14.04.2 LTS, storagedriver=aufs, target.service=frontend
  └ UpdatedAt: 2016-12-05T10:35:03Z
  └ ServerVersion: 1.12.3
 frontend-i-594f9199: 172.31.47.22:2375
  └ ID: QEWD:J6F7:KNNU:RDQY:ZLX7:2CWN:J3LV:3UY3:LSWM:D2V3:NZIU:XYD7
  └ Status: Healthy
  └ Containers: 130
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 31.46 GiB
  └ Labels: executiondriver=, frontend.cloudprovider=amazon, frontend.datacenter=b, frontend.location=west1, frontend.region=eu, kernelversion=3.16.0-77-generic, operatingsystem=Ubuntu 14.04.2 LTS, storagedriver=aufs, target.service=frontend
  └ UpdatedAt: 2016-12-05T10:34:40Z
  └ ServerVersion: 1.12.3
 frontend-i-5047b5c6: 172.31.14.139:2375
  └ ID: T3GY:CEGX:LMBZ:4QFR:A4F4:GSCK:2AOT:EX2F:YTKN:2JIU:2SAP:OU26
  └ Status: Healthy
  └ Containers: 116
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 31.46 GiB
  └ Labels: executiondriver=, frontend.cloudprovider=amazon, frontend.datacenter=c, frontend.location=west1, frontend.region=eu, kernelversion=3.16.0-77-generic, operatingsystem=Ubuntu 14.04.2 LTS, storagedriver=aufs, target.service=frontend
  └ UpdatedAt: 2016-12-05T10:35:16Z
  └ ServerVersion: 1.12.3
 frontend-i-40449f80: 172.31.38.107:2375
  └ ID: N3HP:AU3J:3Q53:XEIC:M44O:QTIF:CPL6:3SSA:MDJ2:NDWG:FMZC:IBF4
  └ Status: Healthy
  └ Containers: 124
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 31.46 GiB
  └ Labels: executiondriver=, frontend.cloudprovider=amazon, frontend.datacenter=b, frontend.location=west1, frontend.region=eu, kernelversion=3.16.0-77-generic, operatingsystem=Ubuntu 14.04.2 LTS, storagedriver=aufs, target.service=frontend
  └ UpdatedAt: 2016-12-05T10:34:44Z
  └ ServerVersion: 1.12.3
 frontend-i-bdcaf855: 172.31.26.133:2375
  └ ID: HJKJ:FHYV:2ZIU:YZCP:SSBI:TIUC:FJX4:VIBR:ICVD:YXWO:BV52:BE2K
  └ Status: Healthy
  └ Containers: 132
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 31.46 GiB
  └ Labels: executiondriver=, frontend.cloudprovider=amazon, frontend.datacenter=a, frontend.location=west1, frontend.region=eu, kernelversion=3.16.0-77-generic, operatingsystem=Ubuntu 14.04.2 LTS, storagedriver=aufs, target.service=frontend
  └ UpdatedAt: 2016-12-05T10:35:27Z
  └ ServerVersion: 1.12.3
 loadbalancer-i-61003ae9: 172.31.23.161:2375
  └ ID: Z6B5:J4HP:BTRH:7RN5:47MU:2VTS:H7GN:BHKD:WQNG:5GXG:4LWH:5Q65
  └ Status: Healthy
  └ Containers: 9
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 3.86 GiB
  └ Labels: executiondriver=, kernelversion=3.16.0-71-generic, loadbalancer.cloudprovider=amazon, loadbalancer.datacenter=a, loadbalancer.location=west1, loadbalancer.region=eu, operatingsystem=Ubuntu 14.04.2 LTS, storagedriver=aufs, target.service=loadbalancer
  └ UpdatedAt: 2016-12-05T10:35:19Z
  └ ServerVersion: 1.11.2
 loadbalancer-i-ef982a65: 172.31.39.9:2375
  └ ID: YENW:YFP4:UCPG:OOHB:4N2D:MUE2:3YL2:BHIF:WORF:3ZFL:G5F5:63WQ
  └ Status: Healthy
  └ Containers: 9
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 3.86 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=3.16.0-67-generic, loadbalancer.cloudprovider=amazon, loadbalancer.datacenter=b, loadbalancer.location=west1, loadbalancer.region=eu, operatingsystem=Ubuntu 14.04.2 LTS, storagedriver=aufs, target.service=loadbalancer
  └ UpdatedAt: 2016-12-05T10:34:41Z
  └ ServerVersion: 1.10.3
 management-i-53fca0d9: 172.31.42.70:2375
  └ ID: PPD3:XKPN:HL5F:OMLA:44WF:7DZ6:HTKZ:3GWM:TYC3:NYKM:NNLU:KY3Z
  └ Status: Healthy
  └ Containers: 5
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 3.86 GiB
  └ Labels: executiondriver=, kernelversion=3.16.0-71-generic, management.cloudprovider=amazon, management.datacenter=b, management.location=west1, management.region=eu, operatingsystem=Ubuntu 14.04.3 LTS, storagedriver=aufs, target.service=management
  └ UpdatedAt: 2016-12-05T10:35:03Z
  └ ServerVersion: 1.11.2
 management-i-65003aed: 172.31.23.171:2375
  └ ID: OLPU:CD6I:CJHU:PHBX:QRGS:YUE3:HZ4R:JUTV:YREK:4XHP:SRG6:WEKU
  └ Status: Healthy
  └ Containers: 5
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 3.86 GiB
  └ Labels: executiondriver=, kernelversion=3.16.0-71-generic, management.cloudprovider=amazon, management.datacenter=a, management.location=west1, management.region=eu, operatingsystem=Ubuntu 14.04.3 LTS, storagedriver=aufs, target.service=management
  └ UpdatedAt: 2016-12-05T10:34:33Z
  └ ServerVersion: 1.11.2
Plugins:
 Volume:
 Network:
Kernel Version: 3.13.0-79-generic
Operating System: linux
Architecture: amd64
CPUs: 40
Total Memory: 298.6 GiB
Name: 4b3716a14a45
Experimental: true

The swarm nodes are running Docker 1.10.2, though.

@ehazlett

I made a significant update to the docker client dependency, as well as some other fixes. Can you try the latest ehazlett/interlock:dev to see if that fixes it?
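To test that suggestion against the compose file above, the interlock service's image tag could be switched to the dev build — a sketch only, assuming the dev tag is published and the rest of the service definition stays as-is:

```yaml
    interlock:
        image: ehazlett/interlock:dev   # dev build instead of :master
        command: run -c /etc/interlock/config.toml
        ports:
            - 8080
        volumes:
            - ./config.toml:/etc/interlock/config.toml
```

After editing, docker-compose pull interlock && docker-compose up -d interlock would recreate just that service with the new image.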


boydj commented Jul 14, 2017

Saw this happen in 1.3.2, but 1.4 appears to have resolved it.

ehazlett added a commit that referenced this issue Aug 16, 2019
test/integration: Ignore extra docker service create output