Feature request: Integration with the new Docker 1.12 services #178

Closed
tpbowden opened this issue Jun 23, 2016 · 11 comments · May be fixed by #186

Comments

@tpbowden commented Jun 23, 2016

With Docker 1.12, the way Swarm works will change quite a lot (and will look a lot more like Kubernetes): services can be exposed on ports on all Swarm nodes and load balanced internally by Docker using a virtual IP or DNS round robin. As such, Interlock would no longer have to load balance across containers, but across physical swarm nodes.

For example, if you had 3 swarm nodes, A, B and C, and a service running on nodes A and C with assigned node port 30000, the service would be accessible on port 30000 via any of the 3 swarm nodes, regardless of whether the service is running on that machine, and automatically load balanced between the 2 running containers.

Interlock could watch the event stream and all labelled services, then create HAProxy/Nginx config to load balance across nodes A, B and C on port 30000 using the hostname in the label.
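For illustration, a minimal sketch of the routing mesh behaviour described above (the service name and image are placeholders; port 30000 matches the example):

    # Hypothetical service published on the swarm routing mesh (Docker 1.12 syntax).
    # "web" and nginx:alpine are placeholders; 30000 is the node port from the example.
    docker service create \
      --name web \
      --replicas 2 \
      --publish 30000:80 \
      nginx:alpine

    # Port 30000 now answers on every swarm node (A, B and C), and Docker
    # load balances internally between the two running tasks.
    curl http://<any-node-address>:30000/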

Not sure if there are more things to take into consideration here, or if this is the job for another tool.

@ehazlett (Owner) commented Jul 8, 2016

Yes, this is being worked on. Not sure on an ETA.

@tpbowden (Author) commented Jul 8, 2016

I had a quick go at this in https://github.com/tpbowden/swarm-ingress-router and managed to get something reasonable working without much work (it uses a native Go reverse proxy instead of nginx/haproxy, however). You can just query for labelled services, and Docker handles all load balancing and routing for you based on the DNS name of the service, which resolves to a VIP. No external ports are needed on anything except the proxy, as long as all frontends are on the same overlay network.
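For illustration, a rough sketch of that setup, assuming placeholder network, service, label and image names (the actual labels and flags used by swarm-ingress-router may differ):

    # Shared overlay network; only the proxy will publish a port.
    docker network create --driver overlay frontends

    # Labelled frontend service with no published ports. Inside the overlay
    # network its DNS name ("blog") resolves to a VIP and Docker load
    # balances across its tasks.
    docker service create \
      --name blog \
      --network frontends \
      --label ingress=true \
      nginx:alpine

    # The reverse proxy is the only service exposed on a node port.
    # Image name and port mapping are placeholders.
    docker service create \
      --name router \
      --network frontends \
      --publish 80:8080 \
      example/ingress-router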

@ehazlett (Owner) commented Jul 9, 2016

@tpbowden cool!

@ehazlett (Owner) commented Jul 9, 2016

There is now a PR (#186) that adds Swarm service support.

@ehazlett (Owner) commented Jul 9, 2016

You can use the ehazlett/interlock:pr-186 image for testing.

@ehazlett (Owner) commented Jul 9, 2016

There are also example docs in that PR (153bb9f).

@busbyjon

If the swarm has 1 manager and 1 node (and the node is NOT a manager), this tutorial works, then fails after about 15 minutes when the nginx config on the worker node is refreshed (with the error that it is not a manager node).

Promoting the node appears to solve this issue, but it is probably something that you'd want to resolve, as it is challenging to troubleshoot and doesn't fail gracefully (it actually works for the first few minutes!).

@ehazlett (Owner)

Yes, there is currently a limitation that Interlock must run on manager nodes. I'm trying to work through a solution to get it working across all nodes.

@Richard-Mathie (Contributor)

Won't specifying --constraint node.role == manager on the interlock service solve this?

From https://docs.docker.com/engine/reference/commandline/service_create/#/set-service-mode
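For reference, a hedged example of that constraint applied to the test image mentioned earlier (the service name is a placeholder and Interlock's own configuration flags are omitted):

    # Placement constraint keeps all Interlock tasks on manager nodes
    # (Docker 1.12 service syntax; Interlock configuration flags omitted).
    docker service create \
      --name interlock \
      --constraint 'node.role == manager' \
      ehazlett/interlock:pr-186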

@riemers commented Jul 19, 2016

@ehazlett is there also an option to add SSL certs via environment entries? Not a path to the files, but cat-ing the files into the environment so you don't need to mount a volume.

@ehazlett (Owner)

@riemers Not currently. However, with swarm secrets, that would probably be the way to go for this.
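For context, a hedged sketch of the swarm secrets approach (secrets landed after this thread, in Docker 1.13+; file, secret, and service names are placeholders):

    # Store the cert and key as swarm secrets instead of mounting a host volume.
    docker secret create example_com_cert ./example.com.crt
    docker secret create example_com_key  ./example.com.key

    # Attach the secrets to the proxy service; they show up as files under
    # /run/secrets/ inside the containers.
    docker service create \
      --name proxy \
      --secret example_com_cert \
      --secret example_com_key \
      nginx:alpine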
