Better support for remote caching #33
Comments
We're going to take on the docker work in m20.
heyho, small update from my side: I recently tried out https://github.com/GoogleContainerTools/kaniko, which has explicit support for intelligent remote caching (without the manual tag management needed with `docker build`): https://github.com/GoogleContainerTools/kaniko#caching-layers. I was able to run it locally just fine, the caching seems to work pretty well with a remote cache (it simply uses a docker registry as the cache source), and it is obviously also made to run smoothly in a k8s cluster. In summary, I think kaniko could replace the whole complicated cache optimizations I've suggested above. Last but not least, a couple of days ago https://github.com/uber/makisu came out, which seems somewhat similar (but more immature); it also has built-in support for intelligent remote caching.
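For reference, running kaniko locally with its remote layer cache looks roughly like this (a sketch based on the linked README; registry and image names are illustrative):

```sh
# Run the kaniko executor in a container: it builds the Dockerfile found in
# /workspace and pushes both the image and intermediate cache layers to a registry.
docker run \
  -v "$(pwd)":/workspace \
  -v "${HOME}/.docker/config.json":/kaniko/.docker/config.json:ro \
  gcr.io/kaniko-project/executor:latest \
  --context dir:///workspace \
  --dockerfile Dockerfile \
  --destination registry.example.com/app:latest \
  --cache=true \
  --cache-repo registry.example.com/app/cache
```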
And one more observation from me: I have now tried out both kaniko and makisu, and must say that makisu seems to be much faster in most cases. It seems like its snapshotting mechanism etc. is simply faster, so I'd prefer to integrate pulumi with makisu.
@hausdorff is adding kaniko support something you'd be keen to see as an open source contribution? If it's something that would be merged I might give it a crack :)
Are there any updates on this? If you run a docker build from GitHub Actions directly, making use of the registry cache is as simple as:
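(a hedged reconstruction of the usual BuildKit pattern; the image name is illustrative)

```sh
# BUILDKIT_INLINE_CACHE=1 embeds cache metadata in the pushed image so that
# later builds can consume it straight from the registry via --cache-from.
DOCKER_BUILDKIT=1 docker build \
  --cache-from ghcr.io/example/app:latest \
  --build-arg BUILDKIT_INLINE_CACHE=1 \
  -t ghcr.io/example/app:latest .
docker push ghcr.io/example/app:latest
```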
But no such behaviour seems to be possible using the pulumi-docker provider. So although local builds make use of the local docker cache and are reasonably fast, CI builds end up being very slow.
@geekflyer I'm closing this tentatively as resolved with the new implementation of the Docker Image resource in v4! See our blog post for more info: https://www.pulumi.com/blog/build-images-50x-faster-docker-v4/ We do know that multi-stage builds can be an issue. Please create a new issue if you find that the new implementation doesn't cover your use case.
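For reference, a minimal sketch of registry-backed caching with the v4 `docker.Image` resource, assuming the TypeScript `cacheFrom.images` build option described in the blog post (registry and image names are illustrative):

```typescript
import * as docker from "@pulumi/docker";

// Hedged sketch against the v4 provider: seed the build cache from the
// previously pushed image so CI rebuilds can reuse unchanged layers.
const image = new docker.Image("app", {
    imageName: "ghcr.io/example/app:latest",
    build: {
        context: "./app",
        cacheFrom: { images: ["ghcr.io/example/app:latest"] },
    },
});
```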
Related slack discussion: https://pulumi-community.slack.com/archives/C84L4E3N1/p1540854501162300
In order to speed up builds, both locally and in CI, it is advisable to use images that have previously been pushed to a remote registry as a cache source.
There are a bunch of blog posts that describe the general technique, e.g. https://medium.com/@gajus/making-docker-in-docker-builds-x2-faster-using-docker-cache-from-option-c01febd8ef84.
Doing so not only drastically speeds up builds in most cases; it also speeds up deployment to target machines (e.g. a kubernetes cluster), because the `--cache-from` technique produces images with more common layers, which are more likely to already be present in a cluster's local docker cache at deployment time.
A very simple strategy is to always tag images with `latest` and use this tag as the `--cache-from` source. Prior to #31 it was possible to use this strategy with pulumi-docker; since #31 it is not possible anymore. So this issue is about getting back the ability to use this strategy, or some alternative good remote-caching strategy.
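For concreteness, a minimal sketch of that `latest` strategy with the plain docker CLI, following the pattern from the blog post linked above (image name illustrative):

```sh
# Fetch the most recent image so its layers are available locally as cache;
# tolerate failure on the very first build, when no image exists yet.
docker pull registry.example.com/app:latest || true
# Reuse matching layers from the pulled image instead of rebuilding them.
docker build --cache-from registry.example.com/app:latest -t registry.example.com/app:latest .
# Push the result so the next build (on any machine) can use it as a cache source.
docker push registry.example.com/app:latest
```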
Here's a suggestion for a more advanced strategy that should speed up builds even more than using the `latest` tag as the cache source:

1. Determine the `git_sha` of `HEAD`.
2. Attempt to pull `<image_name>:<git_sha_HEAD>`. If the pull succeeds, use this image in `--cache-from`.
3. If not successful, attempt to pull `<image_name>:<git_sha_HEAD~1>` and use that as `--cache-from` (basically, attempt to get an image that was built from an ancestor commit). Repeat this process until an image can be pulled successfully (maybe stop after 5 iterations, or at `HEAD~5`).
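A hedged sketch of that lookup loop in shell (image name illustrative; this shows the idea, not an existing pulumi-docker feature):

```sh
IMAGE=registry.example.com/app
CACHE_FROM=""
# Walk back through up to 5 ancestor commits, looking for an image that was
# built from one of them and can serve as a cache source.
for i in 0 1 2 3 4; do
  SHA=$(git rev-parse "HEAD~${i}") || break
  if docker pull "${IMAGE}:${SHA}"; then
    CACHE_FROM="--cache-from ${IMAGE}:${SHA}"
    break
  fi
done
# Build with the best cache source found (possibly none) and tag with the current sha.
docker build ${CACHE_FROM} -t "${IMAGE}:$(git rev-parse HEAD)" .
```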
TODO: In either case pulumi-docker probably needs the ability to add and push an image with multiple tags, in order to support #31 and remote caching at the same time.
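For illustration, tagging and pushing one build under multiple tags with the plain docker CLI looks like this (image name illustrative):

```sh
IMAGE=registry.example.com/app
SHA=$(git rev-parse HEAD)
# One build, two tags: an immutable per-commit tag plus a moving `latest` tag.
docker build -t "${IMAGE}:${SHA}" -t "${IMAGE}:latest" .
docker push "${IMAGE}:${SHA}"
docker push "${IMAGE}:latest"
```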