always display image hashes #1053
Comments
There is no image hash for an intermediate cache because it is not exported to the Docker image store. Maybe we should have […]. We can also consider having […]
Intermediate build images are indeed cached in the local Docker image store; I use them all the time to find out what happened at a particular step. Every single Dockerfile line creates a permanent cached image, and it can be run using […]. I will think about your other comments when I get back to work. But one thought is to drop to a shell at a particular step, like […]
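For reference, the classic (non-BuildKit) flow being described might look like this; the step shown and the image IDs are placeholders from a hypothetical build, not output from any particular project:

```shell
# Legacy builder: each completed step prints an intermediate image ID.
DOCKER_BUILDKIT=0 docker build .
# Step 3/5 : RUN apt-get update
#  ---> Running in 1a2b3c4d5e6f
#  ---> 9f8e7d6c5b4a            <- cached intermediate image, runnable directly

# Drop into the filesystem exactly as it was after that step:
docker run --rm -it 9f8e7d6c5b4a sh
```

This per-step ID is the debugging handle the rest of the thread is about; under BuildKit it is no longer printed.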
Oh, and printing that short hash on the screen every time improves usability, in that people don't have to know some special parameter to debug build images. If it were the leftmost item, it would also be in the same spot and the same size every time (resulting in more columnar output), and therefore improve the readability of the output as well.
I am also trying to debug a docker build and expected all intermediate builds to be exported as images, since that was the previous behaviour. @AkihiroSuda Is there currently a way to look into the filesystem after the last successful build step?
@niklasbuechner I don't expect BuildKit is changing how intermediate cached images are built; I expect it's just not displaying the hash IDs. What I don't know is whether there's a way of listing them after the fact, for workaround purposes.
Yeah, setting DOCKER_BUILDKIT=1 degrades the debugging experience a lot. Without DOCKER_BUILDKIT, it was possible to run a container from the layer right before the failed step; with DOCKER_BUILDKIT it's not possible anymore, since it just doesn't show cached image IDs. This looks like a clear regression...
@AkihiroSuda, could you please advise us: have there been any changes regarding debugging of failed builds with DOCKER_BUILDKIT=1? How do you debug such cases in your own practice? I.e., if the build fails, I'd like to somehow spawn a container from the layer before the failed step and get a bash shell there. Previously this was possible with […]
My own (bad) practice is […]
@dko-slapdash I do not think you can debug them. It's a serious failing of the BuildKit feature.
@AkihiroSuda Is there some alternative approach, or is there none? If not, how painful is it in practice; are you satisfied with it in general, or not quite? (Just curious.)
Not satisfied, but I can't come up with another approach that works with existing releases.
This simple pointer saved my day. I knew about […]
I don't understand this technique. I added […]
And how do I get the […]?
ps auxw
Oh I see, just find it in the standard running processes. Sounds good. Thanks.
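The elided trick in this exchange appears to be pausing the build with a long-running command and then attaching to the build container from the outside. A sketch under that assumption; the sleep duration and `<container-id>` are placeholders:

```shell
# In the Dockerfile, insert a pause just before the step you want to inspect:
#   RUN sleep 1h
#
# Start the build; while it is stuck on the sleep, from another terminal:
docker ps                          # find the container currently running the sleep
docker exec -it <container-id> sh  # <container-id> copied from `docker ps` output
# Inside the shell, `ps auxw` confirms you are in the paused build step.
```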
Is there any way to export already-built layers to the output specified, for debugging?
Not yet, contribution is wanted.
@AkihiroSuda Hmmm, I wonder how tough it is to build on my macOS. Do I need to build all of Docker, or just BuildKit, to try out making a contribution?
Should be quite easy if you have Docker for Mac.
Just BuildKit.
Sweet, thanks for the quickstart. I don't know Go, but I'm just going to go! Not promising anything. 😛
My bad practice is notepad debugging: I comment out everything in the Dockerfile after the failing step, then build. Then I apply steps manually and add them back as RUN commands in the Dockerfile.
Ouch, that sounds very painful for certain Docker images that take a long time to build. :(
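The comment-out approach above can be partly automated: keep a truncated copy of the Dockerfile that stops just before the failing instruction, build it, and poke around in the result. A sketch; the line number, file names, and tag are assumptions for illustration:

```shell
# Assume the build fails at line 7 of the Dockerfile: keep lines 1-6.
head -n 6 Dockerfile > Dockerfile.debug

# Build the truncated Dockerfile and drop into the resulting image.
docker build -f Dockerfile.debug -t debug-image .
docker run --rm -it debug-image sh
```

This avoids hand-editing the real Dockerfile, though it still rebuilds from cache each time you move the cut point.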
Could we get a somewhat official answer on which of the following are true, in the context of building with BuildKit:
[…]
"There is no image hash for an intermediate cache because it is not exported to the docker image store." is correct. Never doubt @AkihiroSuda. Closing in favor of #1472
@tonistiigi That's not true; intermediate builds are cached and available to docker run. I've used them countless times to figure out what's wrong. In fact, Docker wouldn't even work if they weren't, because Docker re-uses them with every build. If they were not cached, Docker would have to rebuild your image repeatedly, every time you ran docker build.
@TrentonAdams That's non-BuildKit mode.
lol, I was just going to ask that question; you beat me to it.
So how does Docker cache the image then? Because if it didn't, you'd have to rebuild every time. Is that a special cache of BuildKit, then?
Perhaps this issue title, "Always display image hashes", is a bit misguided. Regardless of whether intermediate filesystem layer content hashes are output, there is an implicit usability feature here that many Docker users have stumbled upon. By not displaying (or computing) intermediate hashes, BuildKit builds are harder to debug, whereas the classic Docker layer IDs gave developers a handle they could use to debug a failed build.
Good question! I'm not 100% sure, but given that the buildkit/solver design docs state:
It seems like it tries to avoid computing checksums too often (which speeds up builds). However, this could also be the reason they say that intermediate layers are not stored in the usual Docker image store. Conceivably intermediate layers are stored somewhere, and could be exported (given that […]). The "user story" way to state this feature might be:
[…]
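On the cache question above: under BuildKit the cache lives in a separate build-cache store, not in the image store, which is why no runnable image IDs appear even though nothing is rebuilt. That cache can still be observed and cleared with the standard CLI (a sketch; exact output varies by Docker version):

```shell
# "Build Cache" is reported as its own line item, separate from "Images".
docker system df

# Clears BuildKit's build cache; subsequent builds start from scratch.
docker builder prune
```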
See this linked issue […]
For those stuck here, a temporary workaround is docker-compose, which (as of writing, v1.29.2) still doesn't use BuildKit when you do […]
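In the same spirit, the legacy builder can be selected explicitly for a single build, restoring the per-step image IDs. A sketch; the tag and build context are placeholders:

```shell
# Opt out of BuildKit for one build; each step then prints an
# intermediate image ID that `docker run` accepts.
DOCKER_BUILDKIT=0 docker build -t myimage .

# docker-compose v1.x builds without BuildKit unless both
# COMPOSE_DOCKER_CLI_BUILD=1 and DOCKER_BUILDKIT=1 are set.
docker-compose build
```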
Why is this issue closed? Building with the […]
@kneekey23 Consolidated into #1472, I believe.
It's tough to debug a Docker build when I can't just get into the previously successful intermediate build image and run the next command manually...
I would therefore argue that image hashes should always be displayed, just as they are in the classic docker build.