Switch Docker build to a glibc distro #6255

Closed
cgst opened this issue Dec 9, 2022 · 5 comments

cgst (Contributor) commented Dec 9, 2022

Rationale

The current Docker build is based on golang:1.19-alpine3.16, which is optimized for size but considered experimental.

Quoting from the Go project's page on Docker Hub:

golang:<version>-alpine

This variant is highly experimental, and not officially supported by the Go project (see golang/go#19938 for details).

The main caveat to note is that it does use musl libc instead of glibc and friends, which can lead to unexpected behavior. See this Hacker News comment thread for more discussion of the issues that might arise and some pro/con comparisons of using Alpine-based images.

The libc implementation is of particular interest to Erigon users because MDBX makes extensive use of POSIX file locks for shared access. One example of unexpected behavior arises when you combine a glibc-based database reader (e.g., a locally built cmd/hack) with a musl-based writer that holds a non-exclusive lock (e.g., the thorax/erigon Erigon daemon running in the same process namespace). The reader hangs even though MDBX and Erigon are designed to support this use case. The workaround is to rebuild the reader with musl (difficult on most dev workstations) or rebuild the writer with glibc (i.e., give up on the official Docker image).
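
As an aside, a quick way to check which libc a given binary is linked against (the path below is just an example):

```
# A musl build reports an interpreter like /lib/ld-musl-x86_64.so.1,
# a glibc build reports something like /lib64/ld-linux-x86-64.so.2.
file /usr/local/bin/erigon

# Alternatively, list the shared libraries the binary loads:
ldd /usr/local/bin/erigon
```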

Users are free to build Erigon in any number of ways, and those who are sensitive to the libc implementation can roll their own build. Nevertheless, it's reasonable to ask if the default/official Docker build should be based on glibc or musl libc. It is my personal view that glibc is more widely supported and leads to fewer surprises, which makes for a more sensible default.

Implementation

Rebase the official Dockerfile on a glibc distro, such as Bullseye (Debian). I've tried it locally and it's a small change.
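
A minimal sketch of the change, assuming a Bullseye-based builder and a bullseye-slim runtime (the image tags, package list, and build target are illustrative, not the exact official Dockerfile):

```
# Builder: glibc-based Go image instead of golang:1.19-alpine3.16
FROM golang:1.19-bullseye AS builder
WORKDIR /app
ADD . .
RUN make erigon

# Runtime: slim Debian instead of alpine:3.16
FROM debian:bullseye-slim
RUN apt-get update \
    && apt-get install -y --no-install-recommends ca-certificates curl jq tzdata \
    && rm -rf /var/lib/apt/lists/*
COPY --from=builder /app/build/bin/erigon /usr/local/bin/erigon
```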

Considerations

  • Alpine Linux images are optimized for size, so just about anything else we use will be bigger. For example, Debian's bullseye-slim base is ~30MB vs Alpine's ~3MB.
  • Incompatibility goes both ways. It's conceivable others may expect or prefer a musl-based Erigon build.

elee1766 (Contributor) commented Dec 10, 2022

like abhishek mentioned on discord, it would be best imo to have both a debian-based and an alpine-based image.

we can keep the alpine-based image as the default - that won't break existing setups

it seems the binaries are actually already built in the glibc world? https://github.com/ledgerwatch/erigon/blob/devel/.goreleaser.yml but in the release.Dockerfile it's an alpine container.

not 100% sure how erigon does its releases to dockerhub, or really how goreleaser works. I don't see it manually linking against musl in goreleaser. is the release container glibc-compiled binaries running in alpine?

i think the adduser command needs to be changed for a debian-based build

cgst (Contributor, Author) commented Dec 13, 2022

Maybe @timmyers can chime in?

elee1766 (Contributor) commented Dec 15, 2022

so `RUN adduser -D -u $UID -g $GID erigon` will need to be changed to something along the lines of `RUN adduser --uid $UID erigon`
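
roughly, something like this (a sketch - exact flags depend on whether adduser comes from busybox or Debian's adduser package; $UID/$GID are the build args from the current Dockerfile):

```
# Alpine (busybox adduser): -D = no password, -u = uid, -g = GECOS/comment field
RUN adduser -D -u $UID -g $GID erigon

# Debian (adduser package): long options instead of the busybox shorthand
RUN adduser --disabled-password --gecos "" --uid $UID erigon
```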

i moved our company erigon ci to a public repo and moved a node to run on the debian-based image.

will report if i notice anything wrong with that node in comparison to our others. seems to be fine so far.

one improvement i found - doing the COPY binary by binary allows parallel upload+download of layers, which has really increased the speed of downloading and uploading images. will maybe throw in a PR

AskAlexSharov pushed a commit that referenced this issue Dec 18, 2022
this PR makes many changes to the dockerfile in hopes of making it
faster to build, download, and upload.

1. Instead of copying the entire repository at once, it first copies the
go.mod and go.sum files, then runs go mod download. This allows the
dependencies to exist in their own layer, avoiding the need for the
build cache there (see the sketch after this list).

2. the compilation of the db-tools is moved to a second image. Since
these are not often changed, not needing to rebuild them every time
makes things a lot faster for local development. It also reduces the
amount that needs to be uploaded when creating a new release - since
the db-tools layer will be unchanged

3. each binary is copied individually into its own layer. This allows
docker to upload/download each binary in parallel, along with better
recovery if the download of the existing 500mb layer fails (since it is
done in parts)
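
a rough sketch of the layering from points 1 and 3 (stage names, the build
target, and the binary list here are illustrative, not the exact Dockerfile
from this PR):

```
FROM golang:1.19-alpine3.16 AS builder
WORKDIR /app

# 1. dependencies get their own cacheable layer
ADD go.mod go.mod
ADD go.sum go.sum
RUN go mod download

# source changes only invalidate the layers from here on
ADD . .
RUN make all

FROM alpine:3.16
# 3. one COPY per binary = one small layer per binary, so layers can be
#    pushed/pulled in parallel and a failed transfer only retries that layer
COPY --from=builder /app/build/bin/erigon    /usr/local/bin/erigon
COPY --from=builder /app/build/bin/rpcdaemon /usr/local/bin/rpcdaemon
COPY --from=builder /app/build/bin/sentry    /usr/local/bin/sentry
```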

it also adds a second dockerfile which builds erigon with a debian
image, as a start to addressing #6255

while this dockerfile has a greater total image size, the total size of
the layers that differ across versions will be smaller, resulting in
smaller effective upload & download sizes

with all that said - I am not really sure how the existing erigon
ci/release process works, so maybe these changes are incompatible with
it.


# comparison

## docker build speed
in both examples, i build erigon, then change a file in
core/blockchain.go (resulting in recompilation)

these are the produced logs

### CURRENT DOCKERFILE
```
[+] Building 70.1s (18/18) FINISHED
 => [internal] load build definition from Dockerfile                                                                     0.1s
 => => transferring dockerfile: 38B                                                                                      0.0s
 => [internal] load .dockerignore                                                                                        0.2s
 => => transferring context: 34B                                                                                         0.0s
 => resolve image config for docker.io/docker/dockerfile:1.2                                                             0.4s
 => CACHED docker-image://docker.io/docker/dockerfile:1.2@sha256:e2a8561e419ab1ba6b2fe6cbdf49fd92b95912df1cf7d313c3e223  0.0s
 => [internal] load metadata for docker.io/library/alpine:3.16                                                           0.4s
 => [internal] load metadata for docker.io/library/golang:1.19-alpine3.16                                                0.4s
 => [builder 1/5] FROM docker.io/library/golang:1.19-alpine3.16@sha256:4b4f7127b01b372115ed9054abc6de0a0b3fdea224561b35  0.0s
 => [stage-1 1/5] FROM docker.io/library/alpine:3.16@sha256:b95359c2505145f16c6aa384f9cc74eeff78eb36d308ca4fd902eeeb0a0  0.0s
 => [internal] load build context                                                                                        0.1s
 => => transferring context: 111.58kB                                                                                    0.0s
 => CACHED [builder 2/5] RUN apk --no-cache add build-base linux-headers git bash ca-certificates libstdc++              0.0s
 => CACHED [builder 3/5] WORKDIR /app                                                                                    0.0s
 => [builder 4/5] ADD . .                                                                                                0.5s
 => [builder 5/5] RUN --mount=type=cache,target=/root/.cache     --mount=type=cache,target=/tmp/go-build     --mount=t  61.3s
 => CACHED [stage-1 2/5] RUN apk add --no-cache ca-certificates curl libstdc++ jq tzdata                                 0.0s
 => [stage-1 3/5] COPY --from=builder /app/build/bin/* /usr/local/bin/                                                   0.2s
 => [stage-1 4/5] RUN adduser -D -u 1000 -g 1000 erigon                                                                  0.8s
 => [stage-1 5/5] RUN mkdir -p ~/.local/share/erigon                                                                     1.0s
 => exporting to image                                                                                                   2.6s
 => => exporting layers                                                                                                  2.6s
 => => writing image sha256:948c68e8d2f64df2c4fa758a370b8de8c4aab65c91c3aeca96662ec8eafb7815                             0.0s

```
Since the downloading of dependencies is in the build cache - rebuild time
does not suffer, but notice that the dependencies do not go into their own layer.

More importantly, since the db-tools are being rebuilt every time, an
extra 10-20s is added to the docker build time.

### NEW DOCKERFILE:
```
[+] Building 52.6s (50/50) FINISHED
 => [internal] load build definition from Dockerfile                                                  0.3s
 => => transferring dockerfile: 38B                                                                   0.0s
 => [internal] load .dockerignore                                                                     0.2s
 => => transferring context: 34B                                                                      0.0s
 => resolve image config for docker.io/docker/dockerfile:1.2                                          0.4s
 => CACHED docker-image://docker.io/docker/dockerfile:1.2@sha256:e2a8561e419ab1ba6b2fe6cbdf49fd92b95  0.0s
 => [internal] load metadata for docker.io/library/alpine:3.16                                        0.5s
 => [internal] load metadata for docker.io/library/golang:1.19-alpine3.16                             0.5s
 => [tools-builder 1/9] FROM docker.io/library/golang:1.19-alpine3.16@sha256:4b4f7127b01b372115ed905  0.0s
 => [internal] load build context                                                                     0.1s
 => => transferring context: 279.70kB                                                                 0.0s
 => [stage-2  1/28] FROM docker.io/library/alpine:3.16@sha256:b95359c2505145f16c6aa384f9cc74eeff78eb  0.0s
 => CACHED [tools-builder 2/9] RUN apk --no-cache add build-base linux-headers git bash ca-certifica  0.0s
 => CACHED [tools-builder 3/9] WORKDIR /app                                                           0.0s
 => CACHED [builder 4/8] ADD go.mod go.mod                                                            0.0s
 => CACHED [builder 5/8] ADD go.sum go.sum                                                            0.0s
 => CACHED [builder 6/8] RUN go mod download                                                          0.0s
 => [builder 7/8] ADD . .                                                                             0.6s
 => [builder 8/8] RUN --mount=type=cache,target=/root/.cache     --mount=type=cache,target=/tmp/go-  39.7s
 => CACHED [stage-2  2/28] RUN apk add --no-cache ca-certificates libstdc++ tzdata                    0.0s
 => CACHED [stage-2  3/28] RUN apk add --no-cache curl jq bind-tools                                  0.0s
 => CACHED [stage-2  4/28] RUN adduser -D -u 1000 -g 1000 erigon                                      0.0s
 => CACHED [stage-2  5/28] RUN mkdir -p ~/.local/share/erigon                                         0.0s
 => CACHED [tools-builder 4/9] ADD Makefile Makefile                                                  0.0s
 => CACHED [tools-builder 5/9] ADD tools.go tools.go                                                  0.0s
 => CACHED [tools-builder 6/9] ADD go.mod go.mod                                                      0.0s
 => CACHED [tools-builder 7/9] ADD go.sum go.sum                                                      0.0s
 => CACHED [tools-builder 8/9] RUN mkdir -p /app/build/bin                                            0.0s
 => CACHED [tools-builder 9/9] RUN make db-tools                                                      0.0s
 => CACHED [stage-2  6/28] COPY --from=tools-builder /app/build/bin/mdbx_chk /usr/local/bin/mdbx_chk  0.0s
 => CACHED [stage-2  7/28] COPY --from=tools-builder /app/build/bin/mdbx_copy /usr/local/bin/mdbx_co  0.0s
 => CACHED [stage-2  8/28] COPY --from=tools-builder /app/build/bin/mdbx_drop /usr/local/bin/mdbx_dr  0.0s
 => CACHED [stage-2  9/28] COPY --from=tools-builder /app/build/bin/mdbx_dump /usr/local/bin/mdbx_du  0.0s
 => CACHED [stage-2 10/28] COPY --from=tools-builder /app/build/bin/mdbx_load /usr/local/bin/mdbx_lo  0.0s
 => CACHED [stage-2 11/28] COPY --from=tools-builder /app/build/bin/mdbx_stat /usr/local/bin/mdbx_st  0.0s
 => [stage-2 12/28] COPY --from=builder /app/build/bin/devnet /usr/local/bin/devnet                   0.4s
 => [stage-2 13/28] COPY --from=builder /app/build/bin/downloader /usr/local/bin/downloader           0.5s
 => [stage-2 14/28] COPY --from=builder /app/build/bin/erigon /usr/local/bin/erigon                   0.5s
 => [stage-2 15/28] COPY --from=builder /app/build/bin/erigon-cl /usr/local/bin/erigon-cl             0.5s
 => [stage-2 16/28] COPY --from=builder /app/build/bin/evm /usr/local/bin/evm                         0.4s
 => [stage-2 17/28] COPY --from=builder /app/build/bin/hack /usr/local/bin/hack                       0.4s
 => [stage-2 18/28] COPY --from=builder /app/build/bin/integration /usr/local/bin/integration         0.4s
 => [stage-2 19/28] COPY --from=builder /app/build/bin/lightclient /usr/local/bin/lightclient         0.5s
 => [stage-2 20/28] COPY --from=builder /app/build/bin/observer /usr/local/bin/observer               0.4s
 => [stage-2 21/28] COPY --from=builder /app/build/bin/pics /usr/local/bin/pics                       0.4s
 => [stage-2 22/28] COPY --from=builder /app/build/bin/rpcdaemon /usr/local/bin/rpcdaemon             0.4s
 => [stage-2 23/28] COPY --from=builder /app/build/bin/rpctest /usr/local/bin/rpctest                 0.4s
 => [stage-2 24/28] COPY --from=builder /app/build/bin/sentinel /usr/local/bin/sentinel               0.3s
 => [stage-2 25/28] COPY --from=builder /app/build/bin/sentry /usr/local/bin/sentry                   0.4s
 => [stage-2 26/28] COPY --from=builder /app/build/bin/state /usr/local/bin/state                     0.5s
 => [stage-2 27/28] COPY --from=builder /app/build/bin/txpool /usr/local/bin/txpool                   0.5s
 => [stage-2 28/28] COPY --from=builder /app/build/bin/verkle /usr/local/bin/verkle                   0.5s
 => exporting to image                                                                                1.5s
 => => exporting layers                                                                               1.3s
 => => writing image sha256:7c577386242d539b77f45774ac2800dd449ffc9f187387a4a69ad0cd79fc9b04          0.0s
 => => naming to docker.io/library/erigon                                                             0.0s
```

since dependencies and db-tools versions didn't change - all those layers
are cached, and did not need to be rebuilt or redownloaded

an additional advantage - build tools that are able to share cached
layers (such as kaniko or the gitlab runner) can share dependency
layers automatically between runs, whether sequential or concurrent,
while cache mounts are an extra piece that needs to be configured and
cannot be shared between concurrent builds

## docker push/pull speed 

see this example of the image being pushed to a docker repo


### CURRENT DOCKERFILE

```
The push refers to repository [cr.gfx.cafe/images/erigon/test]
51af77f8740b: Pushing  4.096kB
fb257f924975: Pushing [==================================================>]  11.78kB
9057ae9f6ad6: Pushing [>                                                  ]   17.8MB/962.8MB
0ffb38bafc9e: Pushing [=================================>                 ]  4.338MB/6.477MB
e5e13b0c77cb: Layer already exists
```

with the existing image, nearly everything is packed into a single very
large layer (~960MB in the push above). if the upload fails partway
through - the entire layer upload is aborted, and i must try again. it is
the same with the download

### NEW DOCKERFILE
```
The push refers to repository [cr.gfx.cafe/images/erigon/test]
ac47c1bb87c6: Pushing [===========================>                       ]  7.866MB/14.22MB
2ba8ef6b2d4f: Pushing [==========>                                        ]  9.177MB/45.08MB
49ab36df341c: Pushing [===========>                                       ]  11.31MB/48.23MB
71f41bc3c4d4: Pushing [===========>                                       ]  15.57MB/68.94MB
7f40d9db27c5: Pushing [=====>                                             ]  8.325MB/82.98MB
20866e83eb57: Waiting
f15875fce722: Waiting
eb299c01a4b0: Waiting
b5f45cfe93d4: Waiting
15054c0c5515: Waiting
1b757dfa7311: Waiting
8e1176a93523: Waiting
53cf053c5cd7: Waiting
a88382869dce: Waiting
5380564abef3: Waiting
8b49a1ab1232: Waiting
774dcc434c98: Waiting
1568598ebd63: Waiting
6d1ef72c9409: Preparing
1fccdb04baaa: Waiting
2a3531caafa0: Waiting
51d43a55eebb: Waiting
b94f90c4bd95: Waiting
f49e2054b147: Waiting
65324ece5c8a: Waiting
5d448d0b43e8: Waiting
38c55858fb7a: Waiting
e5e13b0c77cb: Waiting
```

since the image is broken up into many small parts - the upload can
happen in parallel, which is faster. Along with this, a failed upload only
costs us the small layer that failed, since we are uploading smaller chunks

github-actions bot commented Jan 25, 2023

This issue is stale because it has been open for 40 days with no activity. Remove stale label or comment, or this will be closed in 7 days.

github-actions bot added the Stale label Jan 25, 2023

github-actions bot commented Feb 1, 2023

This issue was closed because it has been stalled for 7 days with no activity.

github-actions bot closed this as not planned Feb 1, 2023