This repository has been archived by the owner on Nov 15, 2023. It is now read-only.

PVF: separate worker binaries and build with musl #7147

Closed
wants to merge 14 commits

Conversation

mrcnski
Contributor

@mrcnski mrcnski commented Apr 28, 2023

PULL REQUEST

Overview

This PR does several related things:

  1. Refactors the workers into their own crates (pvf/prepare-worker and pvf/execute-worker), which do not depend on the pvf host crate (pvf).
  2. Implements a binary-builder crate at pvf/binary-builder that builds the worker binaries as part of the cargo process.
  3. Embeds the built binaries into polkadot at compile time, and extracts them at runtime.

More info below.

1. Worker refactor

This PR does an additional refactor on top of #7101.

The worker binaries must be built first and embedded in polkadot so that the host can extract them. Therefore pvf/prepare-worker and pvf/execute-worker cannot depend on pvf; to remove the dependency, common functionality was extracted into pvf/common.

Integration tests were moved from pvf/worker/tests to pvf/tests because they need the PVF host.

2. binary-builder

I initially tried some hacks to build the binaries here, but eventually realized that in order to access the built binaries at compile time (through the OUT_DIR env var) we needed something like Substrate's wasm-builder. So I copied the wasm-builder code, removed the references to "wasm", and reworked it to support non-wasm targets.

It builds for the current target, unless we're on a "secure-mode" platform (at the moment just Linux x86_64), in which case it tries to build with musl.

Reviewers may find it helpful to diff binary-builder with Substrate's wasm-builder.
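To make the idea concrete, here is a heavily simplified, hypothetical build.rs sketch of what binary-builder does conceptually: run a nested cargo build for a worker crate into OUT_DIR and hand the resulting path to the including crate. Crate and binary names are illustrative, and the real code (adapted from wasm-builder) handles many more edge cases.

use std::{env, path::PathBuf, process::Command};

fn main() {
    let out_dir = PathBuf::from(env::var("OUT_DIR").expect("cargo sets OUT_DIR"));

    // Use musl only on the "secure-mode" platform (Linux x86_64 for now);
    // otherwise fall back to whatever target the outer build uses.
    let is_secure_platform = env::var("CARGO_CFG_TARGET_OS").unwrap_or_default() == "linux"
        && env::var("CARGO_CFG_TARGET_ARCH").unwrap_or_default() == "x86_64";
    let target = if is_secure_platform {
        "x86_64-unknown-linux-musl".to_string()
    } else {
        env::var("TARGET").expect("cargo sets TARGET")
    };

    // Nested cargo invocation into a separate target dir so it does not fight
    // over the outer build's target-directory lock.
    let status = Command::new(env::var("CARGO").unwrap_or_else(|_| "cargo".into()))
        .args(["build", "--release", "-p", "polkadot-prepare-worker"]) // hypothetical crate name
        .args(["--target", &target])
        .arg("--target-dir")
        .arg(&out_dir)
        // The outer build's rustflags tend to leak into the nested build and
        // cause spurious rebuilds; cleaning the env is much of what wasm-builder does.
        .env_remove("CARGO_ENCODED_RUSTFLAGS")
        .status()
        .expect("failed to spawn nested cargo build");
    assert!(status.success(), "building the worker binary failed");

    // Expose the built binary's path to the including crate at compile time.
    println!(
        "cargo:rustc-env=PREPARE_WORKER_BIN={}",
        out_dir.join(&target).join("release").join("polkadot-prepare-worker").display()
    );
}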

3. worker binary extraction

This is fairly straightforward. If the binaries are not already present, we decompress them with zstd and write them to a temporary location under the database path (similar to artifacts), with executable permissions.
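As an illustration of this step, a hedged sketch (the embedded file name and cache layout are assumptions, not the PR's exact code):

use std::{fs, io::Write, os::unix::fs::PermissionsExt, path::{Path, PathBuf}};

// Hypothetical: the build step compressed the worker with zstd and placed it
// in OUT_DIR under this name.
const PREPARE_WORKER_ZST: &[u8] =
    include_bytes!(concat!(env!("OUT_DIR"), "/polkadot-prepare-worker.zst"));

/// Decompress the embedded worker into the cache path (if not already there)
/// and mark it executable, returning the path to spawn.
fn extract_prepare_worker(cache_path: &Path, node_version: &str) -> std::io::Result<PathBuf> {
    let dest = cache_path.join(format!("prepare-worker-{node_version}"));
    if !dest.exists() {
        let bytes = zstd::decode_all(PREPARE_WORKER_ZST)?;
        let mut file = fs::File::create(&dest)?;
        file.write_all(&bytes)?;
        // Owner: read/write/execute; group/others: read only.
        fs::set_permissions(&dest, fs::Permissions::from_mode(0o744))?;
    }
    Ok(dest)
}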

The only gotcha here is that we still have to support the old program_path parameter, which is still used for puppet binaries in tests. It is now an Option. (Now that I think about it, we could gate it behind #[cfg(test)].)

TODO

  • Fix some issues when building with musl
  • Fix some minor compile errors
  • Address some todo's in code (most can be follow-up issues)
  • Document
  • Gate program_path parameter behind #[cfg(test)]?

Related issues

Closes paritytech/polkadot-sdk#650

mrcnski added 13 commits April 13, 2023 13:01
The worker binaries must be built first so that the host can embed them into `polkadot`. Therefore `pvf/worker` could not depend on `pvf`, so to remove the dependency, common functionality was extracted into `pvf/common`.

(NOTE: We already needed to do this host/worker/common separation as part of https://github.com/paritytech/polkadot/issues/7116, it's just unfortunate that it had to be done here and complicate this PR.)

Integration tests were moved from `pvf/worker/tests` to `pvf/tests` because they need the PVF host.
@mrcnski mrcnski added A0-please_review Pull request needs code review. B0-silent Changes should not be mentioned in any release notes A4-companion A PR that needs a companion PR to merge in parallel for one of its downstream dependencies. C1-low PR touches the given topic and has a low impact on builders. D3-trivial 🧸 PR contains trivial changes in a runtime directory that do not require an audit. T4-parachains_engineering This PR/Issue is related to Parachains performance, stability, maintenance. E5-needs_cumulus_pr This is an issue that needs to be implemented upstream in Cumulus. labels Apr 28, 2023
@slumber
Contributor

slumber commented Apr 28, 2023

Refactors the workers into their own crates (pvf/prepare-worker and pvf/execute-worker), which do not depend on the pvf host crate (pvf).

Why do we need separate binaries instead of a single one, given the code is pretty compact?

@@ -851,15 +892,83 @@ fn pulse_every(interval: std::time::Duration) -> impl futures::Stream<Item = ()>
.map(|_| ())
}

// TODO: Should we purge unneeded binaries?
// TODO: Test on windows.
Contributor


We're using the POSIX sockets API for node-worker communication; Windows has long been unsupported.

@s0me0ne-unkn0wn
Contributor

Wow, you decided to just implement it instead of discussing infinitely on the forum? I love it! 😁

If the binaries are not already present, we decompress with zstd

And if they are already there, how do you know if they are of the same version as the node? And if they are worker binaries at all, not malicious binaries? Probably it's a good idea to either check hashes or always overwrite them?

write them to a tmp location with executable permissions

Are we guaranteed to have a writeable location? For sure we do for storage purposes, but is that location guaranteed to allow executable files to be written (it can have noexec mount flag and it's perfectly okay right now)? I believe it should be evaluated.

@s0me0ne-unkn0wn
Contributor

CC @wpank (affects validators UX significantly)

@mrcnski
Contributor Author

mrcnski commented Apr 28, 2023

Why do we need separate binaries instead of a single one, given the code is pretty compact?

One binary may have functionality or dependencies that the other one does not, etc. With distinct binaries we can apply an appropriate level of sandboxing to each one separately. There are other benefits I liked, such as having a separate LOG_TARGET for each one, and I also think it's a nicer way to organize the code.

And if they are already there, how do you know if they are of the same version as the node? And if they are worker binaries at all, not malicious binaries? Probably it's a good idea to either check hashes or always overwrite them?

We do include the version in the file name, but checking expected hashes is a good idea. I'll do it in a follow-up.
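For illustration, such a check could be as simple as comparing against a hash baked in at build time (a hedged sketch assuming the blake3 crate; the constant and function names are hypothetical):

// Hypothetical: this constant would be generated by the build script from the
// embedded worker bytes.
const EXPECTED_PREPARE_WORKER_HASH: [u8; 32] = [0u8; 32]; // placeholder

/// Returns true if the already-extracted worker on disk matches the expected hash.
fn extracted_worker_is_valid(path: &std::path::Path) -> std::io::Result<bool> {
    let bytes = std::fs::read(path)?;
    Ok(*blake3::hash(&bytes).as_bytes() == EXPECTED_PREPARE_WORKER_HASH)
}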

Are we guaranteed to have a writeable location? For sure we do for storage purposes, but is that location guaranteed to allow executable files to be written (it can have noexec mount flag and it's perfectly okay right now)? I believe it should be evaluated.

Really good questions. Maybe we should first try the database dir, then try the directory we are executing from (with the polkadot binary), and ultimately if neither work just fail with an error message?

(affects validators UX significantly)

I don't see how it does. It's still a single binary from their perspective. I think it's very likely that we have to split out the binaries for our security work, and this is a nice solution that doesn't require changes from the validators or our release process.

@sandreim
Contributor

sandreim commented Apr 28, 2023

Why do we need separate binaries instead of a single one, given the code is pretty compact?

One binary may have functionality or dependencies that the other one does not, etc. With distinct binaries we can apply an appropriate level of sandboxing to each one separately. There are other benefits I liked, such as having a separate LOG_TARGET for each one, and I also think it's a nicer way to organize the code.

Why can't the sandboxing level be set dynamically based on which functionality is invoked from a single binary? Also, separate log targets can be achieved without separate binaries.

@mrcnski
Contributor Author

mrcnski commented Apr 28, 2023

Why can't the sandboxing level be set dynamically based on which functionality is invoked from a single binary?

We could, but @koute's syscall script analyzes the entire binary, so it would pick up things common to both workers, which is more than strictly necessary. And he mentioned ROP gadgets: smaller binaries mean a lower attack surface.

Also separate log targets can be achieved without separate binaries.

Oh, when I tried with a configurable log_target I got an error message from gum about a const item being expected, or something like that. [1]

Stepping back, why would we want to have both workers in the same binary?

Really good questions. Maybe we should first try the database dir, then try the directory we are executing from (with the polkadot binary), and ultimately if neither work just fail with an error message?

I've been thinking about this and I'm now unsure about extracting executables. Mounting /tmp with noexec seems like a good practice. And validators clamping down on permissions is something we would certainly want to encourage, and not make impossible.

Footnotes

  1. I suppose feature flags could be used, but just having two separate crates seems cleaner.

@mrcnski
Contributor Author

mrcnski commented May 1, 2023

Nice, and I am surprised to find (at a cursory glance) that this is also possible on macOS. I will have to research it more this week though.

It actually does not seem possible on macOS because there is no equivalent of fexecve. And even getting an fd in the first place is a hack. But for this platform we can create a temp file, execute it, and unlink it right after to remove it from the file system. If that fails we can print an error telling the user to loosen up permissions, which on this platform is fine.

On Linux there are no issues. With memfd_create I found out that we can also do file sealing, which is a nice bonus for security.
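For reference, a rough sketch of the memfd_create + sealing + fexecve idea on Linux (assuming the libc crate; illustrative only, not the PR's actual code, and glossing over fork-safety details):

use std::ffi::CString;
use std::io::Write;
use std::os::fd::FromRawFd;

/// Spawn an embedded worker binary from memory without touching the filesystem.
fn spawn_worker_from_memory(binary: &[u8]) -> std::io::Result<()> {
    unsafe {
        // Anonymous, memory-backed file that supports sealing.
        let fd = libc::memfd_create(
            b"pvf-worker\0".as_ptr() as *const libc::c_char,
            libc::MFD_CLOEXEC | libc::MFD_ALLOW_SEALING,
        );
        if fd < 0 {
            return Err(std::io::Error::last_os_error());
        }
        let mut file = std::fs::File::from_raw_fd(fd);
        file.write_all(binary)?;

        // Seal the memfd so its contents can no longer be modified or resized.
        if libc::fcntl(
            fd,
            libc::F_ADD_SEALS,
            libc::F_SEAL_WRITE | libc::F_SEAL_SHRINK | libc::F_SEAL_GROW | libc::F_SEAL_SEAL,
        ) < 0
        {
            return Err(std::io::Error::last_os_error());
        }

        match libc::fork() {
            // Child: replace the process image with the sealed in-memory binary.
            0 => {
                let arg0 = CString::new("pvf-worker").unwrap();
                let argv = [arg0.as_ptr(), std::ptr::null()];
                let envp: [*const libc::c_char; 1] = [std::ptr::null()];
                libc::fexecve(fd, argv.as_ptr(), envp.as_ptr());
                // Only reached if fexecve failed.
                libc::_exit(1)
            }
            -1 => Err(std::io::Error::last_os_error()),
            // Parent: the child has its own copy of the fd, so ours can simply be dropped.
            _child_pid => Ok(()),
        }
    }
}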

@bkchr
Member

bkchr commented May 1, 2023

@bkchr I do see your point. IMO they are distinct/independent processes and can be split, despite some shared code. And as mentioned, splitting the binaries is better for sandboxing anyway and I think it should be done.

But currently all we want to achieve is to be able to run something and have logging enabled? I just realized that instead of separating the workers etc right now, we could just compile the entire node for musl. Then we can start running with sandboxing and log the syscalls being done. In parallel we (you) can then look into KVM. Should speed up the entire process quite a bit.

@mrcnski
Contributor Author

mrcnski commented May 2, 2023

@bkchr I thought based on this comment that splitting the binaries should be done before KVM. So my plan was to do this, enable the logging, and then switch to KVM. Anyway, with this PR being close to finished I don't think it should be abandoned now.

@bkchr
Member

bkchr commented May 2, 2023

  1. It was just some random idea by me at the weekend :P
  2. I would also not abandon the PR.

It was just about ensuring that we can have this running everywhere without needing any changes by operators right now. But on Linux with memfd_create it should work fine.

@mrcnski
Contributor Author

mrcnski commented May 14, 2023

Update: some issues were discovered when working on "3. worker binary extraction". After discussing with @s0me0ne-unkn0wn and @eskimor, it seems better not to do this and to instead properly distribute multiple binary files; I opened https://forum.polkadot.network/t/ux-of-distributing-multiple-binaries-take-2/2854.

In the meantime, I'll proceed along these lines (#7147 (comment)):

I just realized that instead of separating the workers etc right now, we could just compile the entire node for musl. Then we can start running with sandboxing and log the syscalls being done. In parallel we (you) can then look into KVM. Should speed up the entire process quite a bit.

I'll close this PR if there are no objections. I'll raise another PR with just the "1. Worker refactor" part.

@mrcnski mrcnski self-assigned this May 15, 2023
@mrcnski
Contributor Author

mrcnski commented May 16, 2023

The issues with "3. worker binary extraction" are:

  1. No good way to do it on MacOS. There are/were some super hacky ways to do a memfd_create equivalent, used by malware, but that's terrible and might get patched at any time. Writing the executable to disk and deleting it after spawning the process seemed like it should work, based on the manpage for unlink, but it didn't. So we would need more custom code for Mac to keep track of and clean up the binaries written to disk, but only when they're no longer being used. And the sentiment was that we didn't want to deal with binaries lying around on the FS, e.g. here.
  2. Related to the above, there would have been significant code divergence between Linux and Mac. In addition to different spawning mechanisms, the WorkerHandle code would need to be updated to support Linux, where we'd have raw handles instead of std::process instances. TBH I didn't get this far because of the issues on Mac.
  3. Rebuilds were constantly being triggered for the PVF worker crates and all depending crates. My guess is that rust-analyzer was running in the background and affecting these flags, triggering recompilations. This was a very painful UX and would definitely need to be fixed, if possible.
  4. It's hacky and was giving me a guilty conscience. 🤷‍♂️ Seriously, all this complexity is bad for maintainability and understanding the codebase. But I gave it a best effort because we all agreed on the approach and the issues were hard to foresee in advance.

Maybe the above issues can be worked around too, but it's turning into multiple layers of hacks. Would be much better to use standard mechanisms that work on both platforms.

@Polkadot-Forum

This pull request has been mentioned on Polkadot Forum. There might be relevant details there:

https://forum.polkadot.network/t/ux-of-distributing-multiple-binaries-take-2/2854/8

@bkchr
Member

bkchr commented May 16, 2023

  • Rebuilds were constantly being triggered for the PVF worker crates and all depending crates. My guess is that rust-analyzer was running in the background and affecting these flags, triggering recompilations. This was a very painful UX and would definitely need to be fixed, if possible.

  • It's hacky and was giving me a guilty conscience. 🤷‍♂️ Seriously, all this complexity is bad for maintainability and understanding the codebase. But I gave it a best effort because we all agreed on the approach and the issues were hard to foresee in advance.

Should be solvable by: https://doc.rust-lang.org/nightly/cargo/reference/unstable.html#artifact-dependencies

  • No good way to do it on MacOS. There are/were some super hacky ways to do a memfd_create equivalent, used by malware, but that's terrible and might get patched at any time. Writing the executable to disk and deleting it after spawning the process seemed like it should work, based on the manpage for unlink, but it didn't. So we would need more custom code for Mac to keep track of and clean up the binaries written to disk, but only when they're no longer being used. And the sentiment was that we didn't want to deal with binaries lying around on the FS, e.g. here.

  • Related to the above, there would have been significant code divergence between Linux and Mac. In addition to different spawning mechanisms, the WorkerHandle code would need to be updated to support Linux, where we'd have raw handles instead of std::process instances. TBH I didn't get this far because of the issues on Mac.

Unpacking to base-path or requiring some --worker-extract-path if base-path is not executable doesn't sound that complicated to me? For the old binaries we can do the same as for the pvf cache: purge it on start. Write out the binaries using some randomly generated name to ensure we always have different worker binaries per running node instance. This prevents us from accidentally "mixing" different worker/node versions, which is actually already solved by passing the node version, but better be safe than sorry. We should be able to use the same idea on every OS.

@koute
Contributor

koute commented May 17, 2023

Writing the executable to disk and deleting it after spawning the process seemed like it should work, based on the manpage for unlink, but it didn't.

AFAIK that should work even on macOS? What exactly was happening? Was the process failing to launch? Was unlink failing with an error?

Related to the above, there would have been significant code divergance between Linux and Mac.

There will be anyway with the sandboxing, and to me this seems like a relatively easy thing to just abstract over in a single OS-specific module, no?

Rebuilds were constantly being triggered for the PVF worker crates and all depending crates.

That's not really specific to extracting worker binaries though, just to the way we were building them. (: Even if we go with multiple binaries we'll still have to build them when the polkadot binary is built, or I guess we could make running cargo in polkadot not work without manually specifying the paths to the prebuilt workers, but that would be a pretty bad UX.


Anyway, I'm not really convinced. To me having multiple separate binaries brings out its own set of problems:

  1. Deployment pain. Right now I can build a polkadot binary, scp it into a server and I'm done. How will this look with multiple binaries? Will I have to manually build each one and copy them one-by-one?
  2. What will happen when I cargo run inside of polkadot? Will I have to remember to cargo build each binary before I cargo run the main polkadot binary? Will I have to pass a path specifying where they are? If yes - then that's pain; if no - how will it automatically find them (we'd need to special-case it)? And what if I forget to build them? Or what if I have the binaries built, but they're old? (I guess eventually we could use the artifact dependencies as Basti mentioned.)
  3. A similar thing can happen in production: what will happen if someone copies a new polkadot but still has old worker binaries? What if they replace them while the node's running? We'll need to verify that the versions match, or we'll get incidents similar to the recent one where the cache was stale and we got those dispute storms.

All in all I personally don't really like the idea of distributing multiple binaries as it just complicates deployment and, more importantly, pushes extra complexity on both our users and ourselves (we embed the WASM runtimes into the binary for a reason, and those reasons also apply here), instead of just isolating it into one part of the code, so I'd rather not do that unless there are some very convincing arguments, and so far I'm not really convinced. (:


Also, a side note: if the end game is to have the workers sandboxed using KVM then there's not much point in distributing the binaries separately because they won't have to be normally executed anyway. (But I guess you could argue that we'll still need a fallback anyway if KVM's not available.)

@mrcnski
Contributor Author

mrcnski commented May 17, 2023

Should be solvable by: doc.rust-lang.org/nightly/cargo/reference/unstable.html#artifact-dependencies

Thanks, that might be useful if we stick to this approach. Have you used it before? Would we need more hacks?

Unpacking to base-path or requiring some --worker-extract-path if base-path is not executable doesn't sound that complicated to me? For the old binaries we can do the same as for the pvf cache, purge it on start. Write out the binaries using some random generated name to ensure we always have different worker binaries per running node instance. This prevents that we accidentally "mix" different worker/node versions which is actually already solved by passing the node version, but better be safe than sorry. We should be able to use the same idea on every OS.

We discussed that we don't want to use the FS for this in secure mode. For example here I mentioned that we should encourage secure validators to clamp down on permissions, so spawning from disk wouldn't work in that case. About the additional option, ideally spawning the processes doesn't infect more code than it has to and especially not the UX.

AFAIK that should work even on macOS? What exactly was happening? Was the process failing to launch? Was unlink failing with an error?

The process was launching but there weren't any errors. I wasn't able to diagnose it, but my guess was that some bug or security feature was introduced since the unlink manpages were written. I've run into a similar issue before.

Actually, I just tried to poll the WorkerHandle to see if the process was still alive, and that seems to have fixed it!

// `poll!` is from the `futures` crate and `Poll` from `std::task`. Polling the
// handle once checks whether the spawned worker process is still alive.
match poll!(&mut handle) {
    Poll::Ready(_) => println!("ready"),
    Poll::Pending => println!("pending"),
}

I guess this is enough to keep the process alive. If this is the last hack we need, I might be okay with this...

There will be anyway with the sandboxing, and to me this seems like a relatively easy thing to just abstract over in a single OS-specific module, no?

Well with the sandboxing there are just two codepaths, secure mode and non-secure mode. With additional Mac-specific code there would be secure mode, Linux nonsecure mode, and Mac nonsecure mode. But yeah, I am keeping everything isolated to modules where possible, but if we also needed some cleanup code, that would infect the startup code in addition to process spawning. Even with good isolation of the code, it's more that we have to document and test, and that readers have to understand when the abstraction inevitably leaks.

That's not really specific to extracting worker binaries though, just to the way we were building them. (: Even if we go with multiple binaries we'll still have to build them when the polkadot binary is built

If we build the whole node with musl as @bkchr suggested then we wouldn't need the binary builder at all, the workers would just be separate binary crates that are built in the regular cargo build. Otherwise, I guess we can try to fix the binary builder.

To me having multiple separate binaries brings out its own set of problems:

For (1), we are having a discussion about how to distribute multiple files. I honestly don't see the issue with it, it's a standard practice. (2) I think is addressed if we build the binaries during the cargo build?

We'll need to verify that the versions match

We have this. :) #6861

(we embed the WASM runtimes into the binary for a reason, and those reasons also apply here)

I mean honestly, the wasm-builder is extremely hacky, although I guess it's worked fine so far... But it's not guaranteed to work forever, and every time a new cargo version comes out there is a chance of it breaking. So maybe it would be better to distribute the WASM runtimes separately. 🤷‍♂️ We could have similar version mismatch checks that we already have for the workers.

I don't think the single-binary approach is sustainable in the long-term. Are we going to keep piling hacks on top of each other for the next 20 years? We should do this properly, and then we will all sleep better at night. But that is just my opinion as a relative newcomer to the project.

@mrcnski
Contributor Author

mrcnski commented May 17, 2023

One thing I forgot to mention is that the binary extraction breaks the PVF tests on Mac, particularly the ones that rely on specific timing information. We extract the binaries on-demand for each job, which adds 1-2s to each test (on my machine, it will differ depending on hardware). I guess we could add test-specific code to only do it once before all tests are run and clean up after, if that's possible, though that's even more complexity. I guess we could disable the timing-specific ones on Mac and just live with slow tests on this platform.

@bkchr
Member

bkchr commented May 17, 2023

Would we need more hacks?

Did you look at it?

We discussed that we don't want to use the FS for this in secure mode. For example here I mentioned that we should encourage secure validators to clamp down on permissions, so spawning from disk wouldn't work in that case. About the additional option, ideally spawning the processes doesn't infect more code than it has to and especially not the UX.

But you only run the workers in secure mode? The node itself needs access to write to storage. You first write out the binary and then execute it in "secure mode". Not sure what else you mean.

I mean honestly, the wasm-builder is extremely hacky, although I guess it's worked fine so far... But it's not guaranteed to work forever, and every time a new cargo version comes out there is a chance of it breaking. So maybe it would be better to distribute the WASM runtimes separately. 🤷‍♂️ We could have similar version mismatch checks that we already have for the workers.

We don't use any undocumented cargo features or whatever. Not sure why it should break? We only use what is available. You also cannot do any "version" checks. Before the wasm builder we had wasm files for genesis checked in git and a bash script to compile them. People constantly forgot to recompile and were running into issues. Thus, I added the wasm builder. For sure it isn't perfect and there are often too many recompilations. However, there are no hacks and nothing failing or similar.

We extract the binaries on-demand for each job, which adds 1-2s to each test (on my machine, it will differ depending on hardware)

Why do we extract them per job? Why isn't this done on startup?

@mrcnski
Contributor Author

mrcnski commented May 17, 2023

Did you look at it?

Yes, it doesn't help. There's a grand total of two sentences of documentation. And it's a nightly feature. What would help me is to know whether someone's used it before and can confirm it works, and isn't going to introduce more issues. It would save me a lot of time vs. experimenting with it myself.

But you only run the workers in secure mode? The node itself needs access to write to storage. You first write out the binary and then execute it in "secure mode". Not sure what else you mean.

We were planning to provide ways to run as a validator without secure mode. And for sure it needs to work on dev machines.

And the node needs write access but not exec access, see above comments about noexec.

Before the wasm builder we had wasm files for genesis checked in git and a bash script to compile them. People constantly forgot to recompile and were running into issues. Thus, I added the wasm builder. For sure it isn't perfect and there are often too many recompilations. However, there are no hacks and nothing failing or similar.

Thanks for the context, that helps explain why we're wary of multiple files. I didn't mean to be dismissive of wasm-builder, it works well for what it does. But I think we wouldn't need it if we built the whole node with musl, assuming that idea is still on the table.

Why do we extract them per job? Why isn't this done on startup?

Each test is self-contained and has its own validation host.

@bkchr
Member

bkchr commented May 17, 2023

What would help me is to know whether someone's used it before and can confirm it works, and isn't going to introduce more issues. It would save me a lot of time vs. experimenting with it myself.

Take a nightly compiler and let's go? The docs look rather easy. You import another crate, define the target, bin, etc., and then you can reference it using an env variable.

And the node needs write access but not exec access, see above comments about noexec.

Yes for sure, but giving it exec access to some limited folder sounds reasonable? I mean the node doesn't run any user code and thus, doesn't require total shut down.

But I think we wouldn't need it if we built the whole node with musl, assuming that idea is still on the table.

But that doesn't help with the points of compiling all the binaries.

Each test is self-contained and has its own validation host.

Ahh sorry, I thought by test you meant when you run the node on your machine doing some "tests" and not the actual tests.

@mrcnski
Contributor Author

mrcnski commented May 17, 2023

Take a nightly compiler and let's go? The docs look rather easy. You import another crate, define the target, bin, etc., and then you can reference it using an env variable.

I'll try it if we decide to embed binaries after all. Nightly does mean the feature can technically be pulled if there are major issues with it, but it's very unlikely.

Yes for sure, but giving it exec access to some limited folder sounds reasonable? I mean the node doesn't run any user code and thus, doesn't require total shut down.

Yep. There were other issues mentioned but they can all be worked around. If we always did this we at least have the same code for Mac and Linux.

But that doesn't help with the points of compiling all the binaries.

If the binaries were subcrates wouldn't they inherit the same target? I don't know how to compile only the binary crates with musl - something like this forces the user to be on linux:

[profile.release]
target = "x86_64-unknown-linux-musl"

On the other hand, this isn't possible AFAIK:

[profile.release.x86_64-unknown-linux-musl]
target = "x86_64-unknown-linux-musl"

@bkchr
Member

bkchr commented May 18, 2023

Nightly does mean the feature can technically be pulled if there are major issues with it, but it's very unlikely.

Nightly probably more means that there can be breaking changes. I mean it could still be pulled, but I would say it is very unlikely.

If the binaries were subcrates wouldn't they inherit the same target? I don't know how to compile only the binary crates with musl - something like this forces the user to be on linux:

I mean that isn't the problem we are speaking about. Currently when you want to work with Polkadot you only need to run cargo build --release, but then you would need to do cargo build --release -p polkadot && cargo build --release -p prepare-worker && cargo build --release -p validation-worker. And then I still omitted the target. I mean this is again nothing that adds a huge blocker, just a small change to the experience.

@mrcnski
Contributor Author

mrcnski commented May 23, 2023

We can just add the workers to [[bin]] in Cargo.toml to solve that.
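For illustration, the [[bin]] entries could look roughly like this (hypothetical names and paths):

[[bin]]
name = "polkadot-prepare-worker"
path = "src/bin/prepare_worker.rs"

[[bin]]
name = "polkadot-execute-worker"
path = "src/bin/execute_worker.rs"

A plain cargo build --release would then produce the node and both workers in one go.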

Here's a summary of the problems so far and my proposal at the end.

  1. My version of binary-builder works but has a bad UX with constant recompilations. Maybe it's fixable, I haven't spent too much time on it. The other issues with embedded workers I think we have resolved (unless new ones pop up). But my overall sense is that it's not the best long-term solution.
  2. artifact-dependencies looks cool but it doesn't help us compile the workers with musl. As mentioned above I don't think it is possible to force a particular libc using just cargo. So we'd still need to manually pass the --target flag. Also, artifact-dependencies requires us to force nightly again which would be unfortunate.

Proposal:

  1. Add the workers to [[bin]].
  2. Introduce a new script called e.g. install-polkadot that does the following:
    a. Determine the target (i.e. get the current target triple and replace the libc with musl).
    b. Run cargo install with the new target and --path .. People can configure cargo install, e.g. have it put the binaries wherever they want.
    c. (optional) Instead of a script in the repo with --path . it can alternatively be a standalone script using --git, but then we'd need some way for the script to update itself. If the script was a crate we could use self_replace. Adds complexity.
    d. Right now the workers are only needed for validators, but I propose having this script be the de facto method for all cases so we have a future-proof installer.
    e. In the future the script can also secure validators.
  3. For tests: no changes required. We can keep having the tests directly call the host code as they do now, to avoid complications where they rely on the binaries being built. In the embedded binary case I wanted to extract the binaries to test that code path, but the current proposal doesn't rely on any hairy code.

I think this solves all our problems in a simple and clean way but let me know if I missed something.

@Polkadot-Forum

This pull request has been mentioned on Polkadot Forum. There might be relevant details there:

https://forum.polkadot.network/t/ux-of-distributing-multiple-binaries-take-2/2854/27

@bkchr
Member

bkchr commented May 23, 2023

2. artifact-dependencies looks cool but it doesn't help us compile the workers with musl. As mentioned above I don't think it is possible to force a particular libc using just cargo. So we'd still need to manually pass the --target flag.

target — The platform target to build the dependency for. This field can only be specified when artifact is specified.

The default if this is not specified depends on the dependency kind. For build dependencies, it will be built for the host target. For all other dependencies, it will be built for the same targets the declaring package is built for.

In the docs of the feature. This sounds exactly like what we need? Otherwise I would not have brought this up...
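For reference, a sketch of what such an artifact dependency could look like (nightly cargo with -Z bindeps; the crate name and path are assumptions):

[dependencies]
polkadot-prepare-worker = { path = "pvf/prepare-worker", artifact = "bin", target = "x86_64-unknown-linux-musl" }

The path to the built worker would then be available at compile time through an environment variable (something like CARGO_BIN_FILE_POLKADOT_PREPARE_WORKER, per the docs linked above).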

2. Introduce a new script called e.g. install-polkadot that does the following:

I have said now multiple times that we need some guide, and you come up with some script again. I don't get it... Writing these scripts is NOT YOUR JOB. You need to provide all the requirements etc. in a clear and understandable way, so that people who write these packages can follow this guide.

@bkchr
Member

bkchr commented May 23, 2023

BTW, if we bundle the workers in the polkadot binary, we could also add some CLI arg "export-workers" that exports the workers to the given path. This could be used in systemd scripts before starting the node binary, to put the workers into some directory and then "lock down" the node.

@mrcnski
Contributor Author

mrcnski commented May 23, 2023

I just forgot about the guide suggestion. My proposal for a guide would be:

a. Determine the target (i.e. get the current target triple and replace the libc with musl).
b. Run cargo install with the new target and --path .. You can configure cargo install, e.g. have it put the binaries wherever you want.

It's only a proposal though. At least we're not blocked on anything technical anymore, just on figuring out our exact requirements for the guide.

In the docs of the feature. This sounds exactly like what we need? Otherwise I would not have brought this up...

Indeed, I totally missed that there were more docs. 🤦‍♂️ I still think it's not ideal to rely on an unstable feature, but it's good to have this as a backup option.

BTW, if we bundle the workers in the polkadot binary, we could also add some CLI arg "export-workers" that exports the workers to the given path. This could be used in systemd scripts before starting the node binary, to put the workers into some directory and then "lock down" the node.

That sounds like a decent backup option to have. It doesn't sound like a simple UX though. I think we were pushing so hard for a single binary because of the apparent simplicity on the user side.

mrcnski added a commit to mrcnski/polkadot-wiki that referenced this pull request May 31, 2023
After extensive discussion (see
[here](paritytech/polkadot#7147) and
[here](https://forum.polkadot.network/t/ux-of-distributing-multiple-binaries-take-2/2854)),
it's been determined that we will be distributing multiple binaries along with
`polkadot`. I'm updating the Validator Guide to reflect this. To make it easier
for validators, this may be the basis for some script in the future if that is
desired.
@paritytech-cicd-pr

The CI pipeline was cancelled due to the failure of one of the required jobs.
Job name: test-linux-stable
Logs: https://gitlab.parity.io/parity/mirrors/polkadot/-/jobs/3260039

@mrcnski
Contributor Author

mrcnski commented Aug 2, 2023

Closing in favor of #7337

@mrcnski mrcnski closed this Aug 2, 2023
DrW3RK pushed a commit to w3f/polkadot-wiki that referenced this pull request Aug 11, 2023
…4860)

* Validator Guide: add instructions for installing multiple binaries

After extensive discussion (see
[here](paritytech/polkadot#7147) and
[here](https://forum.polkadot.network/t/ux-of-distributing-multiple-binaries-take-2/2854)),
it's been determined that we will be distributing multiple binaries along with
`polkadot`. I'm updating the Validator Guide to reflect this. To make it easier
for validators, this may be the basis for some script in the future if that is
desired.

* Add a helpful note

* Update verification instructions

* Update docs/maintain/maintain-guides-how-to-validate-polkadot.md

Co-authored-by: Will Pankiewicz <[email protected]>

---------

Co-authored-by: Will Pankiewicz <[email protected]>