Abort immediately when there's a build failure, instead of backtracking #10655
To elaborate on this: here's a thought experiment -- a Cython-based project …
This could be good default behavior. I have experienced this issue in the past -- and yes, it can be really painful. I want to suggest some ideas: …
I mean, I think we'd need to have this go through a pass-a-flag-to-opt-out period. As for the documentation, I think we should be able to get away with a clear-enough error message. With the progress I've made on #10421 so far, I'm pretty confident that we should be able to get most of the way there with just the error messages.
But doesn't that mean that if package X has wheels for your platform at version 1.0, and they release 2.0, initially without wheels because building on your platform is a PITA (ahem Windows), then rather than getting the 1.0 wheel you'd start getting failures? That seems like a rather bad regression...
Yea... the "nice" thing about that though, is that it is an immediate failure where you have a clear actionable step, rather than a long-winded one, or something that subtly hides issues (eg: newest version of $package added a C dependency that you don't have, and you're quietly getting the older version even though you didn't intend to). It's a much more explicit failure model (and matches what we had prior to the backtracking resolver, in a good way).
I'm still confused. At the moment, using the X example I gave above, if I say … Maybe that works if you're talking about a dependency somewhere deep in a project's requirements, but it seems to me that it sucks for requirements specified directly on the command line. And yes, IMO it was bad when this happened with the old resolver, too.
No. I don't think it's "better". I agree that it's still suboptimal. I also don't think we have any "globally optimal" approach here. To try and rephrase what I said earlier -- we're operating in tradeoffs here, and I think having an immediate failure in all of these cases (both the example you've picked as well as the ones I've provided) is better than having some of them work with backtracking while also having long-winded, resource-intensive failures in other cases.
OK, I get your point now. However, I don't think either solution is ideal, and given that's the case, I would prefer to stick with what we have for now, until we work out a genuinely better solution. I do think there's a better solution waiting for someone to discover it, so I'm optimistic that "explore options" is not going to be wasted time.
Maybe we should make `--prefer-binary` the default? Or maybe we only ever do a single sdist build, and if it fails we switch to `--prefer-binary` mode? As you see, there are lots of plausible (at least plausible to me 🙂) approaches to explore, so if this issue is "Explore alternatives to repeatedly trying to build older and older sdists" then I'm on board, but if it's specifically "Is abort immediately the approach we should use" then my answer is "no, I'd prefer something else".

One fundamental problem here is that we have no way of knowing how likely it is that building a sdist will work. So the best we can do is work on assumptions and heuristics: …
Whatever we do here, we need to be very careful how we report what pip is doing to the user. "No suitable versions found" is a bad message when the user can see there's a valid wheel for version N-1. Or when there are two sdists and the user knows the older one built, but there's a compiler-dependent bug in the new one.

We also need to tell the user how to get pip to skip a sdist that won't compile, if we do go with "stop on a build failure" - and I'm not at all clear how we'd do that ("X != 2.0 unless there's a wheel" isn't a valid requirement 🙁).

¹ Of course the underlying problem here is that when people say they want "foo", they probably don't actually mean that - and a 10-year old version of foo wasn't what they were after. Which is just another way of saying "if you lie to the computer, it will get you"... But punishing people for assuming that the computer will behave reasonably isn't a good answer, either.
Also, do we have to have a "one size fits all" solution here? Maybe keep searching (or build once, or prefer binary) if it's a top-level requirement, but abort if it's a dependency somewhere down the tree? That has its own problems but might be a viable compromise.
One of the problems with prefer-binary is that we'd prefer to get an older version even if a newer version would build and work successfully. :)
I promise I've read the entire post, and... the thing I wanna push back against is the first sentence. Why is it not better to have eager failures, at a point where we can provide clear information about what the failure is?
I understand that this means that … We're operating in trade-offs, and I think we do have a better-than-status-quo answer here -- it isn't perfect, but it's certainly better IMO.
Why is it not better to have a success rather than a failure? I'm not arguing that the status quo is good. Just that we should look for a better solution than just failing at the first problem. I get that prefer-binary may be too big of a change. But what's wrong with trying to find a binary that works after a build failure, and only erroring if that doesn't work? "Cannot build foo X.Y, trying to find older wheels that work... none found" -- now they have clear actionable information and an assurance that they need to make the build work if they want to install this package. I think we're going to have to disagree here, let's see what others have to say.
I don't think it should be the responsibility of pip to try and protect the user from possible failure, when the user has chosen not to pin to a specific version or range. By running … As an end user, I would much rather get a failure (if I've made the trade-off not to pin/specify a version range), so I know to either (1) start pinning, (2) install a new build dep, or (3) report a potential bug to the project.
Isn't this the same discussion as in #9764? Wasn't this the behavior when the "new" resolver was implemented, and wasn't it changed because of overwhelming user complaints?
I think you're right. I think we could restore this behavior, and add a permanent flag to use backtracking instead of failing. Or we could just implement a well-explained error message, as @pradyunsg proposed. In both cases, the user complaints should be addressed.
#9264 seems to be where we introduced this behaviour. I find it amusing that this was a concern I raised then as well, and that both Paul and I are consistent humans. :)

As I said, we're operating in tradeoffs, and what @pfmoore thinks is important doesn't match with what I think is more important, which is why we disagree! One way we can go about this is gathering user data -- I don't think we've had concrete user feedback on whether the status quo is the "better" behaviour, or whether what I'm pushing for here would be "better", or whether there's something else we can do entirely! I think a round of "ask a representative set of users" with a survey question that we're happy with can get us closer to answering this! If this sounds reasonable, I can go about drafting a survey.

FWIW, I've not looked carefully, but "lemme look for compatible wheels" seems... complicated to implement, even if it's viable. For that to work, at the point where we're failing, we've already got a pinned link/editable candidate, and we'd need to somehow get direct access to the candidates and format control.
lol I'd forgotten that earlier discussion. Agreed, we really need more concrete information. I know that we've had complaints about pip backtracking through all versions of a package, so that lends support to @pradyunsg's argument that doing so is bad. So I'm willing to accept that blindly trying all sdists is not the right approach, even if it is technically what the user asked for. There's also #10617 to consider here. @pradyunsg's proposal would have done the right thing for py2exe users on Python 3.10 in that case.

I'm starting to come around to thinking that this idea might be OK. But I'm still very bothered about explaining it:
Even though we say in the docs "the pip team cannot provide support for individual dependency conflict errors", we still get lots of bug reports asking for exactly that. And the support burden of reviewing (and maybe answering, in spite of what we say in the docs) those issues is non-trivial. The issue "pip tried to build hundreds of versions of my package" might be unwelcome for the user, but it's far easier to diagnose and respond to than "pip tried to build the wrong version of my package" (I bet that sometimes, the user won't even mention that the build failed). Certainly, choosing the worse behaviour because it's easier to support is a bad path to take, but is a better behaviour which results in maintainers deciding to take a hard line on "it's not a pip bug, resolve your own dependency conflict error", really better in the long run?
Looking at #9264 and the current source, the relevant check is:

```python
if link in self._build_failures:
    # We already tried this candidate before, and it does not build.
    # Don't bother trying again.
    return None
```

It shouldn't be impossible to extend that to record the name of the package and also skip any other sdists for that package without trying the build. But given what I've said above, I'm no longer going to fight for a more nuanced solution than "abort immediately on build failure". I just wanted to point out that it's possible to do this.

Ultimately, though, this all comes down to the fact that we don't have reliable static metadata for sdists. Getting PEP 643 ("Metadata for Package Source Distributions") implemented, at least in setuptools, would be a huge step forward here. Although it would do nothing for older sdists, we could categorise them as "legacy", at which point a degraded experience is much more justifiable.
FWIW, I don't know if you've noticed: one of the things I'm doing in #10421 is adding notes like "this is an issue with the package, and not pip" (eg: #10421 (comment), https://github.com/pypa/pip/pull/10538/files#diff-a0b86a9499746602572aca9eef86205c4bd5acedf66f936ad60f4cf14f1f2d38R125). This is borrowed from/inspired by npm, and meant to guide users to investigate why the failure happened. I think that, combined with clarity about what point in the build process the failure happened, should help make it easier for users to understand failures. This doesn't mean that we'd somehow solve bad errors coming out of other tools (or even all instances of that in pip), but I do think clearer errors coming out of package build failures would go a long way here.
Linking in #10719 for a slightly different situation, where the build “succeeds” but produces a bad result.
This can also easily fill all the remaining disk space. When I attempted …
So... It looks like the code change for this will be easier than testing this would be. :)
Seems I spammed #10722 and should have put my input here. Beyond what has already been said by @pfmoore, I would just like to add that in my experience users often don't understand constraints files. Therefore, if building the metadata fails for a specific package, then as well as telling them that the package (and not pip) failed to build on version N and that they should check they have the right prerequisites for that package, pip should probably also tell them that if they want to skip downloading version N they will need to add an entry to the constraints file (not the requirements file -- in fact it's still confusing to me why top level requirements don't act as constraints). In a similar fashion, my experience is that most package owners don't know about yanking and the effect it has on pip (probably because pip ignored yanked status until 20.3.1), so it would also be helpful to mention that the package could be yanked by the owner if it's problematic.
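For concreteness, a minimal sketch of the constraints-file workaround being described, assuming a hypothetical package `somepkg` whose version 2.0 sdist fails to build (`-c`/`--constraint` is pip's real flag for this):

```console
$ cat constraints.txt
somepkg!=2.0    # 2.0's sdist fails to build on this machine; skip it
$ pip install -c constraints.txt -r requirements.txt
```

A constraints file only restricts which versions may be chosen; unlike a requirements file, it doesn't itself cause anything to be installed -- which is part of the confusion being described here.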
Fair, but this will not be in the error message. This might make sense in the supporting documentation tho.
Agreed, though again, this belongs in the documentation and we shouldn't be printing a wall of text.
What does this mean? What's confusing about them?

```console
$ pip install requests!=2.26.0           # this works; 2.26.0 is the latest version as of writing
$ pip install requests requests!=2.26.0  # this also works
```

You can absolutely prevent a version from being used by using a requirements file alone, without a constraints file.
There are scenarios where adding … Which makes sense, though? There must be some reason to have a separate "constraints" concept rather than taking it from the user's top level requirements? But as a user who has read through the pip docs and contributed code, I still find it confusing.
If I have a package that requires build dependencies … Agree with the previous comment that maybe the focus should be more on how to avoid very expensive exploration than on heuristics for early termination.
Correct, that's the proposal here. And under the proposed solution, you'd be expected to explicitly say … But to put this in context, the expectation is that a situation like this would be very rare. If you have a real-world example suggesting that it might be commoner than people are assuming, that would be useful. But I don't know whether a theoretical example will change many minds here 🙁
Another way to phrase that situation is: you have a constraint which affects what packages can be used, but you haven't specified it. Instead of having the dependency resolver stumble upon an answer by chance / find the solution in a suboptimal manner, the proposed behaviour here will force you to specify the additional build constraints, external to the metadata, yourself. This makes the resolution quicker, makes installations easier to reason about, and stops masking build-time issues.
I have one example, though it definitely doesn't suggest that it would be commoner: try installing something like numpy on a CentOS 6 VM (manylinux2010?). The newer releases no longer work (they require a newer compiler version than the CentOS image has), but an older release will work, either because it can be compiled using the available compiler or because a compatible wheel was uploaded. Now, I think it's better to get a failure in 20 seconds stating the compiler error, instead of a working install after 20 minutes of walls of red -- especially since this situation is basically indistinguishable, from pip's perspective, from "you don't even have a compiler installed" or "this package isn't supported on Windows" -- and in both those cases, we don't get any benefit from backtracking.
Picking your example: imagine that the package is a security-related project where you do want to stick to the latest version. Of course, this whole scenario requires source builds, so... let's assume that is required here, either because of no-binary or because the package author thinks that bundling underlying system dependencies into a wheel (effectively pinning the version) isn't a good idea.¹

In this situation, the current behaviour silently gives you an older version of the package. You have no way to affect that, or to detect it. There's no easy way to check that this is happening, other than pip's output, which has no formatting guarantees. There are workarounds/ways around this, which move you from reactive to proactive (which, idk, may be good or bad): compare the version post-install by having a separate step for validation, or proactively monitor releases with pins. In this case, I think it's better to know that your configuration is no longer supported by the latest release (as will probably be evident from the failure output from …)
(I know we're just going round the same loop again, but...) given that this is quite specific to the case you describe, would it not make more sense to have the behaviour be opt-in? So you do …
For me, this comes down to: …
[*] So the exception here is of course when a package defines a minimum supported Python version via `Requires-Python` metadata.
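For reference, a minimal sketch of how a setuptools project declares that -- the project name here is hypothetical. Setuptools publishes `python_requires` as the `Requires-Python` metadata field, which pip checks before selecting a version:

```python
# Minimal setup.py sketch; "example-pkg" is a hypothetical project name.
from setuptools import setup

setup(
    name="example-pkg",
    version="1.0",
    # Published as the Requires-Python metadata field; pip skips this
    # release entirely on older Python versions instead of building it.
    python_requires=">=3.8",
)
```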
Another scenario where backtracking is undesirable: imagine a package where the build process has a step that is fallible and can fail intermittently (eg downloading an asset or build tools during package build time). When backtracking in the case of build failures is permitted, a …
I'd class that as "another scenario where unreliable builds are undesirable" :-)

More seriously, I don't think there's any way we can reasonably identify why a build might have failed, so we have to look at the question from the point of view of what's the "right" behaviour if a build fails, and pip has no other information available as to why the build failed (even if the user might, from reading the build output). The proposal here is that we basically assume that every other version of the package will also fail to build, and furthermore that any wheels available for older versions of the package are unacceptable. I feel that's too extreme. Others feel that it's better in many cases, and that making the user explicitly work around the assumption in the cases where it's incorrect is an acceptable price to pay. We don't yet have consensus, but I feel that the general opinion is against me, and I don't feel strongly enough to argue for my position (although I do feel strongly enough to keep wanting to make sure that people understand what they are proposing 😉)
Yeah, my point is more that pip cannot assume that, just because the build of a package failed, it won't succeed on an immediate retry of the same version. That is, it's not necessarily a correct assumption to move on and try other versions -- just as there is no guarantee that older versions won't all fail too. As such, it seems there are just too many unknowns in the "the build failed" case, and so pip shouldn't try and intervene.
Of course it's specific -- I was responding to a specific example. What I did here was take the example of the situation that @jbylund put forward, and make the case that there's good reason for someone to want different behaviour, where backtracking makes it infeasible to get the behaviour they'd need unless they do a bunch of extra work. Yes, a flag would be possible for this situation if that were the only situation where it made sense to not backtrack. It's not, though.
Hmm... Interesting!

As the person who put this proposal forward: the proposal here is that "Errors should not pass silently", and backtracking on a build failure (specifically, a metadata generation failure or a wheel generation failure) is exactly that. We don't have any way to know why a package failed to generate metadata/wheel, so we should not assume that an older version could work (which is the operative assumption when backtracking).

Yes, there are situations where backtracking does get a result. However, it is (a) not sufficiently common IME, (b) requires doing work that will necessarily fail in all other scenarios, resulting in significantly degraded performance/behaviour in those cases, (c) silently hides issues that a user might want to be informed about, and (d) makes it impossible to account for certain workflows/use cases.

We have no guarantee that an older version will work. Even in cases where it would, I've argued that it is better to fail early and have the user figure that out themselves. It will provide all users a shorter feedback loop -- thing fails, Google it, realise you need a missing library/a different older version etc; fix that and move on. Compare that to "why is pip backtracking for 20 minutes and giving me an older version of pandas" + "why is pip backtracking for 3 hours and still failing to install numpy?". When you don't know why it failed, failing eagerly means that you have a single short failure that you can share with someone who can help you, instead of 10000+ lines of output where the initial lines are cut off because of limited terminal scrollback and the final failure is on version 0.0.1 from 2006.

Yes, this means that the single class of situations that "just work" today will stop working. Yes, we require users to provide additional information (in the form of additional restrictions) to account for things that pip can't know. Yes, this is more work for the user than having the tool just do it for them. This is in exchange for so many other situations getting a better experience, though.

At the end of the day, it's a balance of tradeoffs as we can perceive them today. We might all change our minds and realise the current behaviour is actually better once we get more feedback or inputs. That's fine too! Right now though, I strongly feel that we should lean toward failing eagerly instead of trying really hard to get things done. We'd degrade one fairly specific scenario ("I can actually only use some older version, but I didn't tell pip about this") while accommodating many more scenarios (eager detection of packaging changes in a new release, less surprising backtracking behaviours, failures that don't pass silently, no backtracking forever when you're missing libfoo, no backtracking forever when you're on the wrong Python version, etc).

I'm tired of this looping discussion now, and I'm not going to be looking at this until next year.
Agreed, we're getting nowhere. In the interests of getting a resolution, I'll note that I don't have any personal need for any particular behaviour here, so I'll withdraw my objection to this PR. If, after it's implemented, I find that it causes me actual problems, I'll whine incessantly that "I told you so" 🙂 Feel free to ignore me if I do.
Is this going to be part of 22.0? I think that, given the objection has been withdrawn, it makes sense. I do think it's worth keeping an eye on pip issues for any regressions, and having a standard explanation of how to handle these issues (install build dependencies based on the install error, or else use a constraints file to prevent certain versions from installing).
My 2 cents: I'm wondering how much of this family of problems would be alleviated by …
There's an existing PR for this: #10258. Though if this landed, this particular situation would never start backtracking, so it's not as relevant.
I might end up having time to tackle this before the release. There's a decent chance that this misses the 22.0 release window though, since my time is fairly limited.
If you try to install a package that does not have wheels for your platform and you don't have the relevant build dependencies for it, pip will backtrack until it finds a version that it can build OR it has exhausted all the available versions of that package.
This can be especially painful when you're missing the build dependencies (in case of https://discuss.python.org/t/pip-without-setuptools-could-the-experience-be-improved/11810, it is setuptools) but in other cases, it can be a compiler / C library etc.
Would it make sense to fail immediately, instead of backtracking in these cases?