Use of pbr breaks cx_freeze applications #385

Closed
fabioz opened this issue Nov 21, 2016 · 25 comments
@fabioz

fabioz commented Nov 21, 2016

The given code:

from pbr.version import VersionInfo  # import shown for context

_v = VersionInfo('mock').semantic_version()
__version__ = _v.release_string()
version_info = _v.version_tuple()

is not really friendly to frozen applications... also, I'd say it adds a lot of logic under the hood just to get the version (besides adding a runtime requirement on setuptools, which is usually just a setup-time requirement). So, I'd like to check how feasible it'd be to revert to coding __version__ and version_info directly into the source code -- I'd say it also makes it easier to know the current version by looking at the source, and makes the code clearer ;)

@nicoddemus

I second that, I have the same problem: frozen applications (using PyInstaller or cx_Freeze for example) break with this error:

Exception: Versioning of this project requires either an sdist tarball, or access to an upstream git repository. Are you sure that git is installed?

Some alternatives to using setuptools at runtime:

  • setuptools_scm
  • versioneer

Both can generate a _version.py or similar which can then be imported cleanly by the __init__.py file.
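
A minimal sketch of what the generated-file approach looks like (the file name and version value here are illustrative, not mock's actual layout):

# mock/_version.py -- written by the build tool at build/release time
__version__ = '2.0.0'
version_info = (2, 0, 0)

# mock/__init__.py -- a plain import, which also works when frozen
from mock._version import __version__, version_info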

Some questions:

  1. Would a PR be accepted to change the current version scheme? I would be happy to supply one.
  2. Is there a workaround for the freezing problem described above?

@rbtcollins
Member

So, the APIs to query installed app metadata are in principle compatible with freezing - and if they are fixed, then lots and lots of libraries will start working.

Why are you embedding mock - a testing library - into frozen apps anyway?

I'm not super keen on changing btw, but will consider it if someone does the work. Requirements:

  • semver compatible versions for all git revs
  • clean changelog like currently produced and integrated into docs
  • clean support for conditional requirements (like we're using today)

@rbtcollins
Member

(Oh - and because I don't want someone doing all the work and me saying no, please at least get consensus here on the proposed direction before sinking a lot of time in)

@fabioz
Author

fabioz commented Nov 29, 2016

A note on the "why": tests should always be run against frozen applications to make sure that the tests actually exercise what would be distributed to the final user, not a developer's dev environment (the test task does the freeze, copies the tests to a proper place to be added to the PYTHONPATH, and runs them against the frozen library without any other external references).

@nicoddemus

So, the APIs to query installed app metadata are in principle compatible with freezing - and if they are fixed, then lots and lots of libraries will start working.

I'm not sure, here's the implementation of get_version, which generates the error above:

def get_version(package_name, pre_version=None):
    """Get the version of the project.

    First, try getting it from PKG-INFO or METADATA, if it exists. If it does,
    that means we're in a distribution tarball or that install has happened.
    Otherwise, if there is no PKG-INFO or METADATA file, pull the version
    from git.

    We do not support setup.py version sanity in git archive tarballs, nor do
    we support packagers directly sucking our git repo into theirs. We expect
    that a source tarball be made from our git repo - or that if someone wants
    to make a source tarball from a fork of our repo with additional tags in it
    that they understand and desire the results of doing that.

    :param pre_version: The version field from setup.cfg - if set then this
        version will be the next release.
    """
    version = os.environ.get(
        "PBR_VERSION",
        os.environ.get("OSLO_PACKAGE_VERSION", None))
    if version:
        return version
    version = _get_version_from_pkg_metadata(package_name)
    if version:
        return version
    version = _get_version_from_git(pre_version)
    # Handle http://bugs.python.org/issue11638
    # version will either be an empty unicode string or a valid
    # unicode version string, but either way it's unicode and needs to
    # be encoded.
    if sys.version_info[0] == 2:
        version = version.encode('utf-8')
    if version:
        return version
    raise Exception("Versioning for this project requires either an sdist"
                    " tarball, or access to an upstream git repository."
                    " Are you sure that git is installed?")

It tries these methods to find the version, in order:

  1. Environment variables PBR_VERSION and OSLO_PACKAGE_VERSION;
  2. _get_version_from_pkg_metadata: this looks for PKG-INFO and METADATA files in the same directory as the package, which are not available in the frozen application (all .py files are inside a library.zip file).
  3. _get_version_from_git: the frozen application is not on a git repository.

None of those methods work with a frozen executable.

Perhaps I'm missing something regarding how meta-data can be found in frozen applications?

Why are you embedding mock - a testing library - into frozen apps anyway?

We execute our tests using the frozen application, to ensure we are packaging everything correctly.

We do this by passing a command-line option to our application (--pytest) which, when present, doesn't execute the actual application but passes control to pytest.main(). This of course means that we have to freeze pytest, its plugins, and mock itself, but in our experience it is invaluable for catching packaging problems.
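
A minimal sketch of that entry point (run_application is a hypothetical stand-in for the real program):

import sys

def main(argv):
    if '--pytest' in argv:
        import pytest
        argv = [arg for arg in argv if arg != '--pytest']
        # hand the remaining arguments over to pytest instead of the app
        raise SystemExit(pytest.main(argv[1:]))
    run_application(argv)  # hypothetical: the actual application entry point

if __name__ == '__main__':
    main(sys.argv)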

This is the main problem I'm attempting to resolve: mock since version 1.1.0 uses pbr to define its version (e795a4a), and it doesn't seem to play well with frozen applications (as far as I can tell).

clean changelog like currently produced and integrated into docs

Oh, this is generated automatically by pbr, I didn't know that. None of the solutions I know of handles this, unfortunately.

@tadeu

tadeu commented Nov 29, 2016

Just a side note without looking in too much detail: this issue is probably related to and could fix #383 and #314

@jaraco

jaraco commented Nov 29, 2016

The main issue seems to be that cx_freeze isn't properly incorporating the package metadata. I suggest that issue should be resolved directly, or mock should provide a fallback behavior when the metadata isn't available.

I'd rather not that every package resort to bypassing the proper metadata channels in order to function in environments that fail to supply proper metadata. Mock could do this:

try:
    from pbr.version import VersionInfo  # import shown for completeness
    _v = VersionInfo('mock').semantic_version()
    __version__ = _v.release_string()
    version_info = _v.version_tuple()
except Exception:
    # supply fallback values for environments that break metadata, such as cx_freeze
    __version__ = 'unknown'
    version_info = ('unknown',)

Or maybe this:

except Exception:
    # grab fallback values for environments that break metadata, such as cx_freeze
    from mock import version
    __version__ = version.__version__
    version_info = _parse_version_tuple(__version__)

This approach would plaster over the issue until the version metadata can be properly supported by cx_freeze, but would still rely on the proper metadata in environments that support it.
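
Note that _parse_version_tuple is left undefined above; a minimal sketch of such a helper, assuming plain dotted version strings, might be:

def _parse_version_tuple(version_string):
    # '2.0.0' -> (2, 0, 0); non-numeric parts (e.g. 'rc1') stay as strings
    return tuple(int(part) if part.isdigit() else part
                 for part in version_string.split('.'))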

@nicoddemus

nicoddemus commented Nov 29, 2016

The main issue seems to be that cx_freeze isn't properly incorporating the package metadata.

I understand, but I have the sentiment that even that won't be sufficient. pbr, for example, wouldn't work even if the metadata files were available in the zip file produced by cx_freeze, as can be seen in the source code.

I didn't investigate whether other packages which deal with metadata support reading from zip files produced by freeze tools such as cx_freeze or PyInstaller.

I personally like setuptools_scm's option to generate a simple .py file containing the hard-coded version at build time: users don't have to pay the runtime cost associated with reading/parsing metadata which really doesn't change for a given package installation. This topic is touched on in a recent blog post by @fabioz, using mock specifically as an example.
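
A sketch of how that setuptools_scm option is typically enabled in setup.py (the write_to path is illustrative):

from setuptools import setup

setup(
    name='mock',
    use_scm_version={'write_to': 'mock/_version.py'},  # version file generated at build time
    setup_requires=['setuptools_scm'],
)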

@rbtcollins
Member

So, if absolutely no package consults the metadata, then yes, you can avoid the cost, but it's a tragedy of the commons - there is no incentive to work around it for any one package unless they are the 'one package' causing issues.

That said, I'm not sure why pbr is manually parsing the metadata file instead of using get_distribution; I'm willing to bet someone did an optimisation some time back; I don't think it was me :P. Point is, though, that that fix is cheap - use the right API - and then fix cx_freeze, and whole classes of things that don't work right now will start working properly.
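
For illustration, the "right API" lookup would be roughly this sketch:

import pkg_resources

try:
    version = pkg_resources.get_distribution('mock').version
except pkg_resources.DistributionNotFound:
    version = None  # metadata unavailable, e.g. in a broken frozen install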

The example of 7.6 seconds to import is bad - it's possible, if mock was involved, that it's an install missing the metadata files - if so, it's self-inflicted: the metadata is PEP-described, it's part of Python, and avoiding it just makes everyone's life harder.

@nicoddemus

So, if absolutely no package consults the metadata, then yes, you can avoid the cost, but its a tragedy of the commons - there is no incentive to workaround it for any one package unless they are the 'one package' causing issues;

Well, in this case it is unfortunately the use of pbr in the mock package which is causing problems, so I think this is the right forum for the discussion.

Point is though, that that fix is cheap - use the right API - and then fix cx_freeze, and whole classes of things will work properly that don't right now.

I agree that would be the right approach. I tested whether including the metadata in the zip file would fix this, but it seems things are still broken, for pbr and pkg_resources alike:

$ mkdir mock-dist
$ pip install -U mock --no-compile --target=mock-dist
(zip the contents of the "mock-dist" folder into "library.zip")
$ set PYTHONPATH=C:\Users\bruno\mock-dist\library.zip

The library.zip contains the meta-data, as can be seen in this gist.

Now testing mock:

$ python -c "import mock"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\bruno\mock-dist\library.zip\mock\__init__.py", line 2, in <module>
  File "C:\Users\bruno\mock-dist\library.zip\mock\mock.py", line 71, in <module>
  File "C:\Users\bruno\mock-dist\library.zip\pbr\version.py", line 460, in semantic_version
  File "C:\Users\bruno\mock-dist\library.zip\pbr\version.py", line 447, in _get_version_from_pkg_resources
  File "C:\Users\bruno\mock-dist\library.zip\pbr\packaging.py", line 725, in get_version
Exception: Versioning for this project requires either an sdist tarball, or access to an upstream git repository. Are you sure that git is installed? 

That's expected - it is the same error produced by cx_freeze - but keep in mind that library.zip now contains the metadata, and even so it doesn't work.
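
One quick stdlib check to confirm the metadata really is inside the zip (path as in the traceback above):

import zipfile

with zipfile.ZipFile(r'C:\Users\bruno\mock-dist\library.zip') as zf:
    print([name for name in zf.namelist() if 'dist-info' in name])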

I get a similar error using the pkg_resources API:

>>> import pkg_resources
>>> pkg_resources.get_distribution('mock')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "E:\Miniconda\lib\site-packages\pkg_resources\__init__.py", line 514, in get_distribution
    dist = get_provider(dist)
  File "E:\Miniconda\lib\site-packages\pkg_resources\__init__.py", line 394, in get_provider
    return working_set.find(moduleOrReq) or require(str(moduleOrReq))[0]
  File "E:\Miniconda\lib\site-packages\pkg_resources\__init__.py", line 920, in require
    needed = self.resolve(parse_requirements(requirements))
  File "E:\Miniconda\lib\site-packages\pkg_resources\__init__.py", line 807, in resolve
    raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: mock

It seems the usual packaging tools don't work when you have a single zip file with several distributions inside it, so it is not just a matter of fixing cx_freeze. Or am I doing something wrong in my testing?

Btw, thanks everyone who has contributed to the discussion so far, I really appreciate it.

@jaraco

jaraco commented Nov 30, 2016

Or am I doing something wrong in my testing?

No, you're right. pkg_resources knows how to load egg-info metadata from zipped eggs and dist-info metadata for packages in a file system, but not dist-info from a collection of zipped packages. This is a shortcoming that should probably be addressed. I didn't mean to suggest that it was already supported. I suspect that when dist-info support was added, this use case wasn't considered, as supporting zipped packages is something pip has tried to avoid.

I personally like setuptools_scm option to generate a simple .py file containing the hard-coded version at build time: users don't have to pay the runtime cost associated with reading/parsing meta-data information which really doesn't change for a given package installation. This topic is touched in a recent blog post by @fabioz using mock specifically as an example.

It's a known defect of pkg_resources (though at the moment I can't find a specific ticket) that it performs a large amount of logic at import time. Indeed, it shouldn't be an expensive operation to retrieve the metadata for a single, installed, specified distribution, but at the moment it is. The solution shouldn't be to additionally store the required metadata using a private convention and then load it from there. The solution should be to continue to refine the specs, implement the PEPs, and get the tooling to perform well, such that there's one obvious (and performant) way to retrieve metadata.

@nicoddemus

This shortcoming is probably something that should be supported. I didn't mean to suggest that it was probably already supported.

Oh OK, I misinterpreted it then, thanks for the clarification and confirming the current state of things.

The solution shouldn't be to additionally store the required metadata using a private convention and then load it from there.

I didn't realize that we have a recommended convention for obtaining the installed version from within a package (pkg_resources.get_distribution(), correct?), and in my experience very few packages actually follow it (actually, I can't think of any right now that does).

But I certainly agree that we as a community would benefit more from a clear, efficient and well-defined standard than from having each package adopt its own solution.

You mention "the specs and implement the PEPs"; do you have any pointers to them handy? Would PEP 376 be an example of such a PEP?

@fabioz
Author

fabioz commented Nov 30, 2016 via email

@jaraco

jaraco commented Dec 4, 2016

a recommended convention for obtaining the installed version from within a package (pkg_resources.get_distribution(), correct?)

I also didn't realize that get_distribution() works this way. I've always used require()[0], but looking at the code, get_distribution() looks preferable (and will resolve to require()[0] when appropriate). So yes, I'd say that's a recommended and supported convention.

You mention "the specs and implement the PEPs", do you have any pointers to them handy? PEP-376 would be an example of such PEP?

Yes, PEP 376 is a good start, though I see it was relying on Distutils 2, which never came to fruition. So pkg_resources.get_distribution is probably the best approximation for that functionality. To the extent that the specs and PEPs don't provide a clear specification, I believe the intention is still there - that the recommended place for metadata (including version number) to be defined is in the package metadata.

it's sensible enough to just ... set [the version] at release time and not have to calculate it at import time

I'm not suggesting that the version be calculated at import time, just resolved (linked) to the same information in the package metadata. There are some real, practical challenges with simply copying that value to a file at build time (release time).

The first issue is that the value gets stored twice (rather than stored once and referenced), leading to potential for inconsistency.

The second issue is that it's not obvious where such a file could be reliably generated across all packages, especially when you consider namespace packages. Consider the packages pmxbot and pmxbot.rss - where would a build tool inject the versions for those packages? Should all modules in a package get a version, or only top-level modules/packages? And what defines a top-level package?

A third issue is that of source control. You say, "when I'm looking at source code I expect to see the version for it there," but this expectation can't be met in general. If you have a version there when the code is unreleased, that version will be incorrect for all commits except the released ones. Additionally, if that file is modified as part of a release process (to inject the version being released), that causes files under source control to be modified, requiring additional consideration for development environments.

A fourth issue is that many projects use SCM tags to designate releases in the source code, which is an additional place that the version numbers need to be indicated. Tools like setuptools_scm attempt to unify those versions as well, allowing the version number during a release to be designated in exactly one place and thereafter derived for the package metadata and imported packages.

A fifth issue is that packaging tools like setuptools allow adding tags to a version at build time. For example, setuptools adds a .post0-{date} to non-release build versions in order to distinguish them from official releases. Any such tags would additionally need to be injected into any files in the package.

So while it may seem like an over-complication to define the version in metadata and resolve it through an API, there are some real and practical reasons for doing so, addressing limitations of a simple version file for simple packages.

Most importantly, the principle of not repeating one's self and linking values rather than copying them, coupled with the fact that the version number must be advertised in package metadata, means that the package metadata is where the version number is best indicated.

It seems to me there are two issues at play here. pkg_resources is slow to import (pypa/setuptools#510) and pkg_resources isn't readily available in stdlib, which has an additional limitation that it's bundled with setuptools and can't be installed separately. Does that sound about right?

@fabioz
Author

fabioz commented Dec 6, 2016

Hi jaraco, thank you for your follow up...

The first issue is that the value gets stored twice (rather than stored once and referenced), leading to potential for inconsistency.

I think this depends largely on the approach chosen... you can still devise a way to have it in only a single place (either derived from the SCM or stored in a version file somewhere).

For reference, requests just reads the version from the source (https://github.com/kennethreitz/requests/blob/master/setup.py), which seems a sensible approach to having it defined in only a single place. Another approach could be getting it from the SCM and storing it at release time, where the single place would be the SCM (and it would remain undefined until release time).
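
The requests pattern, roughly: setup.py reads __version__ out of the package source with a regular expression instead of importing the package (the exact regex here is an approximation):

import re

with open('requests/__init__.py') as f:
    version = re.search(r"__version__ = '([^']+)'", f.read()).group(1)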

The second issue is that it's not obvious where such a file could be reliably generated across all packages, especially when you consider namespace packages. Consider the packages pmxbot and pmxbot.rss - where would a build tool inject the versions for those packages? Should all modules in a package get a version, or only top-level modules/packages? And what defines a top-level package.

Well, for namespace packages, I think that each package inside the root namespace is a completely different beast with its own version, so I don't see any issues there. (Personally, I think that namespace packages go a bit against the very nature of Python, but if you must use them, then every package inside one should be completely independent anyway and should have its own version -- and you have those same issues with any approach chosen, probably even worse, as fetching the metadata for those cases makes things even less straightforward.)

A third issue is that of source control. You say, "when I'm looking at source code I expect to see the version for it there," but this expectation can't be met in general. If you have a version there when the code is unreleased, that version will be incorrect for all commits except the released ones. Additionally, if that file is modified as part of a release process (to inject the version being released), that causes files under source control to be modified, requiring additional consideration for development environments.

I'd like to politely disagree, since in most projects I can just go to GitHub and see it.

I.e.:
https://github.com/django/django/blob/master/django/__init__.py
https://github.com/kennethreitz/requests/blob/master/requests/__init__.py
https://github.com/simplejson/simplejson/blob/master/simplejson/__init__.py

Also, if you have the version derived from the SCM, then you'll only know the version at release time (which is a different approach -- and in that case there are also solutions, such as generating a separate module for the version, keeping it in .gitignore, and having __version__ read from it, leaving __version__ undefined if it's not available).
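
A sketch of that fallback (the _version module name is hypothetical):

# mock/__init__.py
try:
    # mock/_version.py is generated at release time and listed in .gitignore
    from mock._version import __version__
except ImportError:
    __version__ = None  # undefined outside of a release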

A fourth issue is that many projects use SCM tags to designate releases in the source code, which is an additional place that the version numbers need to be indicated. Tools like setuptools_scm attempt to unify those versions as well, allowing the version number during a release to be designated in exactly one place and thereafter derived for the package metadata and imported packages.

I think that if this is the approach chosen, then at build time it seems it would be straightforward to update the module to have the proper version -- and then, by definition, a version would only be valid at release. Trying to derive a version from git doesn't seem consistent -- i.e.: if the user gets a tarball from a branch, what's the version? Why is it different if checked out using git? I'd say that in this case the version should always be undefined, for consistency.

A fifth issue is that packaging tools like setuptools allow adding tags to a version at build time. For example, setuptools adds a .post0-{date} to non-release build versions in order to distinguish them from official releases. Any such tags would additionally need to be injected into any files in the package.

This seems fair to me (i.e.: not an issue: you can do whatever you wish at release time, as long as it's properly installed later on).

So while it may seem like an over-complication to define the version in metadata and resolve it through an API, there are some real and practical reasons for doing so, addressing limitations of a simple version file for simple packages.

Most importantly, the principle of not repeating one's self and linking values rather than copying them, coupled with the fact that the version number must be advertised in package metadata, means that the package metadata is where the version number is best indicated.

It seems to me there are two issues at play here. pkg_resources is slow to import (pypa/setuptools#510) and pkg_resources isn't readily available in stdlib, which has an additional limitation that it's bundled with setuptools and can't be installed separately. Does that sound about right?

Not really. The major one reported (i.e.: that it doesn't work properly with cx_freeze or any other "freeze" tool) is still an issue.

Also, I think those are nice tools, but having them set the __version__ of the module at import time is very problematic in a number of circumstances... so I still think a simpler approach would be better, and it covers all the use cases you're proposing -- i.e.: having __version__ set either in the file itself or generated at release time, approaches which won't add a dependency on setuptools or pkg_resources to start with.

@jaraco

jaraco commented Dec 7, 2016

In all of these cases, it's possible to mistakenly mis-label a release. Consider requests:

$ python -m rwt git+https://github.com/kennethreitz/requests/@7d2dfa8
Collecting git+https://github.com/kennethreitz/requests/@7d2dfa8
  Cloning https://github.com/kennethreitz/requests/ (to 7d2dfa8) to /private/var/folders/c6/v7hnmq453xb6p2dbz1gqc6rr0000gn/T/pip-414vdyqp-build
  Could not find a tag or branch '7d2dfa8', assuming commit.
Installing collected packages: requests
  Running setup.py install for requests ... done
Successfully installed requests-2.12.2
Python 3.6.0b4 (v3.6.0b4:18496abdb3d5, Nov 21 2016, 20:44:47) 
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import requests
>>> requests.__version__
'2.12.2'

In this case, you've installed code that's indistinguishable in behavior from 2.12.3, but is indicated as version 2.12.2.

I'd like to politely disagree when in most projects I can just go to github and see it.

The issue I see is that if you browse to a particular commit, the version indicated is the same version as in many adjacent commits, only one of which might be the official release of that version (or that version may never have been released). The version in that file is more often incorrect and misleading than correct. It does, admittedly, give an approximation of the proper version.

Thinking about tags, consider if one accidentally or maliciously tagged a version that differs from the version number in the file system. I've seen this happen on dozens of occasions... because it requires two separate manual steps to be correctly executed in order, leading to tools like bumpversion, but even those are subject to operator error. I personally seek a solution that minimizes repetition and potential for operator error.

Still, I respect that other projects might prefer to do more of these steps manually and manage the consistency through convention. I'm okay with that, though I still struggle to think of a recommendation one could make that works in the general case.

As I think more about the issue with namespace packages, I realize the issue is less about the use of namespace packages and more about the disparity between a distribution's version and a Python package's version. requests.__version__ reflects the version of the requests distribution, but that approach only works when there's a clear, one-to-one mapping between distribution and Python package. Consider the pytest-runner project. That project is released as a distribution "pytest-runner==2.10.1" which installs only a single, top-level ptr module. Or the setuptools distribution, where that single distribution exposes two packages (setuptools and pkg_resources), the latter of which wasn't even a package until fairly recently.

I admire allowing (and recommending) the developer to expose a version number (such as setuptools.__version__) or not (as with pkg_resources or ptr or pmxbot.rss, which don't advertise a version for that package or module), while still giving distribution management systems the ability to manage distributions by version and not imposing unnecessary constraints on the structure of a given distribution. It's also more consistent with the packages and modules found in the stdlib, some of which expose __version__ and others of which don't.

So I guess to summarize, while I welcome projects to expose __version__ however is appropriate for that project, I don't think that convention can possibly replace the general need for an API to expose the versions of distributions (such as used here or one of the thousands of other instances of this API).

If the maintainers of mock want to stop using this API and maintain the version another way, that's fine, but that approach is only an incomplete workaround leaving the underlying cause (inability to get distribution versions in cx_freeze applications) unaddressed.

@nicoddemus

nicoddemus commented Dec 7, 2016

@jaraco what's your opinion on the approach taken by setuptools_scm, which has the option to derive the version at build time and write a small Python file with that information, which is then exposed by the package via simple import?

In my point of view, this has the benefits of automatically deriving the version number from tags while having no extra run-time overhead to provide the version when installed.

If the maintainers of mock want to stop using this API and maintain the version another way, that's fine, but that approach is only an incomplete workaround leaving the underlying cause (inability to get distribution versions in cx_freeze applications) unaddressed.

Indeed it is mostly a workaround, but is it common to query other information from the distribution (such as installed files)?

I think it boils down to the fact that, alas, the version of a package/module is traditionally defined in a __version__ variable as opposed to a function like get_version(). The former must be resolved at import time, even if the user of the module never actually checks the version.

jaraco added a commit to pypa/setuptools that referenced this issue Dec 7, 2016
@jaraco

jaraco commented Dec 7, 2016

what's your opinion on the approach taken by setuptools_scm?

I'm generally in favor (+1). I don't use the file-writing feature of setuptools_scm, but if that works for you, I think that approach alleviates a number of my concerns.

The former must be resolved at import time, even if the user of the module never really checks the actual version.

It must be resolved to something, which could perhaps fallback to a constant value like 'unknown' in cases where get_version() fails.

@nicoddemus

I'm generally in favor (+1). I don't use the file-writing feature of setuptools_scm, but if that works for you, I think that approach alleviates a number of my concerns.

I see, thanks. Using setuptools_scm was my initial attempt at contributing to mock to solve this problem, but unfortunately I hit the problems with the version_info public variable and CHANGELOG generation mentioned at the beginning of the issue.

It must be resolved to something, which could perhaps fallback to a constant value like 'unknown' in cases where get_version() fails.

Sorry, I meant to use the current methods in a function, like this:

def get_version():
    # Lazy import: the pkg_resources cost is only paid when the
    # version is actually requested, not at package import time.
    import pkg_resources
    try:
        return pkg_resources.get_distribution('setuptools').version
    except Exception:
        return 'unknown'

And then users can get the version by calling setuptools.get_version() instead of setuptools.__version__.

@jaraco

jaraco commented Dec 9, 2016

leaving the underlying cause (inability to get distribution versions in cx_freeze applications) unaddressed.

I thought I would file a ticket with cx_freeze to capture this need and recommend (at least at a high level) a path to a solution.

@nicoddemus

FWIW there's #362 which reverts the use of pbr back in favor of a manual versioning scheme.

@rbtcollins
Member

#362 was actually about providing a stub ChangeLog as a convenience to some folks, which I'm fine with, but the PR had an additional commit adding manual versioning, which I'm not.

@nicoddemus

@rbtcollins oh OK, thanks for the clarification.

@rbtcollins
Member

CX freeze ticket is now at marcelotduarte/cx_Freeze#216

@rbtcollins
Member

I'm closing this now, since we all seem to be of the understanding that cx_freeze is incompatible with some base packaging PEPs, and the onus is on that project to fix it.
