See this by the author of Rye:
https://lucumr.pocoo.org/2024/8/21/harvest-season/
"Unified" packaging:
But like you said, Poetry is working so well that I'll wait a little longer before jumping ship.
I am using uv and it seems great.
I don't understand the difference between using "python -m venv .venv; source .venv/bin/activate" and creating a venv with uv and then running the same source command. What does uv offer or integrate that's not already present in Python's venv module?
Is there a helper to merge venv1 and venv2, or to create a venv2 that uses venv1's dependencies, so that on load the two are merged?
From that perspective, "merging" them directly defeats the purpose. What is needed is a better library ecosystem.
An important reason for using them is to test deployment: if your code works in a venv that only has specific things installed (not just some default "sandbox" where you install everything), then you can be sure that you didn't forget to list a dependency in the metadata. Plus you can test with ranges of versions of your dependencies and confirm which ones your library will work with.
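A minimal sketch of that workflow, assuming a hypothetical package name `mypkg` and POSIX-style paths:

```python
import subprocess
import venv

# Create a fresh, empty environment (with pip) to install into.
venv.create(".test-env", with_pip=True)

py = ".test-env/bin/python"  # POSIX layout; on Windows it's .test-env\Scripts\python.exe

# Install only the project itself; dependencies come from its declared metadata.
subprocess.run([py, "-m", "pip", "install", "."], check=True)

# If a dependency is missing from the metadata, this import fails here
# rather than on a user's machine.
subprocess.run([py, "-c", "import mypkg"], check=True)
```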
They're also convenient for making neat, isolated installations for applications. Pipx wraps pip and venv to do this, and as I understand it there's similarly uvx for uv. This is largely about making sure you avoid "interference", but also about pinning specific versions of dependencies. It also lowers the bar somewhat for your users: they still need to have a basic idea of what Python is and know that they have a suitable version of Python installed, but you can tell them to install Pipx if they don't have it and run a single install command, instead of having to wrestle with both pip and venv and then also know how to access the venv's console scripts that might not have been put on PATH.
More importantly, at runtime you can only have one version of a given package, because the imports are resolved at runtime. Pip won't put multiple versions of the same library into the same environment normally; you can possibly force it to (or more likely, explicitly do it yourself) but then everyone that wants that library will find whichever version gets `import`ed and cached first, which will generally be whichever one is found first on `sys.path` when the first `import` statement is reached at runtime. (Yes, the problem is the same if you only have one venv in the first place, in the sense that your dependency graph could be unsolvable. But naively merging venvs could mean not noticing the problem until, at runtime, something tries to import something else, gets an incompatible version, and fails.)
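A small sketch of that resolution order, with hypothetical venv paths and a hypothetical package `somelib`:

```python
import sys

# Pretend two venvs each ship their own copy of "somelib".
sys.path.insert(0, "/venvs/app1/lib/python3.12/site-packages")  # somelib 2.x
sys.path.append("/venvs/app2/lib/python3.12/site-packages")     # somelib 1.x

import somelib                # binds whichever copy is found first on sys.path
print(somelib.__version__)    # 2.x; app2's copy is silently shadowed

# The first import is cached, so every later `import somelib` anywhere in
# the process gets the same module object back:
print(sys.modules["somelib"])
```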
uv supports dependency groups and can create a venv with the desired set of groups: https://docs.astral.sh/uv/concepts/dependencies/#development...
For example, there can be a "dev" group that includes the "test", "mkdocs", and "nuitka" groups (Nuitka wants to be run from the venv it builds a binary for, so to keep that venv minimal, it gets a group of its own).
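As a sketch, that layout could look something like the following in pyproject.toml, using the standard [dependency-groups] table that uv reads; the group contents here are assumptions, not the original poster's actual configuration:

```toml
[dependency-groups]
test = ["pytest"]
mkdocs = ["mkdocs"]
nuitka = ["nuitka"]
dev = [
    { include-group = "test" },
    { include-group = "mkdocs" },
    { include-group = "nuitka" },
]
```

Something like `uv sync --group dev` then installs everything for development, while syncing only the "nuitka" group keeps the build venv minimal (see the linked docs for the exact flags).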
Perhaps it still creates a copy of the package files in the virtual environment, so the cache only saves repeated downloads and not local disk space. If that's the case, then this does look really useful.
It doesn't need to work like that. A lot of libraries would work fine directly from the wheel, because they're essentially renamed zip files and Python knows how to import from the archive contents directly. But this doesn't work if the package is supposed to come with any mutable data, nor if it tries to use ordinary file I/O for its immutable data (you're supposed to use a standard library helper for that, but awareness is poor and it's a hassle anyway). Long ago, the "egg" format expected you to set a "zip-safe" flag (via Setuptools, back when it was your actual packaging tool rather than just a behind-the-scenes helper wrapped in multiple backwards-compatibility layers) so that installers could choose to leave the archive zipped. But I don't know that Pip ever actually used that information, and it was easy to get wrong.
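For reference, the standard-library helper alluded to above is importlib.resources; here's a sketch with a hypothetical package `mypkg` shipping a data file:

```python
from importlib.resources import files

# Works whether "mypkg" is an unpacked directory or lives inside a zip/wheel:
text = files("mypkg").joinpath("schema.json").read_text(encoding="utf-8")

# The naive version breaks under zip imports, because there is no real file
# on disk to open:
#
#   import os
#   path = os.path.join(os.path.dirname(__file__), "schema.json")
#   text = open(path).read()
```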
But more importantly, the contents of the virtual environment could be referenced from a cache that contained actual wheels (packed or unpacked) by hard links, `.pth` files (with a slight performance hit) or symlinks (I'm pretty sure; haven't tested).
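To make the `.pth` variant concrete, here's a sketch (untested, like the comment says) that points a venv at an unpacked wheel in a hypothetical shared cache:

```python
import sysconfig
from pathlib import Path

# Hypothetical shared cache holding an unpacked wheel's contents.
cache_entry = Path("/cache/unpacked/somelib-2.0.0")

# This environment's site-packages directory.
site_packages = Path(sysconfig.get_paths()["purelib"])

# Each line of a .pth file is appended to sys.path at interpreter startup,
# so the cached copy becomes importable without copying any files. The
# slight performance hit is the extra sys.path entry scanned on every start.
(site_packages / "somelib-cached.pth").write_text(f"{cache_entry}\n")
```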
Github Repo: https://github.com/lmstudio-ai/venvstacks
Blog post: https://lmstudio.ai/blog/venvstacks
And yes, everything is already solved by due diligence (non-existent in the scientific community) and Nix.
In all seriousness, it is a bit tiresome when you come back to the python world and see a completely fragmented ecosystem (poetry, pdm, pip-tools, uv, and the older ways).
My personal view is that "one standard tool" makes sense for users (i.e. people who will install applications, install libraries in order to create their own personal projects, etc.) but not for developers (i.e. people who will publish their code, or need to care about rigorous testing, published documentation etc.). The latter require the users' install tool, plus specific-scoped tools according to their personal preferences and their personal conception of what the problems are in "development" that need solving. That is, they should be able to build their own toolchain, although they might start with a standard recommendation.
Pip's scope has already crept in the sense that it exposes a `wheel` subcommand. It needs to be able to build wheels in order to install from sdists, but exposing this functionality means providing... half of a proper build frontend. (PyPA already offers a complete, simple, proper build frontend, called `build`; if `pip wheel` is part of your deployment process, you should probably be using `build` instead.)
(This is the "can it run Doom?" joke, but for Python environments!)
Excellent advice. Add fiona and shapely for manipulating vector data, and pyproj for projections and coordinate systems. Yes, there are corner cases where installing 'real' GDAL is still needed, but for most common use cases you can avoid it.
Poetry works fine and solves dependencies and peer dependencies. But I guess the JavaScript-style churn of 'every week a new tool that's better than the last' has arrived in Python land too.
But 'venv detection' is conventionally (everywhere) done by a stupid, never-designed heuristic involving inspecting argv0. At the same time, most Python packaging tools make various assumptions like "if I'm in a venv, I can write into it", or "there will only ever be one level of venvery, so I'm not in a venv iff I'm looking at the system Python".
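For contrast, the check the venv documentation itself sanctions is comparing prefixes from inside the interpreter, which says nothing about layering:

```python
import sys

def in_venv() -> bool:
    # Inside a venv, sys.prefix points at the venv while sys.base_prefix
    # still points at the underlying interpreter installation.
    return sys.prefix != sys.base_prefix

print(in_venv())
```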
Nesting/layering venvs has never been supported by any Python dependency management tool except for experiments that were rejected by more senior people in the community, the maintainers of major dependency management tools, etc.
Same kind of thing goes for allowing multiple versions of a dependency in the dependency tree at runtime.
There's little appetite for fixing Python's dependency management issues in a principled or general way, because that would require breaking things or adding limitations. You can't have dependencies partially managed by arbitrary code that runs only at install time and also build good packaging tools, for instance.
Of course we haven't figured it out yet.
The native-ABI PyObject CUDA .so/.dll shit had way too many serious problems.
Other languages have the same problem too; think of something like cgo or JNI.
They have a compatibility matrix. It's mostly red. https://wiki.qt.io/Qt_for_Python#Python_compatibility_matrix
How do you even make your stuff dependent on, or broken by, specific Python versions? I mean, how in hell?
The fact that venv is so widely used in Python was always an indication that not all was well dependency-wise, but it doesn't seem like it's optional anymore.
https://docs.python.org/3/c-api/stable.html
As time passes, it makes sense to support only recent versions of both Qt and Python, hence that matrix.
If pyenv and poetry solve all your problems then it's a perfectly fine setup.
There's also the pip-compatible mode of uv which is much, much faster than pip.
I'm very wary of a new tool announcement that doesn't appear to mention why the existing tools/solutions were not sufficient. Which gap does this fill?
Edit: the answer to my second question is at https://venvstacks.lmstudio.ai/design/