[Bugfix] use_existing_torch.py: filter out comments #12255
Conversation
Signed-off-by: Thomas Parnell <[email protected]>
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. 🚀
cc @youkaichao
Signed-off-by: Thomas Parnell <[email protected]>
See upstream PR: vllm-project/vllm#12255. We would like to merge the fix here ASAP though, because it is breaking our images.
It usually means the package needs to build against PyTorch, and if you are using a custom PyTorch build, the public version will not be applicable.
Unless I'm missing something here, it seems like the packages should be flexible enough to work with a custom torch version (within reason). These are not optional dependencies, and if they are not installed we get errors at import time.
If the packages are not compatible with the custom torch version, shouldn't that be encoded in their package dependencies, so that the install fails? IMO that doesn't mean we shouldn't try to install them.
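For instance, a minimal sketch of how a package could encode such a constraint in its own metadata (hypothetical package name and version bounds):

```python
# Hypothetical setup.py: if some-package only supports a range of torch
# versions, declaring that here makes an install against an incompatible
# custom torch fail explicitly, instead of the dependency being dropped
# silently on the vLLM side.
from setuptools import setup

setup(
    name="some-package",                   # hypothetical
    version="0.1.0",
    install_requires=["torch>=2.1,<2.6"],  # hypothetical bounds
)
```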
The script was originally developed for the ARM platform (GH200), and these packages do not have ARM wheels. It seems we only run …
I see. Would it make sense to filter out the dependencies in a platform-specific way? (Not sure what the actual marker string is there.) Probably if one wants to use vLLM on that platform, though, these packages will need to be compiled from source.
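For example, something along these lines, using PEP 508 environment markers (illustrative only; the exact marker value and any version pins would need checking):

```
# Hypothetical requirements entries: skip these packages on ARM machines
# instead of having use_existing_torch.py strip them for everyone.
outlines; platform_machine != "aarch64"
compressed-tensors; platform_machine != "aarch64"
```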
Can you try to remove these two comments and see if they build successfully? We intentionally consider comments in the requirements files.
Sure, this change works fine in our setup (we are just using a slightly older version of PyTorch than vLLM):
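Concretely, the entries end up looking something like this (package names from the discussion; pins and the original comment text are illustrative only):

```
# Illustrative only -- exact pins and original comments omitted.
# Before (filtered out, because the trailing comment contains "torch"):
#   outlines == X.Y.Z            # requires torch
#   compressed-tensors == X.Y.Z  # torch-based quantization
# After (kept, since "torch" no longer appears anywhere on the line):
outlines == X.Y.Z
compressed-tensors == X.Y.Z
```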
Then why do we need this PR?
If removing dependencies that have torch in the comment is the desired behaviour, we should not change the script. However, as it stands, if someone wants to use vLLM with a slightly older PyTorch version, running this script will strip out required dependencies (outlines, compressed-tensors) and the package will not work (see import errors above). If the comments for those packages are not strictly needed, we could remove them from the requirements file instead. If we agree on the solution, I can either change this PR or create a new one.
Let's try this approach, thanks.
Closing (will resolve in a different way, as per the discussion).
…ncies (#68)

I already tried to fix this using #66, but upstream didn't like that change (the behaviour of filtering out comments containing torch was intentional). After [some discussion](vllm-project/vllm#12255), we agreed on a different solution, implemented in this PR. Note that I reverted the changes from #66 by force-pushing main. This has already been merged upstream via vllm-project/vllm#12260, but I'm cherry-picking the fix here since it is blocking the CI builds.
The current `use_existing_torch.py` filters out any line that contains the word `torch`, but right now in `requirements-common.txt` we have lines whose trailing comments mention torch, and this is causing those lines (outlines, compressed-tensors) to get removed. This is breaking some of our images that use a custom PyTorch version, because these dependencies end up not getting installed.
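For context, the behaviour being described boils down to a simple substring filter. A minimal sketch (not the actual vLLM script, which processes several requirements files; the file list here is illustrative):

```python
# Minimal sketch of the filtering behaviour: any line whose text contains
# "torch" is dropped -- including lines where "torch" only appears in a
# trailing comment, which is what strips outlines and compressed-tensors.
files = ["requirements-common.txt"]  # the real script covers more files

for path in files:
    with open(path) as f:
        lines = f.readlines()
    kept = [line for line in lines if "torch" not in line.lower()]
    with open(path, "w") as f:
        f.writelines(kept)
```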