forked from vllm-project/vllm
Issues: HabanaAI/vllm-fork
[Bug]: the new triton version (3.2.0) is not compatible
bug · #732 · opened Jan 23, 2025 by lkk12014402
[Bug]: Llama 3.2 11B Vision not working with vllm serve on Gaudi 2
bug · #710 · opened Jan 20, 2025 by akarX23
[Bug]: Cannot Run Qwen2 Embedding Model on Gaudi
bug · #583 · opened Dec 4, 2024 by rvoleti89
[Bug]: the generated text on BFloat16 is not as good as that on Float32
bug · #443 · opened Oct 29, 2024 by ccrhx4
[RFC]: change VLLM_DECODE_BLOCK_BUCKET_* design to fit small AND large batch size at one warmup
intel · #328 · opened Sep 24, 2024 by ccrhx4