
[Misc] Update Fused MoE weight loading #7334

Merged
merged 5 commits into vllm-project:main on Aug 13, 2024

Conversation

dsikka
Contributor

@dsikka dsikka commented Aug 9, 2024

Summary:

  • Splits [Kernel] AWQ Fused MoE #6422 into two separate PRs. This is the first of the two; the second will leverage the weight loading changes introduced in this PR while adding the AWQ Fused MoE kernel.
  • Refactors FusedMoE.weight_loader to enable loading AWQ models, whose weights are stored transposed on disk as (input_dim, output_dim), whereas fp16 and fp8 models store (output_dim, input_dim). This required more complex logic for handling indexing in the tensor-parallel (TP) case and the MergedColumn case.
  • Refactors expert_params_mapping, which was overfit to fp16 and fp8 checkpoints. This required renaming the scale parameters in fp8 to better match the state dicts we create in AutoFP8, limiting the amount of remapping needed in the model files.
  • Updates layers to use fused_topk/grouped_topk and fused_experts, rather than calling fused_moe directly, so that the logic can be reused across fp16, fp8, and AWQ.
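The transposed-layout handling described in the second bullet can be sketched in plain numpy. This is a simplified, hypothetical illustration (the function name `select_tp_shard` and its signature are assumptions, not vLLM's actual implementation): the loader must flip which axis it shards for TP depending on whether the checkpoint stores weights transposed (AWQ-style) or not (fp16/fp8-style).

```python
import numpy as np

def select_tp_shard(loaded, shard_id, tp_rank, tp_size, is_transposed):
    """Return the slice of a checkpoint tensor owned by one TP rank.

    Hypothetical sketch, not vLLM code. is_transposed=True models an
    AWQ-style checkpoint stored as (input_dim, output_dim);
    False models fp16/fp8 checkpoints stored as (output_dim, input_dim).
    """
    # Which axis holds the output dim depends on the on-disk layout.
    output_axis = 1 if is_transposed else 0
    input_axis = 1 - output_axis
    # w2 (row-parallel) is sharded along the input dim;
    # w1/w3 (merged column-parallel) along the output dim.
    axis = input_axis if shard_id == "w2" else output_axis
    shard_size = loaded.shape[axis] // tp_size
    start = tp_rank * shard_size
    return np.take(loaded, np.arange(start, start + shard_size), axis=axis)
```

With the axis choice centralized like this, the same loader path can serve both layouts instead of special-casing each quantization scheme.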

From Neural Magic
Co-authored-by: @robertgshaw2-neuralmagic
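The routing step the last summary bullet refers to (fused_topk) boils down to a softmax over the router logits followed by top-k expert selection with optional renormalization. A minimal numpy sketch of that computation (the name `topk_softmax` and signature are illustrative; vLLM's kernel fuses this on the GPU):

```python
import numpy as np

def topk_softmax(gating_logits, topk, renormalize=True):
    """Pick `topk` experts per token from router logits.

    Illustrative numpy version of the fused_topk routing step,
    not vLLM's actual kernel.
    """
    # Softmax over the expert dimension (numerically stabilized).
    z = gating_logits - gating_logits.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)

    # Indices of the top-k experts per token, highest weight first.
    topk_ids = np.argsort(-probs, axis=-1)[:, :topk]
    topk_weights = np.take_along_axis(probs, topk_ids, axis=-1)

    if renormalize:
        # Re-normalize so the selected experts' weights sum to 1.
        topk_weights = topk_weights / topk_weights.sum(axis=-1, keepdims=True)
    return topk_weights, topk_ids
```

Splitting routing (topk) from expert execution (fused_experts) is what lets fp16, fp8, and AWQ layers share the same dispatch logic while swapping only the expert compute.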


github-actions bot commented Aug 9, 2024

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which consists of a small, essential subset of CI tests to quickly catch errors. You can run additional CI tests on top of the default ones by unblocking the steps in your fastcheck build on the Buildkite UI.

Once the PR is approved and ready to go, please make sure to run full CI as it is required to merge (or just use auto-merge).

To run full CI, you can do one of these:

  • Comment /ready on the PR
  • Add ready label to the PR
  • Enable auto-merge.

🚀

@dsikka dsikka marked this pull request as ready for review August 9, 2024 14:51
@dsikka dsikka changed the title [Misc] Update fused moe structure [Misc] Update Fused MoE weight loading Aug 9, 2024
@dsikka
Contributor Author

dsikka commented Aug 9, 2024

/ready

@github-actions github-actions bot added the ready ONLY add when PR is ready to merge/full CI is needed label Aug 9, 2024
@dsikka dsikka mentioned this pull request Aug 9, 2024
Member

@mgoin mgoin left a comment


This LGTM but have you verified that DeepSeek MoE is okay with this PR?

Review comments on vllm/model_executor/layers/fused_moe/layer.py (all resolved)
@dsikka dsikka requested a review from comaniac August 13, 2024 01:49
@dsikka
Contributor Author

dsikka commented Aug 13, 2024

This LGTM but have you verified that DeepSeek MoE is okay with this PR?

Yes: DeepSeek, Mixtral, and Qwen.

@dsikka dsikka requested a review from mgoin August 13, 2024 17:18
Member

@mgoin mgoin left a comment


Thank you, this looks great after review!

@mgoin mgoin merged commit d3bdfd3 into vllm-project:main Aug 13, 2024
53 checks passed
@mgoin mgoin deleted the update-fused-moe branch August 13, 2024 18:57
kylesayrs pushed a commit to neuralmagic/vllm that referenced this pull request Aug 17, 2024
fialhocoelho pushed a commit to opendatahub-io/vllm that referenced this pull request Aug 22, 2024
Alvant pushed a commit to compressa-ai/vllm that referenced this pull request Oct 26, 2024
KuntaiDu pushed a commit to KuntaiDu/vllm that referenced this pull request Nov 20, 2024
Labels
ready ONLY add when PR is ready to merge/full CI is needed
3 participants