[Kernel] add triton fused moe kernel for gptq/awq #12185

Open · wants to merge 17 commits into main
Conversation

@jinzhen-lin (Contributor) commented Jan 18, 2025

Currently the only option for running MoE models with GPTQ/AWQ is the Marlin kernel, but a single marlin_gemm_moe call launches at least num_experts CUDA kernels, while the fused_moe Triton kernel only needs a single launch. This makes the Marlin kernel significantly slower than the fused_moe Triton kernel.

This PR adds support for the fused_moe Triton kernel with GPTQ/AWQ.
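For context, the core trick in a w4a16 fused MoE kernel is dequantizing the packed int4 weights with group-wise scales and zero points inside the grouped GEMM rather than in a separate pass. Below is only an illustrative PyTorch sketch of that dequantization math, assuming a simple row-major nibble packing; the real GPTQ/AWQ layouts and the Triton kernel in this PR differ in detail.

```python
import torch

# Illustrative w4a16 dequantization (NOT the Triton kernel in this PR).
# Assumes a simple layout: 8 int4 values packed per int32 along K,
# with one (scale, zero) pair per `group_size` rows of K.
def dequant_int4(qweight: torch.Tensor,  # [K // 8, N] int32
                 scales: torch.Tensor,   # [K // group_size, N] fp16
                 zeros: torch.Tensor,    # [K // group_size, N] int (already unpacked)
                 group_size: int) -> torch.Tensor:
    K = qweight.shape[0] * 8
    shifts = torch.arange(0, 32, 4, device=qweight.device)       # nibble offsets
    w = (qweight.unsqueeze(1) >> shifts.view(1, -1, 1)) & 0xF    # [K//8, 8, N]
    w = w.reshape(K, qweight.shape[1]).to(scales.dtype)
    g = torch.arange(K, device=qweight.device) // group_size     # group index per K row
    return (w - zeros[g].to(scales.dtype)) * scales[g]           # [K, N]
```

Fusing this dequantization into the expert GEMMs is what lets a single kernel launch cover all selected experts.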

Generation speed of deepseek-v3-awq (8×A100-SXM4-80GB, bs=1, short prompt):

|            | marlin moe kernel | triton fused moe kernel |
|------------|-------------------|-------------------------|
| w/o #12222 | 5.4 tok/s         | 10.0 tok/s              |
| w/ #12222  | 11.1 tok/s        | 29.6 tok/s              |

Note:

  1. [Kernel] optimize moe_align_block_size for cuda graph and large num_experts (e.g. DeepSeek-V3) #12222 enables CUDA graph support and adds a shared-memory `moe_align_block_size` kernel for DeepSeek-V3.
  2. To enable this kernel:
```bash
python -m vllm.entrypoints.openai.api_server \
    --served-model-name model \
    --model cognitivecomputations/DeepSeek-V3-AWQ \
    --tensor-parallel-size 8 \
    --trust-remote-code \
    --max-model-len 24576 \
    --dtype half \
    --max-num-seqs 16 \
    --gpu-memory-utilization 0.96 \
    --quantization moe_wna16
```
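For reference, the same opt-in should also work from the offline Python API; a minimal sketch, assuming the usual `LLM` constructor arguments mirror the CLI flags above:

```python
from vllm import LLM, SamplingParams

# Offline-inference sketch mirroring the server flags above.
llm = LLM(
    model="cognitivecomputations/DeepSeek-V3-AWQ",
    tensor_parallel_size=8,
    trust_remote_code=True,
    max_model_len=24576,
    dtype="half",
    max_num_seqs=16,
    gpu_memory_utilization=0.96,
    quantization="moe_wna16",  # opt into the Triton fused MoE kernel
)
out = llm.generate(["Hello"], SamplingParams(max_tokens=32))
print(out[0].outputs[0].text)
```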

@casper-hansen (Contributor)

@mgoin @robertgshaw2-redhat Could we expedite this PR + #12036 (not sure if #12204 is needed too or has overlap) now that DeepSeek has released their full lineup?

@jinzhen-lin (Contributor, Author)

> @mgoin @robertgshaw2-redhat Could we expedite this PR + #12036 (not sure if #12204 is needed too or has overlap) now that DeepSeek has released their full lineup?

I just created a new PR with a better moe_align_block_size; please take a look at #12222.

@casper-hansen (Contributor)

I think this PR could be closed in favor of #12222. Thanks for your work @jinzhen-lin

@jinzhen-lin (Contributor, Author)

> I think this PR could be closed in favor of #12222. Thanks for your work @jinzhen-lin

#12222 is an optimization over #12036 / #12204; it can be combined with this PR for better performance.

@mgoin (Member) commented Jan 20, 2025

Thank you for the work! We will take a look now

@mgoin (Member) commented Jan 20, 2025

Considering that this adds "another option" for running quantized MoE models, maybe we should write a documentation page specifically for MoE quantization.

I think the best case for this kernel to be used more broadly would be to have a heuristic on the number of experts, or some configuration option, to decide whether to use the Triton or Marlin kernel.

@jinzhen-lin (Contributor, Author)

> Considering that this adds "another option" for running quantized MoE models, maybe we should write a documentation page specifically for MoE quantization.
>
> I think the best case for this kernel to be used more broadly would be to have a heuristic on the number of experts, or some configuration option, to decide whether to use the Triton or Marlin kernel.

I just tested with a small MoE model (https://huggingface.co/Qwen/Qwen1.5-MoE-A2.7B-Chat-GPTQ-Int4), and the Triton kernel seems much faster than the Marlin kernel there as well. Besides, the Marlin kernel seems to generate wrong results for Qwen/Qwen1.5-MoE-A2.7B-Chat-GPTQ-Int4.

Test results on 1×A100:

marlin kernel:

$time curl -X POST http://127.0.0.1:8000/v1/chat/completions     -H 'Content-Type: application/json'     -d '{ "model": "model", "temperature": 0.0, "messages": [ { "role": "user", "content": "write a very long article" } ], "stream": false, "max_tokens": 512, "min_tokens": 512}'
{"id":"chatcmpl-99d4b90d4ee14c15a82bd278b3cbcfd1","object":"chat.completion","created":1737439221,"model":"model","choices":[{"index":0,"message":{"role":"assistant","content":"数,数,数,数,数数,数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数数","tool_calls":[]},"logprobs":null,"finish_reason":"length","stop_reason":null}],"usage":{"prompt_tokens":24,"total_tokens":536,"completion_tokens":512,"prompt_tokens_details":null},"prompt_logprobs":null}
real    0m6.618s
user    0m0.002s
sys     0m0.003s

triton kernel:

$time curl -X POST http://127.0.0.1:8000/v1/chat/completions     -H 'Content-Type: application/json'     -d '{ "model": "model", "temperature": 0.0, "messages": [ { "role": "user", "content": "write a very long article" } ], "stream": false, "max_tokens": 512, "min_tokens": 512}'
{"id":"chatcmpl-d2f6f9d66c4b40c0b7dad3e1c9fc3d74","object":"chat.completion","created":1737439295,"model":"model","choices":[{"index":0,"message":{"role":"assistant","content":"The Benefits of Regular Exercise: A Comprehensive Guide\n\nIntroduction\n\nRegular exercise is an essential component of a healthy lifestyle. It not only helps maintain a healthy weight, but also has numerous other benefits for both physical and mental health. In this article, we will explore the various benefits of regular exercise, including improved cardiovascular health, increased strength and endurance, better mental health, and a reduced risk of chronic diseases. We will also discuss the different types of exercises and how to incorporate them into your routine for maximum effectiveness.\n\nImproved Cardiovascular Health\n\nRegular exercise strengthens the heart and improves its efficiency, reducing the risk of heart disease. Engaging in activities like brisk walking, running, cycling, or swimming can help lower blood pressure, cholesterol levels, and triglyceride levels. This, in turn, reduces the risk of heart attack and stroke. Additionally, regular exercise can also help maintain a healthy weight, which further reduces the strain on the heart.\n\nIncreased Strength and Endurance\n\nExercise helps build muscle strength and endurance, which is crucial for maintaining independence and mobility as we age. Regular strength training, such as weightlifting or bodyweight exercises, can help increase muscle mass, improve bone density, and enhance overall physical performance. This, in turn, can lead to an increased sense of well-being and confidence.\n\nBetter Mental Health\n\nExercise has been shown to have a positive impact on mental health. Physical activity releases endorphins, which are natural mood-boosting chemicals in the brain. Regular exercise can help reduce symptoms of depression and anxiety, improve self-esteem, and increase overall happiness. Additionally, engaging in activities like yoga or meditation can help reduce stress and promote relaxation.\n\nReduced Risk of Chronic Diseases\n\nRegular exercise can significantly reduce the risk of developing chronic diseases such as type 2 diabetes, certain types of cancer, and osteoporosis. Physical activity helps regulate blood sugar levels, which is particularly beneficial for those with diabetes. Exercise also helps maintain a healthy weight, which reduces the risk of developing these diseases. Furthermore, regular physical activity can improve bone density, reducing the risk of osteoporosis.\n\nIncorporating Exercise into Your Routine\n\nTo maximize the benefits of exercise, it's essential to incorporate a variety of activities into your routine. This can include:\n\n1. Cardiovascular exercises: Activities like running, cycling, or swimming can help improve cardiovascular health and burn calories.\n2. Strength training: Incorporating weightlifting or bodyweight exercises can help build muscle mass, increase bone density, and improve overall physical performance.\n3. Flexibility and balance exercises","tool_calls":[]},"logprobs":null,"finish_reason":"length","stop_reason":null}],"usage":{"prompt_tokens":24,"total_tokens":536,"completion_tokens":512,"prompt_tokens_details":null},"prompt_logprobs":null}
real    0m4.016s
user    0m0.000s
sys     0m0.005s

Maybe we should make the Triton kernel the default MoE GPTQ/AWQ kernel? But I am not sure how to do this: gptq-marlin-moe is part of the gptq-marlin quantization method, so if I change the MoE kernel of the gptq-marlin method, users cannot use gptq-marlin-moe at all. Is that OK?

@mgoin (Member) commented Jan 22, 2025

> I just tested with a small MoE model (https://huggingface.co/Qwen/Qwen1.5-MoE-A2.7B-Chat-GPTQ-Int4), and the Triton kernel seems much faster than the Marlin kernel there as well. Besides, the Marlin kernel seems to generate wrong results for Qwen/Qwen1.5-MoE-A2.7B-Chat-GPTQ-Int4.

Thank you for benchmarking. I think this case is still exercising the scenario where the experts are small, so if you could benchmark a model where the experts are few and large, such as Mixtral 8x7B or 8x22B, I would feel more confident about using this kernel by default.

I think we can clearly land this as-is and treat it as opt-in for the moment. We can follow up later on whether to use it by default in all cases or based on a heuristic.

@mgoin (Member) commented Jan 22, 2025

@jinzhen-lin I tried loading an awq mixtral model and it failed to pass the right kwargs through to AWQMarlin

```
vllm serve hugging-quants/Mixtral-8x7B-Instruct-v0.1-AWQ-INT4 --quantization moe_wna16
...
  File "/home/mgoin/code/vllm/vllm/model_executor/models/mixtral_quant.py", line 197, in __init__
    self.qkv_proj = QKVParallelLinear(
                    ^^^^^^^^^^^^^^^^^^
  File "/home/mgoin/code/vllm/vllm/model_executor/layers/linear.py", line 723, in __init__
    super().__init__(input_size=input_size,
  File "/home/mgoin/code/vllm/vllm/model_executor/layers/linear.py", line 291, in __init__
    super().__init__(input_size, output_size, skip_bias_add, params_dtype,
  File "/home/mgoin/code/vllm/vllm/model_executor/layers/linear.py", line 177, in __init__
    self.quant_method = quant_config.get_quant_method(self,
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mgoin/code/vllm/vllm/model_executor/layers/quantization/moe_wna16.py", line 144, in get_quant_method
    return quant_method_cls(quant_config_cls(self.full_config))
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: AWQMarlinConfig.__init__() missing 3 required positional arguments: 'group_size', 'zero_point', and 'lm_head_quantized'
```

Since many of the config initializers don't have extra kwargs, you will likely need to check the named args of each initializer and prune the full_config before passing it in unpacked.
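A generic sketch of that kind of pruning (hypothetical helper, not vLLM code; the fix described below ended up using `from_config` instead):

```python
import inspect
from typing import Any, Dict

def prune_config(cls: type, full_config: Dict[str, Any]) -> Dict[str, Any]:
    """Keep only the keys that cls.__init__ actually declares."""
    params = inspect.signature(cls.__init__).parameters
    return {k: v for k, v in full_config.items() if k in params}

# e.g. quant_config_cls(**prune_config(quant_config_cls, self.full_config))
```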

@jinzhen-lin (Contributor, Author)

> @jinzhen-lin I tried loading an awq mixtral model and it failed to pass the right kwargs through to AWQMarlin
>
> `TypeError: AWQMarlinConfig.__init__() missing 3 required positional arguments: 'group_size', 'zero_point', and 'lm_head_quantized'`
>
> Since many of the config initializers don't have extra kwargs, you will likely need to check the named args of each initializer and prune the full_config before passing it in unpacked.

Sorry, a commit from several hours ago introduced this bug. It should be `quant_config_cls.from_config(self.full_config)`.
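For clarity, the corrected call in `moe_wna16.py` would then read:

```python
# vllm/model_executor/layers/quantization/moe_wna16.py, get_quant_method()
return quant_method_cls(quant_config_cls.from_config(self.full_config))
```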

@mgoin (Member) commented Jan 22, 2025

Thank you, it seems to work fine now.

I ran a 128/128 benchmark at 10 QPS for the Mixtral AWQ model on H100 and found that the Marlin kernels were more performant. In the future we could add a heuristic to choose the kernel based on the model configuration, but for now let's keep this kernel opt-in.

awq_marlin

vllm serve hugging-quants/Mixtral-8x7B-Instruct-v0.1-AWQ-INT4 --disable-log-requests

python benchmarks/benchmark_serving.py --backend openai-chat --base-url http://0.0.0.0:8000/v1 --endpoint /chat/completions --model hugging-quants/Mixtral-8x7B-Instruct-v0.1-AWQ-INT4 --dataset-name random --random-input-len 128 --random-output-len 128  --num_prompts 300 --request-rate 10
INFO 01-22 17:07:43 __init__.py:179] Automatically detected platform cuda.
Namespace(backend='openai-chat', base_url='http://0.0.0.0:8000/v1', host='localhost', port=8000, endpoint='/chat/completions', dataset=None, dataset_name='random', dataset_path=None, max_concurrency=None, model='hugging-quants/Mixtral-8x7B-Instruct-v0.1-AWQ-INT4', tokenizer=None, best_of=1, use_beam_search=False, num_prompts=300, logprobs=None, request_rate=10.0, burstiness=1.0, seed=0, trust_remote_code=False, disable_tqdm=False, profile=False, save_result=False, metadata=None, result_dir=None, result_filename=None, ignore_eos=False, percentile_metrics='ttft,tpot,itl', metric_percentiles='99', goodput=None, sonnet_input_len=550, sonnet_output_len=150, sonnet_prefix_len=200, sharegpt_output_len=None, random_input_len=128, random_output_len=128, random_range_ratio=1.0, random_prefix_len=0, hf_subset=None, hf_split=None, hf_output_len=None, tokenizer_mode='auto')
Starting initial single prompt test run...
Initial test run completed. Starting main benchmark run...
Traffic request rate: 10.0
Burstiness factor: 1.0 (Poisson process)
Maximum request concurrency: None
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 300/300 [00:34<00:00,  8.73it/s]
============ Serving Benchmark Result ============
Successful requests:                     300       
Benchmark duration (s):                  34.37     
Total input tokens:                      38400     
Total generated tokens:                  27905     
Request throughput (req/s):              8.73      
Output token throughput (tok/s):         811.82    
Total Token throughput (tok/s):          1928.96   
---------------Time to First Token----------------
Mean TTFT (ms):                          101.11    
Median TTFT (ms):                        97.63     
P99 TTFT (ms):                           270.17    
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          72.93     
Median TPOT (ms):                        81.64     
P99 TPOT (ms):                           94.08     
---------------Inter-token Latency----------------
Mean ITL (ms):                           72.38     
Median ITL (ms):                         49.60     
P99 ITL (ms):                            309.09    
==================================================

moe_wna16

vllm serve hugging-quants/Mixtral-8x7B-Instruct-v0.1-AWQ-INT4 --disable-log-requests --quantization moe_wna16

python benchmarks/benchmark_serving.py --backend openai-chat --base-url http://0.0.0.0:8000/v1 --endpoint /chat/completions --model hugging-quants/Mixtral-8x7B-Instruct-v0.1-AWQ-INT4 --dataset-name random --random-input-len 128 --random-output-len 128  --num_prompts 300 --request-rate 10
INFO 01-22 17:04:51 __init__.py:179] Automatically detected platform cuda.
Namespace(backend='openai-chat', base_url='http://0.0.0.0:8000/v1', host='localhost', port=8000, endpoint='/chat/completions', dataset=None, dataset_name='random', dataset_path=None, max_concurrency=None, model='hugging-quants/Mixtral-8x7B-Instruct-v0.1-AWQ-INT4', tokenizer=None, best_of=1, use_beam_search=False, num_prompts=300, logprobs=None, request_rate=10.0, burstiness=1.0, seed=0, trust_remote_code=False, disable_tqdm=False, profile=False, save_result=False, metadata=None, result_dir=None, result_filename=None, ignore_eos=False, percentile_metrics='ttft,tpot,itl', metric_percentiles='99', goodput=None, sonnet_input_len=550, sonnet_output_len=150, sonnet_prefix_len=200, sharegpt_output_len=None, random_input_len=128, random_output_len=128, random_range_ratio=1.0, random_prefix_len=0, hf_subset=None, hf_split=None, hf_output_len=None, tokenizer_mode='auto')
Starting initial single prompt test run...
Initial test run completed. Starting main benchmark run...
Traffic request rate: 10.0
Burstiness factor: 1.0 (Poisson process)
Maximum request concurrency: None
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 300/300 [00:37<00:00,  8.02it/s]
============ Serving Benchmark Result ============
Successful requests:                     300       
Benchmark duration (s):                  37.40     
Total input tokens:                      38400     
Total generated tokens:                  28139     
Request throughput (req/s):              8.02      
Output token throughput (tok/s):         752.29    
Total Token throughput (tok/s):          1778.90   
---------------Time to First Token----------------
Mean TTFT (ms):                          207.58    
Median TTFT (ms):                        193.52    
P99 TTFT (ms):                           433.49    
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          140.24    
Median TPOT (ms):                        142.47    
P99 TPOT (ms):                           218.78    
---------------Inter-token Latency----------------
Mean ITL (ms):                           137.56    
Median ITL (ms):                         85.32     
P99 ITL (ms):                            715.01    
==================================================

@casper-hansen (Contributor)

> Thank you, it seems to work fine now.
>
> I ran a 128/128 benchmark at 10 QPS for the Mixtral AWQ model on H100 and found that the Marlin kernels were more performant. In the future we could add a heuristic to choose the kernel based on the model configuration, but for now let's keep this kernel opt-in.

This makes sense, since Mixtral has few experts; excited to get this into main to test it out! The main thing I see optimized here is the number of kernel launches, so it should still be more performant at a higher number of experts. I am not sure where the exact threshold is, but 32 or 64 experts is probably a good minimum.
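A minimal sketch of what such an opt-in/heuristic dispatch could look like (hypothetical helper and threshold, not part of this PR):

```python
from typing import Optional

# Hypothetical dispatch: Marlin launches roughly one GEMM per expert, while the
# Triton fused MoE kernel uses a single launch, so prefer Triton once the
# expert count is large. The threshold is a guess, not a measured crossover.
MOE_TRITON_MIN_EXPERTS = 32

def use_triton_fused_moe(num_experts: int, override: Optional[bool] = None) -> bool:
    if override is not None:   # e.g. --quantization moe_wna16 forces the Triton path
        return override
    return num_experts >= MOE_TRITON_MIN_EXPERTS
```

Whether 32 is the right cutoff would need the kind of benchmarks above, run on models with larger expert counts.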

@mgoin added the quantization, moe, and ready labels on Jan 22, 2025