
[Performance]: Discussion about optimizing _prepare_model_input_tensors #6684

Closed · phantomlei3 opened this issue Jul 23, 2024 · 4 comments
Labels: performance (Performance-related issues), stale

phantomlei3 commented Jul 23, 2024

Misc discussion on performance

Following #6164, _prepare_model_input_tensors was refactored to improve performance. I investigated its runtime across different batch sizes, input sequence lengths, output sequence lengths, and tensor parallel sizes by running benchmark_latency.py. I found that the duration of _prepare_model_input_tensors is directly proportional to the batch size (i.e., the number of seq_groups), which suggests an obvious way to speed it up: parallelizing the loop in _prepare_model_input_tensors. Here are my questions about the follow-ups mentioned in #6164:

  1. What is the planned design for "Parallelize the loop for seq_group_metadata in seq_group_metadata_list" to speed things up? Would it use a thread pool?
  2. Are we going to implement a CUDA kernel to "Remove the loop for seq_id in seq_ids in ModelRunnerInputBuilder._add_seq_group()"?
  3. When will these follow-up optimizations be available? I would like to know whether I can contribute.
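
As a side note, the linear relationship described above is easy to reproduce with a small stand-alone timing harness (a sketch; `fake_prepare` is a hypothetical stand-in for `_prepare_model_input_tensors`, not vLLM code):

```python
import time

def fake_prepare(seq_groups):
    # Stand-in for _prepare_model_input_tensors: O(len(seq_groups))
    # pure-Python work, one pass per seq_group.
    return [sum(range(100)) for _ in seq_groups]

def time_prepare(batch_size, repeats=50):
    # Average wall-clock time of one fake_prepare call at this batch size.
    groups = list(range(batch_size))
    start = time.perf_counter()
    for _ in range(repeats):
        fake_prepare(groups)
    return (time.perf_counter() - start) / repeats

# Timing a few batch sizes should show roughly proportional growth.
timings = {bs: time_prepare(bs) for bs in (8, 32, 128)}
```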
@phantomlei3 phantomlei3 added the performance Performance-related issues label Jul 23, 2024

phantomlei3 commented Jul 23, 2024

@comaniac I hope you can answer these questions.

@comaniac (Collaborator) commented:

Hey @phantomlei3 thanks for your interest and questions!

> What is the planned design for "Parallelize the loop for seq_group_metadata in seq_group_metadata_list" to speed things up? Would it use a thread pool?

A thread pool may not help because Python doesn't yet have true multi-threading (the GIL) and this loop is not I/O-bound. Ideally we should consider multiprocessing, but that definitely needs some investigation.
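
A multiprocessing version of that loop could be sketched roughly like this (all names here are hypothetical stand-ins, not the actual vLLM implementation; the fork start method is assumed, so this runs on Linux/macOS only):

```python
from multiprocessing import get_context

def build_seq_group_inputs(num_tokens):
    # Hypothetical stand-in for the per-seq_group CPU work done inside
    # _prepare_model_input_tensors: build token positions for one group.
    return list(range(num_tokens))

def prepare_inputs_parallel(token_counts, workers=4):
    # A fork-based process pool sidesteps the GIL for CPU-bound Python
    # work; "fork" is requested explicitly so child processes inherit
    # the parent's state without re-importing the script.
    with get_context("fork").Pool(processes=workers) as pool:
        return pool.map(build_seq_group_inputs, token_counts)
```

Whether this wins in practice depends on how the per-seq_group cost compares to the pickling and process-communication overhead, which is presumably the investigation mentioned above.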

> Are we going to implement a CUDA kernel to "Remove the loop for seq_id in seq_ids in ModelRunnerInputBuilder._add_seq_group()"?

After the refactoring, this loop no longer includes any GPU operations; it only processes inputs in Python, and .build() is now in charge of moving them to the GPU. The main optimization I have in mind here is leveraging efficient CPU tensor operations, such as numpy, to speed it up.
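
For illustration, here is a minimal sketch of that idea: replacing a per-sequence Python loop that builds position ids with a vectorized numpy equivalent (both functions are hypothetical stand-ins, not the actual ModelRunnerInputBuilder code):

```python
import numpy as np

def build_positions_loop(seq_lens):
    # Per-sequence Python loop, analogous in spirit to the loop over
    # seq_id in seq_ids inside _add_seq_group().
    out = []
    for n in seq_lens:
        out.extend(range(n))
    return out

def build_positions_numpy(seq_lens):
    # Vectorized equivalent: positions for all sequences in one shot.
    lens = np.asarray(seq_lens, dtype=np.int64)
    # Start offset of each token's sequence, repeated per token ...
    starts = np.repeat(np.cumsum(lens) - lens, lens)
    # ... subtracted from a flat arange to recover per-sequence positions.
    return (np.arange(lens.sum()) - starts).tolist()
```

For example, `[2, 3]` yields `[0, 1, 0, 1, 2]` either way; the numpy version does the work in a few array operations instead of a Python-level loop over every token.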

> When will these follow-up optimizations be available? I would like to know whether I can contribute.

There's no concrete timeline yet, but your contributions are definitely welcome. Are you in the vLLM Discord? Please ping me there (same ID as my GitHub handle) and we can discuss details.

This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!

@github-actions github-actions bot added the stale label Oct 25, 2024
This issue has been automatically closed due to inactivity. Please feel free to reopen if you feel it is still relevant. Thank you!

@github-actions github-actions bot closed this as not planned Won't fix, can't repro, duplicate, stale Nov 24, 2024