Fix diffusers attention register (#1131)
This PR does the following:

- [x] Fix: #1105
- [x] 修复 onediff_quant 的 from_pretrained 兼容 diffusers >= 0.29.1:
siliconflow/onediff-quant#39

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
  - Enhanced attention mechanism to automatically revert to a standard
    processor when using specific processor types from the `diffusers`
    library.
  - Introduced new command-line arguments for the text-to-image script:
    `--compile_text_encoder` and `--graph`, allowing for more flexible model
    compilation options (see the sketch below).

- **Bug Fixes**
  - Improved handling of processor types to ensure better compatibility
    and functionality within the attention processing framework.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
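
The diff below truncates the `parse_args` body, so the new flags themselves are not visible. As a hedged illustration only, they might be declared along these lines with `argparse`; the option names come from the release notes above, while the actions, defaults, and help text are assumptions:

```python
# Hypothetical sketch of the new text-to-image script flags.
# Only the option names are taken from the release notes; everything else is assumed.
import argparse

parser = argparse.ArgumentParser(description="text-to-image example")
parser.add_argument(
    "--compile_text_encoder",
    action="store_true",
    help="(assumed) also compile the text encoder, not only the UNet",
)
parser.add_argument(
    "--graph",
    action="store_true",
    help="(assumed) enable graph-mode compilation",
)
args = parser.parse_args()
```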
lixiang007666 authored Nov 4, 2024
1 parent 445a61e commit a6eafe5
Showing 2 changed files with 10 additions and 1 deletion.
@@ -83,7 +83,6 @@ def parse_args():
     args.model,
     torch_dtype=torch.float16,
     use_safetensors=True,
-    variant="fp16",
 )
 pipe.to("cuda")

@@ -361,6 +361,16 @@ def forward(
         # here we simply pass along all tensors to the selected processor class
         # For standard processors that are defined here, `**cross_attention_kwargs` is empty

+        from diffusers.models.attention_processor import (
+            AttnProcessor as DiffusersAttnProcessor,
+            AttnProcessor2_0 as DiffusersAttnProcessor2_0,
+        )
+
+        if isinstance(self.processor, DiffusersAttnProcessor) or isinstance(
+            self.processor, DiffusersAttnProcessor2_0
+        ):
+            self.set_processor(AttnProcessor())
+
         return self.processor(
             self,
             hidden_states,
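
For context, here is a hedged, standalone sketch (not part of the commit) of the pattern the added hunk implements: detect diffusers' stock processors on an attention module and swap in a local implementation before dispatching. `LocalAttnProcessor` and `revert_to_local_processor` are hypothetical names used only for illustration; the real change uses onediff's own `AttnProcessor`.

```python
# Standalone illustration of the revert-on-forward pattern, assuming diffusers >= 0.29.
from diffusers.models.attention_processor import (
    Attention,
    AttnProcessor as DiffusersAttnProcessor,
    AttnProcessor2_0 as DiffusersAttnProcessor2_0,
)


class LocalAttnProcessor:
    """Placeholder for the project's own processor implementation."""

    def __call__(self, attn, hidden_states, *args, **kwargs):
        # Delegate to the stock processor so the sketch stays functional.
        return DiffusersAttnProcessor()(attn, hidden_states, *args, **kwargs)


def revert_to_local_processor(attn: Attention) -> None:
    # Replace only processors that diffusers registered by default;
    # leave any user-supplied custom processor untouched.
    if isinstance(attn.processor, (DiffusersAttnProcessor, DiffusersAttnProcessor2_0)):
        attn.set_processor(LocalAttnProcessor())


attn = Attention(query_dim=64)  # diffusers registers a stock processor by default
revert_to_local_processor(attn)
assert isinstance(attn.processor, LocalAttnProcessor)
```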
