
Trainer compute_loss signature mismatch with newer transformers version #159

Open
maxjeblick opened this issue Nov 4, 2024 · 3 comments


@maxjeblick

In the current transformers version (4.46.1), the signature of `compute_loss` changed, causing issues when importing and using
`from tevatron.retriever.trainer import TevatronTrainer as Trainer`. (The transformers change is probably due to the recent fix w.r.t. gradient accumulation.)

Changing the loss signature to
`def compute_loss(self, model, inputs, return_outputs=False, num_items_in_batch=None):`
in the trainer fixes the issue. This seems to be backward compatible with older transformers versions.
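A minimal sketch of the idea, using stand-in names rather than the actual Tevatron code: newer transformers releases pass an extra `num_items_in_batch` argument to `compute_loss`, so accepting it with a default value keeps the override compatible with both old and new versions. The `fake_model` below is purely illustrative; real Hugging Face model outputs expose a `.loss` attribute in the same way.

```python
import types


class TevatronTrainer:
    """Stand-in for the real transformers.Trainer subclass."""

    def compute_loss(self, model, inputs, return_outputs=False,
                     num_items_in_batch=None):
        # num_items_in_batch is accepted (and ignored here) so that both
        # the old call style (without it) and the new one (with it) work.
        outputs = model(**inputs)
        loss = outputs.loss
        return (loss, outputs) if return_outputs else loss


# Illustrative fake model: returns an object with a .loss attribute.
def fake_model(**inputs):
    return types.SimpleNamespace(loss=sum(inputs.values()))


trainer = TevatronTrainer()
# Old-style call (pre-4.46 transformers did not pass num_items_in_batch):
old_style = trainer.compute_loss(fake_model, {"a": 1.0, "b": 2.0})
# New-style call (4.46+ passes the extra keyword argument):
new_style = trainer.compute_loss(fake_model, {"a": 1.0, "b": 2.0},
                                 num_items_in_batch=8)
```

Both call styles resolve to the same method, which is why the one-line signature change is enough.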

@liyongkang123
Contributor

Thanks! I submitted pull request #161 here, and I hope it can be merged ASAP.

@liyongkang123
Contributor

Done

@MXueguang
Contributor

Sorry for the late response. Thank you, @liyongkang123, for fixing the issue.
