Tensors of the same index must be on the same device and the same dtype except step tensors that can be CPU and float32 notwithstanding #315
Comments
Please reply if you managed to solve this issue! The error is raised on optimizer.step() when running the fine-tune script as described in the README on a freshly cloned repository. I suspect this is caused by a change in newer PyTorch versions; see pytorch/pytorch#127197.

{'loss': 1.0453, 'grad_norm': 1.2838969230651855, 'learning_rate': 1.5557084630007206e-05, 'epoch': 1.0}
File "/home/h/miniconda3/lib/python3.9/site-packages/torch/optim/lr_scheduler.py", line 75, in wrapper
File "/home/h/miniconda3/lib/python3.9/site-packages/torch/optim/optimizer.py", line 385, in wrapper
File "/home/h/miniconda3/lib/python3.9/site-packages/torch/optim/optimizer.py", line 76, in _use_grad
File "/home/h/miniconda3/lib/python3.9/site-packages/torch/optim/adamw.py", line 516, in _multi_tensor_adamw
File "/home/h/miniconda3/lib/python3.9/site-packages/torch/utils/_foreach_utils.py", line 38, in _group_tensors_by_device_and_dtype
RuntimeError: Tensors of the same index must be on the same device and the same dtype except step tensors that can be CPU and float32 notwithstanding
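A workaround that is sometimes suggested for this grouping error (not confirmed in this thread) is to force the single-tensor optimizer path by passing foreach=False to AdamW, and to sanity-check that every parameter and its gradient share one device and dtype before stepping. This is only a minimal sketch: the model, shapes, and learning rate below are placeholders, not taken from the repo's fine-tune script.

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(16, 16).to(device)  # stand-in for the actual fine-tuned model
# foreach=False disables the multi-tensor AdamW path that calls
# _group_tensors_by_device_and_dtype, where the error above is raised.
optimizer = torch.optim.AdamW(model.parameters(), lr=1.5e-5, foreach=False)

loss = model(torch.randn(4, 16, device=device)).sum()
loss.backward()

# Sanity check: a parameter whose gradient lives on a different device or dtype
# is exactly the mismatch the RuntimeError complains about.
for name, p in model.named_parameters():
    if p.grad is not None and (p.grad.device != p.device or p.grad.dtype != p.dtype):
        print(f"mismatch in {name}: grad {p.grad.device}/{p.grad.dtype} vs param {p.device}/{p.dtype}")

optimizer.step()
```

If the script uses the Hugging Face Trainer, the optimizer is built internally, so applying this would likely mean constructing the optimizer yourself and handing it over via the Trainer's optimizers argument.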
For me the issue was solved by downgrading the transformers package to transformers==4.28.1 (and using torch==2.5.1).
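If you try the downgrade, a quick way to confirm the environment actually picked up the pinned versions (package names are the standard PyPI ones; the pip command in the comment is just one way to install them):

```python
# e.g. installed with: pip install transformers==4.28.1 torch==2.5.1
import torch
import transformers

print("transformers:", transformers.__version__)  # expected: 4.28.1
print("torch:", torch.__version__)                # expected: 2.5.1
```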
I ran into this error while training as well.