Is it possible to run this in half precision, to allow higher image resolutions with limited VRAM?
I've tried to do it (similarly to how Stable Diffusion handles it), roughly as in the sketch below:
- added `cnn = cnn.half()` after calling `loadCaffemodel`
- replaced every `FloatTensor` with `HalfTensor` in neural_style.py
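A sketch of those changes (illustrative, not the exact diff; the `loadCaffemodel` arguments are written from memory and may not match the current signature):

```python
import torch
from CaffeLoader import loadCaffemodel  # helper from this repo

# Load the model as usual, then cast its weights to float16.
# Arguments here are hypothetical placeholders.
cnn, layerList = loadCaffemodel('models/vgg19-d01eb7cb.pth', 'max', 0, False)
cnn = cnn.half()

# ...and every torch.FloatTensor / torch.cuda.FloatTensor became the
# corresponding HalfTensor, e.g. the dtype used for the image tensors:
dtype = torch.cuda.HalfTensor   # was torch.cuda.FloatTensor
```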
It is running, but the loss calculation is not working:

```
Running optimization with L-BFGS
Iteration 10 / 1000
Content 1 loss: nan
Style 1 loss: nan
Style 2 loss: nan
Style 3 loss: nan
Style 4 loss: nan
Style 5 loss: nan
Total loss: nan
```
Any idea how to fix that?
Well, I guess something exceeds the maximum value of float16, which is fairly small: 65504. For example, a Gram matrix entry is approximately a sum of squared features, which easily becomes very large, and the MSE loss between two Gram matrices can have an even larger magnitude. A possible solution could be an alternative loss, e.g. the BN loss from the MMD paper, but you would have to adjust the style weight to get results similar to those obtained with Gram matrices.
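For intuition, here is a tiny sketch (hypothetical magnitudes, not code from this repo) of how a single Gram entry blows past the float16 range and turns the loss into nan:

```python
import torch

hw = 128 * 128                          # spatial positions of one feature map
feat = torch.full((hw,), 50.0, dtype=torch.float16)

# One Gram entry is a sum of feature products over all positions:
gram_entry = (feat * feat).sum()        # ~ 50 * 50 * 16384 ≈ 4.1e7 > 65504
print(gram_entry)                       # tensor(inf, dtype=torch.float16)

# Once both Grams overflow to inf, their difference inside the MSE is nan,
# which is exactly what shows up in the loss printout above.
print(gram_entry - gram_entry)          # tensor(nan, dtype=torch.float16)
```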
Another option would be to find out what the maximum feature values are on average, and then apply a fixed scaling before computing the Gram matrix. That way the magnitude after squaring is essentially bounded by 1. You'd have to make sure the loss weights are adjusted accordingly, though; something like the sketch below.
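A hedged sketch of that idea (the constant and function name are made up, not from neural_style.py):

```python
import torch

FEATURE_SCALE = 256.0   # assumed rough upper bound on the feature magnitudes

def scaled_gram(feat):
    # feat: (C, H*W) feature map flattened over spatial dimensions.
    f = feat / FEATURE_SCALE            # squared values now stay around <= 1
    return torch.mm(f, f.t())           # entries bounded roughly by H*W

# Each Gram entry shrinks by FEATURE_SCALE**2, so the MSE between two Grams
# shrinks by FEATURE_SCALE**4; raise the style weight by that factor to
# reproduce the original balance between content and style.
```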