RuntimeError: Failed to find native CUDA module #33

RuntimeError: Failed to find native CUDA module, make sure that you compiled the code with K2_WITH_CUDA.

Comments
Could you describe how you installed fast_rnnt?
I used pip to install fast_rnnt. I have now installed k2, and the problem is solved by using the corresponding function from k2.
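(For reference, a minimal diagnostic sketch, assuming a standard pip install of PyTorch and fast_rnnt. The error in this issue typically means the installed extension was built without CUDA support, even though PyTorch itself can see the GPU.)

```python
# Diagnostic sketch (assumptions: standard PyTorch install and the pip-installed
# fast_rnnt wheel). If torch sees the GPU but the extension was built CPU-only,
# running the loss on CUDA tensors raises "Failed to find native CUDA module".
import torch

print("torch version :", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("CUDA toolkit  :", torch.version.cuda)

import fast_rnnt
print("fast_rnnt from:", fast_rnnt.__file__)
```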
Hi, we had the same error after successfully building fast_rnnt for AMD with ROCm 5.4 and correctly installed PyTorch 2.0.1 and torchaudio 0.15.2. We want to use only fast_rnnt, without k2. We installed it by building from source.
It seems that ROCm isn't supported in the build.
@bene-ges Basically, if PyTorch can run on ROCm, fast_rnnt can also run on it. Will have a look at this issue. Thanks!
But the core of fast_rnnt is the CUDA code, no? And I believe ROCm does not use CUDA, so wouldn't it require a rewrite to support that?
@danpovey, ROCm can compile CUDA code into an AMD binary. Most projects just add the ROCm compile commands, as PyTorch does, so the PyTorch build system can serve as an example of the right solution. Docs: example of converting CUDA code to ROCm code and compiling it (matrix-cuda is just an example of CUDA code).
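(To illustrate the point above: a ROCm build of PyTorch exposes the HIP backend through the same torch.cuda API, which is why hipified "CUDA" extensions can work on AMD GPUs. A small sketch, assuming torch was installed from the ROCm wheels.)

```python
# Sketch: how a ROCm build of PyTorch reports itself.
# On ROCm, the HIP backend is surfaced through the torch.cuda API.
import torch

print("CUDA API available:", torch.cuda.is_available())  # True on ROCm builds too
print("torch.version.cuda:", torch.version.cuda)          # None on ROCm builds
print("torch.version.hip :", torch.version.hip)           # set (e.g. a 5.4.x string) on ROCm builds
if torch.cuda.is_available():
    print("device name:", torch.cuda.get_device_name(0))
```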
Another useful link on porting CUDA (almost all the notation is identical).
I can help with testing on AMD if needed.
OK, that's interesting. If it's possible for you to add support for ROCm in our build system (which I think is not entirely trivial), then we'd appreciate that very much. This kind of thing will no doubt be used more frequently in the future.
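(A sketch of the mechanism the commenters refer to, not fast_rnnt's actual build: fast_rnnt builds with CMake, and the file names below are placeholders. When built against a ROCm PyTorch, torch.utils.cpp_extension.CUDAExtension runs hipify on the CUDA sources automatically, so the same extension definition can target AMD GPUs.)

```python
# Sketch only: placeholder file names, not fast_rnnt's real build (which uses CMake).
# Against a ROCm build of PyTorch, CUDAExtension hipifies the .cu sources
# automatically, so this "CUDA" extension compiles for AMD GPUs as well.
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

setup(
    name="fast_rnnt_ext_sketch",
    ext_modules=[
        CUDAExtension(
            name="fast_rnnt_ext_sketch",
            sources=[
                "mutual_information.cpp",  # placeholder C++ binding file
                "mutual_information.cu",   # placeholder CUDA kernel file
            ],
        )
    ],
    cmdclass={"build_ext": BuildExtension},
)
```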