[feature] Use different translation engines #13
Oh, that's very unfortunate. Luckily, I did add support for switching to a different instance, so you might have some luck with switching to one of these:
As for other engines, there are a few choices I might implement some day in this app's rewrite:
https://translate.plausibility.cloud
All four work. Did you make a new xenial .click with that? For now I made a webber xenial shortcut with the first instance 🙂 Thank you
In the settings, I recall there was a field to change the instance. Just replace the host and it should work just fine.
I tested it. I switched to the first two instances and it doesn't work for me (Volla), even after deleting the cache and data and after shutting the phone down and switching it on again. It's as if the app has no internet. Edit: I also tested on a BQ E5 HD. It doesn't work there either.
When will this great app be usable again?
I miss the UT Translate app too. How can I help you?

Speaking as someone who has trained neural machine translation models: Facebook's smallest NLLB-200 model would be too big for most phones. The model alone is 2.46 GB, and it requires PyTorch. (Most NMT frameworks require either PyTorch or TensorFlow.)

As suggested, it may be possible to quantize a translation model with GGML. But no one has done so yet, so we would have to do the work ourselves. With that in mind, I doubt that we'll be running translation models on our UT phones any time soon.

Instead, the only alternative (in the short run) is to have the app make calls to a server where we would host our own model. Assuming that nobody is willing to sponsor a dedicated GPU for our model, translation times would be slow. But if our model were small (covering only the most popular languages), the wait might be tolerable.
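To see why quantization matters here, a back-of-envelope sizing sketch (assuming the 2.46 GB figure above is fp32 weights at 4 bytes per parameter; the per-bit-width sizes are rough estimates that ignore any quantization metadata overhead):

```python
FP32_BYTES = 4  # bytes per fp32 parameter

def quantized_size_gb(fp32_size_gb: float, bits: int) -> float:
    """Rough model size after quantizing fp32 weights down to `bits` bits."""
    return fp32_size_gb * bits / (FP32_BYTES * 8)

nllb_smallest_gb = 2.46  # smallest NLLB-200 checkpoint, per the comment above

# Implied parameter count, assuming the checkpoint is almost all fp32 weights
params_millions = nllb_smallest_gb * 1e9 / FP32_BYTES / 1e6
print(f"params: roughly {params_millions:.0f}M")

# Estimated sizes at common quantization widths
for bits in (16, 8, 4):
    print(f"{bits}-bit: roughly {quantized_size_gb(nllb_smallest_gb, bits):.2f} GB")
```

Even at 4 bits the weights alone would still be around 0.3 GB, before counting runtime memory, which supports the point that on-device NMT on UT phones is a stretch today.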
Healthy instances (today)
@walking-octopus, if @thedaviddelta is unreachable or busy, maybe you could maintain the Lingva project too.
Thank you! :-) One note to add: when entering the instance in the app's settings, one must include "/api/v1" at the end.
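A minimal sketch of what such a call looks like, assuming the instance URL already ends in `/api/v1` and the common Lingva-style `GET /api/v1/{source}/{target}/{query}` endpoint returning JSON with a `translation` field (check your instance's documentation before relying on this shape):

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

def lingva_url(instance: str, source: str, target: str, text: str) -> str:
    """Build the request URL; `instance` must already end with /api/v1."""
    return f"{instance.rstrip('/')}/{source}/{target}/{quote(text)}"

def translate(instance: str, source: str, target: str, text: str) -> str:
    # Network call; expects a JSON body like {"translation": "..."}
    with urlopen(lingva_url(instance, source, target, text)) as resp:
        return json.load(resp)["translation"]

# Building the URL needs no network:
print(lingva_url("https://translate.napizia.com/api/v1", "en", "it", "Hello friend"))
```

The `instance.rstrip('/')` guard means the URL comes out the same whether or not the user typed a trailing slash in the settings field.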
One more translation engine. My own: https://translate.napizia.com/api/v1/ Now I can translate sentences into Sicilian with the app. Thanks! :-)
Thanks for the reminder!! I had forgotten that! Lingva instances actually work well in Translate-UT.
Currently Lingva is down and we can't use your app.
It would be great if your app could evolve into a translator that can use many translation engines. For example, I very much like using https://f-droid.org/en/packages/com.bnyro.translate/ on another AOSP Android smartphone.