Inference time changes based on how the model is loaded #238

Answered by eginhard
fentresspaul61B asked this question in Q&A

Then, in the actual TTS Python code I add this:

import os

import torch
from TTS.api import TTS

device = "cuda" if torch.cuda.is_available() else "cpu"

LOCAL_PATH_DOCKER = "/root/.local/share/tts/tts_models--multilingual--multi-dataset--xtts_v2"

if os.path.isdir(LOCAL_PATH_DOCKER):
    print("Model file exists")
    tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")  # no .to(device) here
else:
    print("CANNOT FIND MODEL FILE")
    tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to(device)

You didn't add .to(device) in the branch that runs inside Docker, so it's probably slow because the model runs on the CPU?
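
A minimal sketch of that fix, assuming the container actually has GPU access (e.g. it was started with --gpus all, which the thread does not state): move the model to the device in both cases, so the GPU is used whenever CUDA is available and the code still falls back to the CPU otherwise.

import torch
from TTS.api import TTS

# Pick the GPU if CUDA is visible inside the container, otherwise the CPU
device = "cuda" if torch.cuda.is_available() else "cpu"

# Load XTTS v2 and move it to the chosen device in every code path
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to(device)
print(f"XTTS model loaded on {device}")

If torch.cuda.is_available() still returns False inside Docker, the slowdown is a container/GPU configuration issue rather than a model-loading one.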

Answer selected by fentresspaul61B