from TTS.tts.configs.xtts_config import XttsConfig
from TTS.tts.models.xtts import Xtts

config = XttsConfig()
config.load_json("/path/to/xtts/config.json")
model = Xtts.init_from_config(config)
model.load_checkpoint(config, checkpoint_dir="/path/to/xtts/", eval=True)
model.cuda()

outputs = model.synthesize(
    "It took me quite a long time to develop a voice and now that I have it I am not going to be silent.",
    config,
    speaker_wav="/data/TTS-public/_refclips/3.wav",
    gpt_cond_len=3,
    language="en",
)
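`synthesize` returns a dict whose `"wav"` entry holds the generated waveform. A minimal sketch for writing it to disk with only the standard library, assuming the waveform is a mono float array in [-1, 1] at XTTS's 24 kHz output rate:

```python
import wave

import numpy as np


def save_wav(wav: np.ndarray, path: str, sample_rate: int = 24000) -> None:
    # Convert the float waveform to 16-bit PCM and write a mono WAV file.
    pcm = (np.clip(wav, -1.0, 1.0) * 32767).astype(np.int16)
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)  # 16-bit samples
        f.setframerate(sample_rate)
        f.writeframes(pcm.tobytes())


# e.g. save_wav(outputs["wav"], "output.wav")
```

The sample rate and the `outputs["wav"]` layout are assumptions here; check them against your model's `config.audio` settings.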
Running an XTTS streaming server with Docker
CUDA 12.1:
docker run --gpus=all -e COQUI_TOS_AGREED=1 -v /path/to/model/folder:/app/tts_models --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest-cuda121
CUDA 11.8:
docker run --gpus=all -e COQUI_TOS_AGREED=1 -v /path/to/model/folder:/app/tts_models --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest
CPU:
docker run -e COQUI_TOS_AGREED=1 -v /path/to/model/folder:/app/tts_models --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest-cpu
Make sure the model files are placed under /path/to/model/folder.
Once the Docker container is running, you can test that it is working correctly.
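A quick liveness check from Python, using only the standard library. The `/docs` path is an assumption based on the FastAPI default (the xtts-streaming-server is built on FastAPI), not something documented by XTTS itself:

```python
import urllib.request


def server_is_up(base_url: str = "http://localhost:8000") -> bool:
    # A 200 response from the interactive docs page indicates the
    # container started and the HTTP server is accepting requests.
    try:
        with urllib.request.urlopen(f"{base_url}/docs", timeout=2) as resp:
            return resp.status == 200
    except OSError:
        # Covers connection refused, timeouts, and HTTP errors.
        return False
```

Port 8000 matches the `-p 8000:80` mapping in the docker run commands above; adjust `base_url` if you mapped a different port.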
Example of calling the xtts-streaming-server
git clone https://github.com/coqui-ai/xtts-streaming-server
cd xtts-streaming-server/test