paddlespeech.cli.tts.infer module

class paddlespeech.cli.tts.infer.TTSExecutor[source]

Bases: BaseExecutor

Methods

disable_task_loggers()

Disable all loggers in the current task.

execute(argv)

Command line entry.

get_input_source(input_)

Get task input source from command line input.

infer(text[, lang, am, spk_id])

Run model inference and store the result in self.output.

postprocess([output])

Output postprocess and return results.

postprocess_onnx([output])

Output postprocess and return results.

preprocess(input, *args, **kwargs)

Input preprocess and return paddle.Tensor stored in self._inputs.

process_task_results(input_, results[, ...])

Handle task results and redirect stdout if needed.

show_rtf(info)

Calculate the RTF (real-time factor) of the current task and show the results.

__call__

infer_onnx

execute(argv: List[str]) → bool[source]

Command line entry.

infer(text: str, lang: str = 'zh', am: str = 'fastspeech2_csmsc', spk_id: int = 0)[source]

Run model inference and store the result in self.output.

infer_onnx(text: str, lang: str = 'zh', am: str = 'fastspeech2_csmsc', spk_id: int = 0)[source]

postprocess(output: str = 'output.wav') → Union[str, PathLike][source]

Output postprocess and return results. This method gets the model output from self._outputs and converts it into human-readable results.

Returns:

Union[str, os.PathLike]: Human-readable results such as texts and audio files.

postprocess_onnx(output: str = 'output.wav') → Union[str, PathLike][source]

Output postprocess and return results. This method gets the model output from self._outputs and converts it into human-readable results.

Returns:

Union[str, os.PathLike]: Human-readable results such as texts and audio files.

preprocess(input: Any, *args, **kwargs)[source]

Input preprocess and return a paddle.Tensor stored in self._inputs. Input content can be a text (tts), a file (asr, cls), a stream (not supported yet), or anything else needed.

Args:

input (Any): Input text/file/stream or other content.