paddlespeech.t2s.datasets.vocoder_batch_fn module
- class paddlespeech.t2s.datasets.vocoder_batch_fn.Clip(batch_max_steps=20480, hop_size=256, aux_context_window=0)[source]
Bases: object
Collate functor for training vocoders.
Methods
__call__(batch): Convert into batch tensors.
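A collate functor like Clip turns a list of (audio, features) pairs into fixed-size batch tensors by cropping each pair to batch_max_steps audio samples, keeping the waveform and the feature frames aligned through hop_size. The following is a minimal illustrative sketch of that idea in NumPy, not the PaddleSpeech implementation; the function name and the omission of aux_context_window handling are simplifications.

```python
import numpy as np

def clip_collate(batch, batch_max_steps=20480, hop_size=256):
    """Illustrative sketch of a vocoder collate function (not the real Clip).

    batch: list of (audio, mel) pairs; audio has shape (T,), mel (T', C),
    with T == T' * hop_size.
    """
    batch_max_frames = batch_max_steps // hop_size
    audios, mels = [], []
    for audio, mel in batch:
        # Choose a random start frame so the crop fits inside the utterance.
        max_start = mel.shape[0] - batch_max_frames
        start_frame = np.random.randint(0, max_start + 1)
        start_step = start_frame * hop_size
        audios.append(audio[start_step:start_step + batch_max_steps])
        mels.append(mel[start_frame:start_frame + batch_max_frames])
    # Stack into dense batch arrays: (B, batch_max_steps) and (B, frames, C).
    return np.stack(audios), np.stack(mels)
```

Because every crop has the same length, the results stack into rectangular arrays regardless of the original utterance lengths.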
- class paddlespeech.t2s.datasets.vocoder_batch_fn.Clip_static(batch_max_steps=20480, hop_size=256, aux_context_window=0)[source]
Bases: Clip
Collate functor for training vocoders.
Methods
__call__(batch): Convert into batch tensors.
- class paddlespeech.t2s.datasets.vocoder_batch_fn.WaveRNNClip(mode: str = 'RAW', batch_max_steps: int = 4500, hop_size: int = 300, aux_context_window: int = 2, bits: int = 9, mu_law: bool = True)[source]
Bases: Clip
Methods
__call__(batch): Convert into batch tensors. Args: batch (list): list of tuples, each pairing an audio clip and its features; audio has shape (T,), features have shape (T', C).
to_quant
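WaveRNNClip's mu_law and bits parameters indicate that the target waveform is quantized into 2**bits integer classes, optionally with mu-law companding. The sketch below shows standard mu-law encode/decode in NumPy as an illustration of that step; the function names are mine, not the PaddleSpeech API.

```python
import numpy as np

def encode_mu_law(x, mu=512):
    """Map a float waveform in [-1, 1] to integer classes in [0, mu).

    mu = 2**bits (512 for bits=9). Illustrative, not the library's method.
    """
    mu = mu - 1
    # Mu-law companding compresses large amplitudes before quantizing.
    fx = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    return np.floor((fx + 1) / 2 * mu + 0.5).astype(np.int64)

def decode_mu_law(y, mu=512):
    """Inverse companding: integer classes back to a float waveform."""
    mu = mu - 1
    fx = 2 * y / mu - 1
    return np.sign(fx) / mu * ((1 + mu) ** np.abs(fx) - 1)
```

Companding allocates more quantization levels to small amplitudes, where speech waveforms spend most of their time, so 9-bit mu-law sounds far better than 9-bit linear quantization.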