paddlespeech.t2s.models.vits.generator module

Generator module in VITS.

This code is based on https://github.com/jaywalnut310/vits.

class paddlespeech.t2s.models.vits.generator.VITSGenerator(vocabs: int, aux_channels: int = 513, hidden_channels: int = 192, spks: Optional[int] = None, langs: Optional[int] = None, spk_embed_dim: Optional[int] = None, global_channels: int = -1, segment_size: int = 32, text_encoder_attention_heads: int = 2, text_encoder_ffn_expand: int = 4, text_encoder_blocks: int = 6, text_encoder_positionwise_layer_type: str = 'conv1d', text_encoder_positionwise_conv_kernel_size: int = 1, text_encoder_positional_encoding_layer_type: str = 'rel_pos', text_encoder_self_attention_layer_type: str = 'rel_selfattn', text_encoder_activation_type: str = 'swish', text_encoder_normalize_before: bool = True, text_encoder_dropout_rate: float = 0.1, text_encoder_positional_dropout_rate: float = 0.0, text_encoder_attention_dropout_rate: float = 0.0, text_encoder_conformer_kernel_size: int = 7, use_macaron_style_in_text_encoder: bool = True, use_conformer_conv_in_text_encoder: bool = True, decoder_kernel_size: int = 7, decoder_channels: int = 512, decoder_upsample_scales: List[int] = [8, 8, 2, 2], decoder_upsample_kernel_sizes: List[int] = [16, 16, 4, 4], decoder_resblock_kernel_sizes: List[int] = [3, 7, 11], decoder_resblock_dilations: List[List[int]] = [[1, 3, 5], [1, 3, 5], [1, 3, 5]], use_weight_norm_in_decoder: bool = True, posterior_encoder_kernel_size: int = 5, posterior_encoder_layers: int = 16, posterior_encoder_stacks: int = 1, posterior_encoder_base_dilation: int = 1, posterior_encoder_dropout_rate: float = 0.0, use_weight_norm_in_posterior_encoder: bool = True, flow_flows: int = 4, flow_kernel_size: int = 5, flow_base_dilation: int = 1, flow_layers: int = 4, flow_dropout_rate: float = 0.0, use_weight_norm_in_flow: bool = True, use_only_mean_in_flow: bool = True, stochastic_duration_predictor_kernel_size: int = 3, stochastic_duration_predictor_dropout_rate: float = 0.5, stochastic_duration_predictor_flows: int = 4, stochastic_duration_predictor_dds_conv_layers: int = 3)[source]

Bases: Layer

Generator module in VITS. This is a module of VITS described in Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech (https://arxiv.org/abs/2106.06103). As the text encoder, a Conformer architecture, which contains additional convolution layers, is used instead of the relative positional Transformer.
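
A minimal construction sketch (hedged: vocabs=100 is a hypothetical vocabulary size that must match your tokenizer; every other argument falls back to the defaults shown in the signature above):

    from paddlespeech.t2s.models.vits.generator import VITSGenerator

    # Hypothetical vocabulary size; the default aux_channels=513 corresponds
    # to linear-spectrogram features from a 1024-point FFT.
    generator = VITSGenerator(vocabs=100)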

Methods

__call__(*inputs, **kwargs)

Call self as a function.

add_parameter(name, parameter)

Adds a Parameter instance.

add_sublayer(name, sublayer)

Adds a sub Layer instance.

apply(fn)

Applies fn recursively to every sublayer (as returned by .sublayers()) as well as self.

buffers([include_sublayers])

Returns a list of all buffers from the current layer and its sub-layers.

children()

Returns an iterator over immediate children layers.

clear_gradients()

Clear the gradients of all parameters for this layer.

create_parameter(shape[, attr, dtype, ...])

Create parameters for this layer.

create_tensor([name, persistable, dtype])

Create Tensor for this layer.

create_variable([name, persistable, dtype])

Create Tensor for this layer.

eval()

Sets this Layer and all its sublayers to evaluation mode.

extra_repr()

Extra representation of this layer; override this method to customize the representation of your own layer.

forward(text, text_lengths, feats, feats_lengths)

Calculate forward propagation (full signature and argument details below).

full_name()

Full name for this layer, composed of name_scope + "/" + MyLayer.__class__.__name__.

inference(text, text_lengths[, feats, ...])

Run inference (full signature and argument details below).

load_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

named_buffers([prefix, include_sublayers])

Returns an iterator over all buffers in the Layer, yielding tuple of name and Tensor.

named_children()

Returns an iterator over immediate children layers, yielding both the name of the layer as well as the layer itself.

named_parameters([prefix, include_sublayers])

Returns an iterator over all parameters in the Layer, yielding tuple of name and parameter.

named_sublayers([prefix, include_self, ...])

Returns an iterator over all sublayers in the Layer, yielding tuple of name and sublayer.

parameters([include_sublayers])

Returns a list of all Parameters from the current layer and its sub-layers.

register_buffer(name, tensor[, persistable])

Registers a tensor as buffer into the layer.

register_forward_post_hook(hook)

Register a forward post-hook for Layer.

register_forward_pre_hook(hook)

Register a forward pre-hook for Layer.

set_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

set_state_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

state_dict([destination, include_sublayers, ...])

Get all parameters and persistable buffers of the current layer and its sub-layers.

sublayers([include_self])

Returns a list of sub layers.

to([device, dtype, blocking])

Cast the parameters and buffers of this Layer according to the given device, dtype, and blocking.

to_static_state_dict([destination, ...])

Get all parameters and buffers of the current layer and its sub-layers.

train()

Sets this Layer and all its sublayers to training mode.

voice_conversion([feats, feats_lengths, ...])

Run voice conversion (full signature and argument details below).

backward

register_state_dict_hook

forward(text: Tensor, text_lengths: Tensor, feats: Tensor, feats_lengths: Tensor, sids: Optional[Tensor] = None, spembs: Optional[Tensor] = None, lids: Optional[Tensor] = None) → Tuple[Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Tuple[Tensor, Tensor, Tensor, Tensor, Tensor, Tensor]][source]

Calculate forward propagation.

Args:
    text (Tensor): Text index tensor (B, T_text).
    text_lengths (Tensor): Text length tensor (B,).
    feats (Tensor): Feature tensor (B, aux_channels, T_feats).
    feats_lengths (Tensor): Feature length tensor (B,).
    sids (Optional[Tensor]): Speaker index tensor (B,) or (B, 1).
    spembs (Optional[Tensor]): Speaker embedding tensor (B, spk_embed_dim).
    lids (Optional[Tensor]): Language index tensor (B,) or (B, 1).

Returns:
    Tensor: Waveform tensor (B, 1, segment_size * upsample_factor).
    Tensor: Duration negative log-likelihood (NLL) tensor (B,).
    Tensor: Monotonic attention weight tensor (B, 1, T_feats, T_text).
    Tensor: Segments start index tensor (B,).
    Tensor: Text mask tensor (B, 1, T_text).
    Tensor: Feature mask tensor (B, 1, T_feats).
    Tuple[Tensor, Tensor, Tensor, Tensor, Tensor, Tensor]:
      • Tensor: Posterior encoder hidden representation (B, H, T_feats).
      • Tensor: Flow hidden representation (B, H, T_feats).
      • Tensor: Expanded text encoder projected mean (B, H, T_feats).
      • Tensor: Expanded text encoder projected scale (B, H, T_feats).
      • Tensor: Posterior encoder projected mean (B, H, T_feats).
      • Tensor: Posterior encoder projected scale (B, H, T_feats).
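
A minimal training-side sketch of forward (assumptions: a model built with the defaults above, random dummy tensors in place of a real batch, and T_feats at least segment_size so a segment can be sliced):

    import paddle
    from paddlespeech.t2s.models.vits.generator import VITSGenerator

    generator = VITSGenerator(vocabs=100)  # hypothetical vocabulary size

    B, T_text, T_feats = 2, 10, 64
    text = paddle.randint(0, 100, shape=[B, T_text])          # (B, T_text)
    text_lengths = paddle.to_tensor([T_text, T_text - 2])     # (B,)
    feats = paddle.randn([B, 513, T_feats])                   # (B, aux_channels, T_feats)
    feats_lengths = paddle.to_tensor([T_feats, T_feats - 8])  # (B,)

    outs = generator(text, text_lengths, feats, feats_lengths)
    wav, dur_nll, attn, start_idxs, x_mask, y_mask, stats = outs
    # With the default decoder_upsample_scales [8, 8, 2, 2], the upsample
    # factor is 256, so wav has shape (B, 1, 32 * 256).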

inference(text: Tensor, text_lengths: Tensor, feats: Optional[Tensor] = None, feats_lengths: Optional[Tensor] = None, sids: Optional[Tensor] = None, spembs: Optional[Tensor] = None, lids: Optional[Tensor] = None, dur: Optional[Tensor] = None, noise_scale: float = 0.667, noise_scale_dur: float = 0.8, alpha: float = 1.0, max_len: Optional[int] = None, use_teacher_forcing: bool = False) → Tuple[Tensor, Tensor, Tensor][source]

Run inference.

Args:
    text (Tensor): Input text index tensor (B, T_text).
    text_lengths (Tensor): Text length tensor (B,).
    feats (Optional[Tensor]): Feature tensor (B, aux_channels, T_feats).
    feats_lengths (Optional[Tensor]): Feature length tensor (B,).
    sids (Optional[Tensor]): Speaker index tensor (B,) or (B, 1).
    spembs (Optional[Tensor]): Speaker embedding tensor (B, spk_embed_dim).
    lids (Optional[Tensor]): Language index tensor (B,) or (B, 1).
    dur (Optional[Tensor]): Ground-truth duration tensor (B, T_text). If provided, duration prediction is skipped (i.e., teacher forcing).
    noise_scale (float): Noise scale parameter for the flow.
    noise_scale_dur (float): Noise scale parameter for the duration predictor.
    alpha (float): Alpha parameter to control the speed of generated speech.
    max_len (Optional[int]): Maximum length of the acoustic feature sequence.
    use_teacher_forcing (bool): Whether to use teacher forcing.

Returns:
    Tensor: Generated waveform tensor (B, T_wav).
    Tensor: Monotonic attention weight tensor (B, T_feats, T_text).
    Tensor: Duration tensor (B, T_text).
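
A minimal synthesis sketch for inference (assumptions: the same hypothetical vocabs=100 model as above; inference expects a batch of one):

    import paddle
    from paddlespeech.t2s.models.vits.generator import VITSGenerator

    generator = VITSGenerator(vocabs=100)  # hypothetical vocabulary size
    generator.eval()

    text = paddle.randint(0, 100, shape=[1, 12])  # (1, T_text)
    text_lengths = paddle.to_tensor([12])         # (1,)

    with paddle.no_grad():
        wav, attn, dur = generator.inference(
            text,
            text_lengths,
            noise_scale=0.667,    # flow sampling temperature (default)
            noise_scale_dur=0.8,  # duration-predictor temperature (default)
            alpha=1.0)            # scales predicted durations; > 1.0 slows the speech
    # wav: (1, T_wav), attn: (1, T_feats, T_text), dur: (1, T_text)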

voice_conversion(feats: Optional[Tensor] = None, feats_lengths: Optional[Tensor] = None, sids_src: Optional[Tensor] = None, sids_tgt: Optional[Tensor] = None, spembs_src: Optional[Tensor] = None, spembs_tgt: Optional[Tensor] = None, lids: Optional[Tensor] = None) → Tensor[source]

Run voice conversion.

Args:
    feats (Optional[Tensor]): Feature tensor (B, aux_channels, T_feats).
    feats_lengths (Optional[Tensor]): Feature length tensor (B,).
    sids_src (Optional[Tensor]): Speaker index tensor of the source feature (B,) or (B, 1).
    sids_tgt (Optional[Tensor]): Speaker index tensor of the target feature (B,) or (B, 1).
    spembs_src (Optional[Tensor]): Speaker embedding tensor of the source feature (B, spk_embed_dim).
    spembs_tgt (Optional[Tensor]): Speaker embedding tensor of the target feature (B, spk_embed_dim).
    lids (Optional[Tensor]): Language index tensor (B,) or (B, 1).

Returns:
    Tensor: Generated waveform tensor (B, T_wav).
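
A minimal voice-conversion sketch (assumptions: speaker-ID conditioning requires a multi-speaker model, so the spks=10 and global_channels=256 settings here are hypothetical, and random features stand in for a real source utterance):

    import paddle
    from paddlespeech.t2s.models.vits.generator import VITSGenerator

    # Hypothetical multi-speaker configuration; speaker embedding tables
    # need global_channels > 0.
    generator = VITSGenerator(vocabs=100, spks=10, global_channels=256)
    generator.eval()

    feats = paddle.randn([1, 513, 50])      # (B, aux_channels, T_feats)
    feats_lengths = paddle.to_tensor([50])  # (B,)

    with paddle.no_grad():
        wav = generator.voice_conversion(
            feats=feats,
            feats_lengths=feats_lengths,
            sids_src=paddle.to_tensor([0]),  # source speaker index
            sids_tgt=paddle.to_tensor([1]))  # target speaker index
    # wav: (1, T_wav)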