paddlespeech.t2s.models.vits.vits module

VITS module

class paddlespeech.t2s.models.vits.vits.VITS(idim: int, odim: int, sampling_rate: int = 22050, generator_type: str = 'vits_generator', generator_params: Dict[str, Any] = {'decoder_channels': 512, 'decoder_kernel_size': 7, 'decoder_resblock_dilations': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'decoder_resblock_kernel_sizes': [3, 7, 11], 'decoder_upsample_kernel_sizes': [16, 16, 4, 4], 'decoder_upsample_scales': [8, 8, 2, 2], 'flow_base_dilation': 1, 'flow_dropout_rate': 0.0, 'flow_flows': 4, 'flow_kernel_size': 5, 'flow_layers': 4, 'global_channels': -1, 'hidden_channels': 192, 'langs': None, 'posterior_encoder_base_dilation': 1, 'posterior_encoder_dropout_rate': 0.0, 'posterior_encoder_kernel_size': 5, 'posterior_encoder_layers': 16, 'posterior_encoder_stacks': 1, 'segment_size': 32, 'spk_embed_dim': None, 'spks': None, 'stochastic_duration_predictor_dds_conv_layers': 3, 'stochastic_duration_predictor_dropout_rate': 0.5, 'stochastic_duration_predictor_flows': 4, 'stochastic_duration_predictor_kernel_size': 3, 'text_encoder_activation_type': 'swish', 'text_encoder_attention_dropout_rate': 0.0, 'text_encoder_attention_heads': 2, 'text_encoder_blocks': 6, 'text_encoder_conformer_kernel_size': 7, 'text_encoder_dropout_rate': 0.1, 'text_encoder_ffn_expand': 4, 'text_encoder_normalize_before': True, 'text_encoder_positional_dropout_rate': 0.0, 'text_encoder_positional_encoding_layer_type': 'rel_pos', 'text_encoder_positionwise_conv_kernel_size': 1, 'text_encoder_positionwise_layer_type': 'conv1d', 'text_encoder_self_attention_layer_type': 'rel_selfattn', 'use_conformer_conv_in_text_encoder': True, 'use_macaron_style_in_text_encoder': True, 'use_only_mean_in_flow': True, 'use_weight_norm_in_decoder': True, 'use_weight_norm_in_flow': True, 'use_weight_norm_in_posterior_encoder': True}, discriminator_type: str = 'hifigan_multi_scale_multi_period_discriminator', discriminator_params: Dict[str, Any] = {'follow_official_norm': False, 'period_discriminator_params': {'bias': True, 'channels': 32, 'downsample_scales': [3, 3, 3, 3, 1], 'in_channels': 1, 'kernel_sizes': [5, 3], 'max_downsample_channels': 1024, 'nonlinear_activation': 'leakyrelu', 'nonlinear_activation_params': {'negative_slope': 0.1}, 'out_channels': 1, 'use_spectral_norm': False, 'use_weight_norm': True}, 'periods': [2, 3, 5, 7, 11], 'scale_discriminator_params': {'bias': True, 'channels': 128, 'downsample_scales': [2, 2, 4, 4, 1], 'in_channels': 1, 'kernel_sizes': [15, 41, 5, 3], 'max_downsample_channels': 1024, 'max_groups': 16, 'nonlinear_activation': 'leakyrelu', 'nonlinear_activation_params': {'negative_slope': 0.1}, 'out_channels': 1, 'use_spectral_norm': False, 'use_weight_norm': True}, 'scale_downsample_pooling': 'AvgPool1D', 'scale_downsample_pooling_params': {'kernel_size': 4, 'padding': 2, 'stride': 2}, 'scales': 1}, cache_generator_outputs: bool = True)[source]

Bases: Layer

VITS module (generator + discriminator). This is a module of VITS described in Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech (https://arxiv.org/abs/2106.06103).
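
A minimal instantiation sketch may help (an illustration, not part of the documented API surface): it uses the default generator/discriminator configs from the signature above, and the idim (text vocabulary size) and odim (number of feature bins) values are placeholders.

    import paddle

    from paddlespeech.t2s.models.vits.vits import VITS

    # Minimal sketch: build VITS with the default generator and
    # discriminator configs. idim and odim are illustrative placeholders;
    # a real model would load trained weights before use.
    model = VITS(idim=68, odim=80, sampling_rate=22050)
    model.eval()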

Methods

__call__(*inputs, **kwargs)

Call self as a function.

add_parameter(name, parameter)

Adds a Parameter instance.

add_sublayer(name, sublayer)

Adds a sub Layer instance.

apply(fn)

Applies fn recursively to every sublayer (as returned by .sublayers()) as well as self.

buffers([include_sublayers])

Returns a list of all buffers from current layer and its sub-layers.

children()

Returns an iterator over immediate children layers.

clear_gradients()

Clear the gradients of all parameters for this layer.

create_parameter(shape[, attr, dtype, ...])

Create parameters for this layer.

create_tensor([name, persistable, dtype])

Create Tensor for this layer.

create_variable([name, persistable, dtype])

Create Tensor for this layer.

eval()

Sets this Layer and all its sublayers to evaluation mode.

extra_repr()

Extra representation of this layer; override it to provide a custom representation for your own layer.

forward(text, text_lengths, feats, feats_lengths)

Perform generator forward.

full_name()

Full name for this layer, composed of name_scope + "/" + MyLayer.__class__.__name__

inference(text[, feats, sids, spembs, lids, ...])

Run inference.

load_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

named_buffers([prefix, include_sublayers])

Returns an iterator over all buffers in the Layer, yielding tuple of name and Tensor.

named_children()

Returns an iterator over immediate children layers, yielding both the name of the layer as well as the layer itself.

named_parameters([prefix, include_sublayers])

Returns an iterator over all parameters in the Layer, yielding tuple of name and parameter.

named_sublayers([prefix, include_self, ...])

Returns an iterator over all sublayers in the Layer, yielding tuple of name and sublayer.

parameters([include_sublayers])

Returns a list of all Parameters from current layer and its sub-layers.

register_buffer(name, tensor[, persistable])

Registers a tensor as buffer into the layer.

register_forward_post_hook(hook)

Register a forward post-hook for Layer.

register_forward_pre_hook(hook)

Register a forward pre-hook for Layer.

set_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

set_state_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

state_dict([destination, include_sublayers, ...])

Get all parameters and persistable buffers of current layer and its sub-layers.

sublayers([include_self])

Returns a list of sub layers.

to([device, dtype, blocking])

Cast the parameters and buffers of this Layer to the given device, dtype, and blocking setting.

to_static_state_dict([destination, ...])

Get all parameters and buffers of current layer and its sub-layers.

train()

Sets this Layer and all its sublayers to training mode.

voice_conversion(feats[, sids_src, ...])

Run voice conversion.

backward

register_state_dict_hook

reset_parameters

forward(text: Tensor, text_lengths: Tensor, feats: Tensor, feats_lengths: Tensor, sids: Optional[Tensor] = None, spembs: Optional[Tensor] = None, lids: Optional[Tensor] = None, forward_generator: bool = True) → Dict[str, Any][source]

Perform generator forward.

Args:
    text (Tensor): Text index tensor (B, T_text).
    text_lengths (Tensor): Text length tensor (B,).
    feats (Tensor): Feature tensor (B, T_feats, aux_channels).
    feats_lengths (Tensor): Feature length tensor (B,).
    sids (Optional[Tensor]): Speaker index tensor (B,) or (B, 1).
    spembs (Optional[Tensor]): Speaker embedding tensor (B, spk_embed_dim).
    lids (Optional[Tensor]): Language index tensor (B,) or (B, 1).
    forward_generator (bool): Whether to forward the generator.

Returns:
    Dict[str, Any]: Outputs of the generator forward when forward_generator is True, otherwise outputs of the discriminator forward.
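
A hedged sketch of a training-style forward call, reusing the model instance from the instantiation sketch above. The random tensors are placeholders whose shapes follow the Args list (T_feats is chosen larger than the default segment_size of 32 so segment slicing has room); this is an illustration, not verified training code.

    import paddle

    # Placeholder batch: B utterances, token ids below idim=68,
    # feature frames with odim=80 bins.
    B, T_text, T_feats, odim = 2, 10, 65, 80
    text = paddle.randint(0, 68, shape=[B, T_text], dtype='int64')
    text_lengths = paddle.to_tensor([T_text, T_text - 2], dtype='int64')
    feats = paddle.randn([B, T_feats, odim])
    feats_lengths = paddle.to_tensor([T_feats, T_feats - 5], dtype='int64')

    # Generator pass; set forward_generator=False for the
    # discriminator pass of GAN training.
    gen_out = model(text, text_lengths, feats, feats_lengths,
                    forward_generator=True)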

inference(text: Tensor, feats: Optional[Tensor] = None, sids: Optional[Tensor] = None, spembs: Optional[Tensor] = None, lids: Optional[Tensor] = None, durations: Optional[Tensor] = None, noise_scale: float = 0.667, noise_scale_dur: float = 0.8, alpha: float = 1.0, max_len: Optional[int] = None, use_teacher_forcing: bool = False) → Dict[str, Tensor][source]

Run inference.

Args:
    text (Tensor): Input text index tensor (T_text,).
    feats (Optional[Tensor]): Feature tensor (T_feats, aux_channels).
    sids (Optional[Tensor]): Speaker index tensor (1,).
    spembs (Optional[Tensor]): Speaker embedding tensor (spk_embed_dim,).
    lids (Optional[Tensor]): Language index tensor (1,).
    durations (Optional[Tensor]): Ground-truth duration tensor (T_text,).
    noise_scale (float): Noise scale value for the flow.
    noise_scale_dur (float): Noise scale value for the duration predictor.
    alpha (float): Alpha parameter to control the speed of generated speech.
    max_len (Optional[int]): Maximum length.
    use_teacher_forcing (bool): Whether to use teacher forcing.

Returns:
    Dict[str, Tensor]:
        * wav (Tensor): Generated waveform tensor (T_wav,).
        * att_w (Tensor): Monotonic attention weight tensor (T_feats, T_text).
        * duration (Tensor): Predicted duration tensor (T_text,).
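
A hedged inference sketch, reusing model from the earlier instantiation sketch; the token ids below are placeholders, since a real pipeline would obtain them from the text frontend.

    import paddle

    model.eval()
    text = paddle.to_tensor([5, 12, 7, 33, 2], dtype='int64')  # (T_text,)
    with paddle.no_grad():
        out = model.inference(text, noise_scale=0.667, alpha=1.0)
    wav = out["wav"]            # generated waveform (T_wav,)
    duration = out["duration"]  # predicted durations (T_text,)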

reset_parameters()[source]

voice_conversion(feats: Tensor, sids_src: Optional[Tensor] = None, sids_tgt: Optional[Tensor] = None, spembs_src: Optional[Tensor] = None, spembs_tgt: Optional[Tensor] = None, lids: Optional[Tensor] = None) → Tensor[source]

Run voice conversion.

Args:
    feats (Tensor): Feature tensor (T_feats, aux_channels).
    sids_src (Optional[Tensor]): Speaker index tensor of the source feature (1,).
    sids_tgt (Optional[Tensor]): Speaker index tensor of the target feature (1,).
    spembs_src (Optional[Tensor]): Speaker embedding tensor of the source feature (spk_embed_dim,).
    spembs_tgt (Optional[Tensor]): Speaker embedding tensor of the target feature (spk_embed_dim,).
    lids (Optional[Tensor]): Language index tensor (1,).

Returns:
    Tensor: Generated waveform tensor (T_wav,).
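
A hedged voice-conversion sketch; it assumes a multi-speaker model (spks set in generator_params), and the features and speaker ids are placeholders.

    import paddle

    feats = paddle.randn([120, 80])  # (T_feats, aux_channels), placeholder
    sids_src = paddle.to_tensor([0], dtype='int64')
    sids_tgt = paddle.to_tensor([3], dtype='int64')
    with paddle.no_grad():
        wav = model.voice_conversion(
            feats, sids_src=sids_src, sids_tgt=sids_tgt)
    # wav: generated waveform tensor (T_wav,)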

class paddlespeech.t2s.models.vits.vits.VITSInference(model)[source]

Bases: Layer

Methods

__call__(*inputs, **kwargs)

Call self as a function.

add_parameter(name, parameter)

Adds a Parameter instance.

add_sublayer(name, sublayer)

Adds a sub Layer instance.

apply(fn)

Applies fn recursively to every sublayer (as returned by .sublayers()) as well as self.

buffers([include_sublayers])

Returns a list of all buffers from current layer and its sub-layers.

children()

Returns an iterator over immediate children layers.

clear_gradients()

Clear the gradients of all parameters for this layer.

create_parameter(shape[, attr, dtype, ...])

Create parameters for this layer.

create_tensor([name, persistable, dtype])

Create Tensor for this layer.

create_variable([name, persistable, dtype])

Create Tensor for this layer.

eval()

Sets this Layer and all its sublayers to evaluation mode.

extra_repr()

Extra representation of this layer; override it to provide a custom representation for your own layer.

forward(text[, sids])

Run VITS inference on the given text and return the generated waveform.

full_name()

Full name for this layer, composed of name_scope + "/" + MyLayer.__class__.__name__

load_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

named_buffers([prefix, include_sublayers])

Returns an iterator over all buffers in the Layer, yielding tuple of name and Tensor.

named_children()

Returns an iterator over immediate children layers, yielding both the name of the layer as well as the layer itself.

named_parameters([prefix, include_sublayers])

Returns an iterator over all parameters in the Layer, yielding tuple of name and parameter.

named_sublayers([prefix, include_self, ...])

Returns an iterator over all sublayers in the Layer, yielding tuple of name and sublayer.

parameters([include_sublayers])

Returns a list of all Parameters from current layer and its sub-layers.

register_buffer(name, tensor[, persistable])

Registers a tensor as buffer into the layer.

register_forward_post_hook(hook)

Register a forward post-hook for Layer.

register_forward_pre_hook(hook)

Register a forward pre-hook for Layer.

set_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

set_state_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

state_dict([destination, include_sublayers, ...])

Get all parameters and persistable buffers of current layer and its sub-layers.

sublayers([include_self])

Returns a list of sub layers.

to([device, dtype, blocking])

Cast the parameters and buffers of this Layer to the given device, dtype, and blocking setting.

to_static_state_dict([destination, ...])

Get all parameters and buffers of current layer and its sub-layers.

train()

Sets this Layer and all its sublayers to training mode.

backward

register_state_dict_hook

forward(text, sids=None)[source]

Run VITS inference on the given text and return the generated waveform.

Parameters:
    text (Tensor): Input text index tensor (T_text,).
    sids (Optional[Tensor]): Speaker index tensor (1,).
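
A hedged sketch of wrapping a trained model with VITSInference for a text-in, waveform-out call; the dims and token ids are placeholders, and real use would load trained weights into the wrapped model first.

    import paddle

    from paddlespeech.t2s.models.vits.vits import VITS, VITSInference

    # Placeholder dims; load a trained checkpoint in practice.
    vits = VITS(idim=68, odim=80)
    vits.eval()
    synthesizer = VITSInference(vits)

    text = paddle.to_tensor([5, 12, 7, 33, 2], dtype='int64')
    wav = synthesizer(text)  # generated waveform tensor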