paddlespeech.t2s.models.ernie_sat.ernie_sat module

class paddlespeech.t2s.models.ernie_sat.ernie_sat.ErnieSAT(idim: int, odim: int, postnet_layers: int = 5, postnet_filts: int = 5, postnet_chans: int = 256, use_scaled_pos_enc: bool = False, encoder_type: str = 'conformer', decoder_type: str = 'conformer', enc_input_layer: str = 'sega_mlm', enc_pre_speech_layer: int = 0, enc_cnn_module_kernel: int = 7, enc_attention_dim: int = 384, enc_attention_heads: int = 2, enc_linear_units: int = 1536, enc_num_blocks: int = 4, enc_dropout_rate: float = 0.2, enc_positional_dropout_rate: float = 0.2, enc_attention_dropout_rate: float = 0.2, enc_normalize_before: bool = True, enc_macaron_style: bool = True, enc_use_cnn_module: bool = True, enc_selfattention_layer_type: str = 'legacy_rel_selfattn', enc_activation_type: str = 'swish', enc_pos_enc_layer_type: str = 'legacy_rel_pos', enc_positionwise_layer_type: str = 'conv1d', enc_positionwise_conv_kernel_size: int = 3, text_masking: bool = False, dec_cnn_module_kernel: int = 31, dec_attention_dim: int = 384, dec_attention_heads: int = 2, dec_linear_units: int = 1536, dec_num_blocks: int = 4, dec_dropout_rate: float = 0.2, dec_positional_dropout_rate: float = 0.2, dec_attention_dropout_rate: float = 0.2, dec_macaron_style: bool = True, dec_use_cnn_module: bool = True, dec_selfattention_layer_type: str = 'legacy_rel_selfattn', dec_activation_type: str = 'swish', dec_pos_enc_layer_type: str = 'legacy_rel_pos', dec_positionwise_layer_type: str = 'conv1d', dec_positionwise_conv_kernel_size: int = 3, init_type: str = 'xavier_uniform')[source]

Bases: Layer

Methods

__call__(*inputs, **kwargs)

Call self as a function.

add_parameter(name, parameter)

Adds a Parameter instance.

add_sublayer(name, sublayer)

Adds a sub Layer instance.

apply(fn)

Applies fn recursively to every sublayer (as returned by .sublayers()) as well as self.

buffers([include_sublayers])

Returns a list of all buffers from current layer and its sub-layers.

children()

Returns an iterator over immediate children layers.

clear_gradients()

Clear the gradients of all parameters for this layer.

create_parameter(shape[, attr, dtype, ...])

Create parameters for this layer.

create_tensor([name, persistable, dtype])

Create Tensor for this layer.

create_variable([name, persistable, dtype])

Create Tensor for this layer.

eval()

Sets this Layer and all its sublayers to evaluation mode.

extra_repr()

Extra representation of this layer, you can have custom implementation of your own layer.

forward(speech, text, masked_pos, ...)

Defines the computation performed at every call.

full_name()

Full name for this layer, composed by name_scope + "/" + MyLayer.__class__.__name__

load_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

named_buffers([prefix, include_sublayers])

Returns an iterator over all buffers in the Layer, yielding tuple of name and Tensor.

named_children()

Returns an iterator over immediate children layers, yielding both the name of the layer as well as the layer itself.

named_parameters([prefix, include_sublayers])

Returns an iterator over all parameters in the Layer, yielding tuple of name and parameter.

named_sublayers([prefix, include_self, ...])

Returns an iterator over all sublayers in the Layer, yielding tuple of name and sublayer.

parameters([include_sublayers])

Returns a list of all Parameters from current layer and its sub-layers.

register_buffer(name, tensor[, persistable])

Registers a tensor as buffer into the layer.

register_forward_post_hook(hook)

Register a forward post-hook for Layer.

register_forward_pre_hook(hook)

Register a forward pre-hook for Layer.

set_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

set_state_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

state_dict([destination, include_sublayers, ...])

Get all parameters and persistable buffers of current layer and its sub-layers.

sublayers([include_self])

Returns a list of sub layers.

to([device, dtype, blocking])

Cast the parameters and buffers of this Layer to the given device, dtype and blocking.

to_static_state_dict([destination, ...])

Get all parameters and buffers of current layer and its sub-layers.

train()

Sets this Layer and all its sublayers to training mode.

backward

inference

register_state_dict_hook

forward(speech: Tensor, text: Tensor, masked_pos: Tensor, speech_mask: Tensor, text_mask: Tensor, speech_seg_pos: Tensor, text_seg_pos: Tensor)[source]

Defines the computation performed at every call. Should be overridden by all subclasses.

Parameters:

*inputs (tuple): unpacked tuple arguments.
**kwargs (dict): unpacked dict arguments.

inference(speech: Tensor, text: Tensor, masked_pos: Tensor, speech_mask: Tensor, text_mask: Tensor, speech_seg_pos: Tensor, text_seg_pos: Tensor, span_bdy: List[int], use_teacher_forcing: bool = True) Dict[str, Tensor][source]
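Only idim and odim are required to build the model; every other constructor argument above has a default. The following is a minimal, hedged construction-and-call sketch based purely on the signatures and tensor shapes documented on this page: the toy sizes (a 100-symbol phone vocabulary, 80-dim mels, 150 frames, 20 phones) and the dtypes of the dummy tensors are assumptions for illustration, not values taken from a shipped config.

    import paddle
    from paddlespeech.t2s.models.ernie_sat.ernie_sat import ErnieSAT

    # Assumed toy sizes: idim is taken to be the phone-vocabulary size and odim
    # the mel dimension, following the convention of other paddlespeech.t2s models.
    VOCAB, D_MEL, T_MEL, T_PHN = 100, 80, 150, 20

    model = ErnieSAT(idim=VOCAB, odim=D_MEL)   # remaining arguments keep their defaults
    model.eval()

    # Dummy inputs shaped after the forward()/inference() docstrings on this page;
    # the dtypes (float32 features, int64 indices, bool masks) are assumptions.
    speech = paddle.randn([1, T_MEL, D_MEL])
    text = paddle.randint(0, VOCAB, [1, T_PHN], dtype="int64")
    masked_pos = paddle.zeros([1, T_MEL], dtype="int64")
    speech_mask = paddle.ones([1, 1, T_MEL], dtype="bool")
    text_mask = paddle.ones([1, 1, T_PHN], dtype="bool")
    speech_seg_pos = paddle.zeros([1, T_MEL], dtype="int64")
    text_seg_pos = paddle.arange(1, T_PHN + 1, dtype="int64").unsqueeze(0)

    with paddle.no_grad():
        outs = model(speech, text, masked_pos, speech_mask, text_mask,
                     speech_seg_pos, text_seg_pos)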
class paddlespeech.t2s.models.ernie_sat.ernie_sat.ErnieSATInference(normalizer, model)[source]

Bases: Layer

Methods

__call__(*inputs, **kwargs)

Call self as a function.

add_parameter(name, parameter)

Adds a Parameter instance.

add_sublayer(name, sublayer)

Adds a sub Layer instance.

apply(fn)

Applies fn recursively to every sublayer (as returned by .sublayers()) as well as self.

buffers([include_sublayers])

Returns a list of all buffers from current layer and its sub-layers.

children()

Returns an iterator over immediate children layers.

clear_gradients()

Clear the gradients of all parameters for this layer.

create_parameter(shape[, attr, dtype, ...])

Create parameters for this layer.

create_tensor([name, persistable, dtype])

Create Tensor for this layer.

create_variable([name, persistable, dtype])

Create Tensor for this layer.

eval()

Sets this Layer and all its sublayers to evaluation mode.

extra_repr()

Extra representation of this layer, you can have custom implementation of your own layer.

forward(speech, text, masked_pos, ...[, ...])

Defines the computation performed at every call.

full_name()

Full name for this layer, composed by name_scope + "/" + MyLayer.__class__.__name__

load_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

named_buffers([prefix, include_sublayers])

Returns an iterator over all buffers in the Layer, yielding tuple of name and Tensor.

named_children()

Returns an iterator over immediate children layers, yielding both the name of the layer as well as the layer itself.

named_parameters([prefix, include_sublayers])

Returns an iterator over all parameters in the Layer, yielding tuple of name and parameter.

named_sublayers([prefix, include_self, ...])

Returns an iterator over all sublayers in the Layer, yielding tuple of name and sublayer.

parameters([include_sublayers])

Returns a list of all Parameters from current layer and its sub-layers.

register_buffer(name, tensor[, persistable])

Registers a tensor as buffer into the layer.

register_forward_post_hook(hook)

Register a forward post-hook for Layer.

register_forward_pre_hook(hook)

Register a forward pre-hook for Layer.

set_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

set_state_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

state_dict([destination, include_sublayers, ...])

Get all parameters and persistable buffers of current layer and its sub-layers.

sublayers([include_self])

Returns a list of sub layers.

to([device, dtype, blocking])

Cast the parameters and buffers of this Layer to the given device, dtype and blocking.

to_static_state_dict([destination, ...])

Get all parameters and buffers of current layer and its sub-layers.

train()

Sets this Layer and all its sublayers to training mode.

backward

register_state_dict_hook

forward(speech: Tensor, text: Tensor, masked_pos: Tensor, speech_mask: Tensor, text_mask: Tensor, speech_seg_pos: Tensor, text_seg_pos: Tensor, span_bdy: List[int], use_teacher_forcing: bool = True)[source]

Defines the computation performed at every call. Should be overridden by all subclasses.

Parameters:

*inputs (tuple): unpacked tuple arguments.
**kwargs (dict): unpacked dict arguments.
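ErnieSATInference chains a feature normalizer with a trained ErnieSAT model behind one forward() call. A hedged composition sketch follows; ZScore is the z-score normalizer used by other paddlespeech.t2s models, and its use here, like the placeholder statistics, is an assumption for illustration.

    import paddle
    from paddlespeech.t2s.models.ernie_sat.ernie_sat import ErnieSAT, ErnieSATInference
    from paddlespeech.t2s.modules.normalizer import ZScore   # assumed normalizer choice

    model = ErnieSAT(idim=100, odim=80)
    # Load trained weights into `model` here before wrapping it for inference.

    # Placeholder per-dimension mel statistics; real values come from training data.
    mu = paddle.zeros([80])
    std = paddle.ones([80])

    erniesat_inference = ErnieSATInference(normalizer=ZScore(mu, std), model=model)
    erniesat_inference.eval()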

class paddlespeech.t2s.models.ernie_sat.ernie_sat.MLM(odim: int, encoder: Layer, decoder: Optional[Layer], postnet_layers: int = 0, postnet_chans: int = 0, postnet_filts: int = 0, text_masking: bool = False)[source]

Bases: Layer

Methods

__call__(*inputs, **kwargs)

Call self as a function.

add_parameter(name, parameter)

Adds a Parameter instance.

add_sublayer(name, sublayer)

Adds a sub Layer instance.

apply(fn)

Applies fn recursively to every sublayer (as returned by .sublayers()) as well as self.

buffers([include_sublayers])

Returns a list of all buffers from current layer and its sub-layers.

children()

Returns an iterator over immediate children layers.

clear_gradients()

Clear the gradients of all parameters for this layer.

create_parameter(shape[, attr, dtype, ...])

Create parameters for this layer.

create_tensor([name, persistable, dtype])

Create Tensor for this layer.

create_variable([name, persistable, dtype])

Create Tensor for this layer.

eval()

Sets this Layer and all its sublayers to evaluation mode.

extra_repr()

Extra representation of this layer, you can have custom implementation of your own layer.

forward(*inputs, **kwargs)

Defines the computation performed at every call.

full_name()

Full name for this layer, composed by name_scope + "/" + MyLayer.__class__.__name__

inference(speech, text, masked_pos, ...[, ...])

Args:

load_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

named_buffers([prefix, include_sublayers])

Returns an iterator over all buffers in the Layer, yielding tuple of name and Tensor.

named_children()

Returns an iterator over immediate children layers, yielding both the name of the layer as well as the layer itself.

named_parameters([prefix, include_sublayers])

Returns an iterator over all parameters in the Layer, yielding tuple of name and parameter.

named_sublayers([prefix, include_self, ...])

Returns an iterator over all sublayers in the Layer, yielding tuple of name and sublayer.

parameters([include_sublayers])

Returns a list of all Parameters from current layer and its sub-layers.

register_buffer(name, tensor[, persistable])

Registers a tensor as buffer into the layer.

register_forward_post_hook(hook)

Register a forward post-hook for Layer.

register_forward_pre_hook(hook)

Register a forward pre-hook for Layer.

set_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

set_state_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

state_dict([destination, include_sublayers, ...])

Get all parameters and persistable buffers of current layer and its sub-layers.

sublayers([include_self])

Returns a list of sub layers.

to([device, dtype, blocking])

Cast the parameters and buffers of this Layer to the given device, dtype and blocking.

to_static_state_dict([destination, ...])

Get all parameters and buffers of current layer and its sub-layers.

train()

Sets this Layer and all its sublayers to training mode.

backward

register_state_dict_hook

inference(speech: Tensor, text: Tensor, masked_pos: Tensor, speech_mask: Tensor, text_mask: Tensor, speech_seg_pos: Tensor, text_seg_pos: Tensor, span_bdy: List[int], use_teacher_forcing: bool = True) List[Tensor][source]
Args:
    speech (paddle.Tensor): input speech (1, Tmax, D).
    text (paddle.Tensor): input text (1, Tmax2).
    masked_pos (paddle.Tensor): masked positions of the input speech (1, Tmax).
    speech_mask (paddle.Tensor): mask of speech (1, 1, Tmax).
    text_mask (paddle.Tensor): mask of text (1, 1, Tmax2).
    speech_seg_pos (paddle.Tensor): n-th phone of each mel frame, 0 <= n <= Tmax2 (1, Tmax).
    text_seg_pos (paddle.Tensor): n-th phone of each phone, 0 <= n <= Tmax2 (1, Tmax2).
    span_bdy (List[int]): boundary of the masked mel span in the input speech (2,).
    use_teacher_forcing (bool): whether to use teacher forcing.

Returns:
    List[Tensor]:
        e.g. [Tensor(shape=[1, 181, 80]), Tensor(shape=[80, 80]), Tensor(shape=[1, 67, 80])]
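A hedged call sketch for inference(): the speech/text tensors and masks are assumed to be prepared exactly as in the Args list above, and the span boundary values are illustrative.

    # `mlm` is an MLM instance; the input tensors follow the shapes in the Args list.
    outs = mlm.inference(
        speech=speech,
        text=text,
        masked_pos=masked_pos,
        speech_mask=speech_mask,
        text_mask=text_mask,
        speech_seg_pos=speech_seg_pos,
        text_seg_pos=text_seg_pos,
        span_bdy=[60, 140],          # illustrative start/end mel frames of the edited span
        use_teacher_forcing=True)

    for i, seg in enumerate(outs):   # a list of mel segments, as in the example return above
        print(i, seg.shape)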

class paddlespeech.t2s.models.ernie_sat.ernie_sat.MLMDecoder(idim: int, vocab_size: int = 0, pre_speech_layer: int = 0, attention_dim: int = 256, attention_heads: int = 4, linear_units: int = 2048, num_blocks: int = 6, dropout_rate: float = 0.1, positional_dropout_rate: float = 0.1, attention_dropout_rate: float = 0.0, input_layer: str = 'conv2d', normalize_before: bool = True, concat_after: bool = False, positionwise_layer_type: str = 'linear', positionwise_conv_kernel_size: int = 1, macaron_style: bool = False, pos_enc_layer_type: str = 'abs_pos', pos_enc_class=None, selfattention_layer_type: str = 'selfattn', activation_type: str = 'swish', use_cnn_module: bool = False, zero_triu: bool = False, cnn_module_kernel: int = 31, padding_idx: int = -1, stochastic_depth_rate: float = 0.0, text_masking: bool = False)[source]

Bases: MLMEncoder

Methods

__call__(*inputs, **kwargs)

Call self as a function.

add_parameter(name, parameter)

Adds a Parameter instance.

add_sublayer(name, sublayer)

Adds a sub Layer instance.

apply(fn)

Applies fn recursively to every sublayer (as returned by .sublayers()) as well as self.

buffers([include_sublayers])

Returns a list of all buffers from current layer and its sub-layers.

children()

Returns an iterator over immediate children layers.

clear_gradients()

Clear the gradients of all parameters for this layer.

create_parameter(shape[, attr, dtype, ...])

Create parameters for this layer.

create_tensor([name, persistable, dtype])

Create Tensor for this layer.

create_variable([name, persistable, dtype])

Create Tensor for this layer.

eval()

Sets this Layer and all its sublayers to evaluation mode.

extra_repr()

Extra representation of this layer, you can have custom implementation of your own layer.

forward(xs, masks)

Encode input sequence.

full_name()

Full name for this layer, composed by name_scope + "/" + MyLayer.__class__.__name__

load_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

named_buffers([prefix, include_sublayers])

Returns an iterator over all buffers in the Layer, yielding tuple of name and Tensor.

named_children()

Returns an iterator over immediate children layers, yielding both the name of the layer as well as the layer itself.

named_parameters([prefix, include_sublayers])

Returns an iterator over all parameters in the Layer, yielding tuple of name and parameter.

named_sublayers([prefix, include_self, ...])

Returns an iterator over all sublayers in the Layer, yielding tuple of name and sublayer.

parameters([include_sublayers])

Returns a list of all Parameters from current layer and its sub-layers.

register_buffer(name, tensor[, persistable])

Registers a tensor as buffer into the layer.

register_forward_post_hook(hook)

Register a forward post-hook for Layer.

register_forward_pre_hook(hook)

Register a forward pre-hook for Layer.

set_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

set_state_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

state_dict([destination, include_sublayers, ...])

Get all parameters and persistable buffers of current layer and its sub-layers.

sublayers([include_self])

Returns a list of sub layers.

to([device, dtype, blocking])

Cast the parameters and buffers of this Layer to the given device, dtype and blocking.

to_static_state_dict([destination, ...])

Get all parameters and buffers of current layer and its sub-layers.

train()

Sets this Layer and all its sublayers to training mode.

backward

register_state_dict_hook

forward(xs: Tensor, masks: Tensor)[source]

Encode input sequence.

Args:
    xs (paddle.Tensor): Input tensor (#batch, time, idim).
    masks (paddle.Tensor): Mask tensor (#batch, time).

Returns:
    paddle.Tensor: Output tensor (#batch, time, attention_dim).
    paddle.Tensor: Mask tensor (#batch, time).
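A hedged call sketch following the shape conventions above; `decoder` is assumed to be an already-constructed MLMDecoder (for example, the one ErnieSAT builds internally), and the batch/time/feature sizes are illustrative.

    import paddle

    xs = paddle.randn([2, 50, 384])             # (#batch, time, idim), illustrative sizes
    masks = paddle.ones([2, 50], dtype="bool")  # (#batch, time)

    ys, out_masks = decoder(xs, masks)
    print(ys.shape, out_masks.shape)            # (#batch, time, attention_dim), (#batch, time)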

class paddlespeech.t2s.models.ernie_sat.ernie_sat.MLMDualMaksing(odim: int, encoder: Layer, decoder: Optional[Layer], postnet_layers: int = 0, postnet_chans: int = 0, postnet_filts: int = 0, text_masking: bool = False)[source]

Bases: MLM

Methods

__call__(*inputs, **kwargs)

Call self as a function.

add_parameter(name, parameter)

Adds a Parameter instance.

add_sublayer(name, sublayer)

Adds a sub Layer instance.

apply(fn)

Applies fn recursively to every sublayer (as returned by .sublayers()) as well as self.

buffers([include_sublayers])

Returns a list of all buffers from current layer and its sub-layers.

children()

Returns an iterator over immediate children layers.

clear_gradients()

Clear the gradients of all parameters for this layer.

create_parameter(shape[, attr, dtype, ...])

Create parameters for this layer.

create_tensor([name, persistable, dtype])

Create Tensor for this layer.

create_variable([name, persistable, dtype])

Create Tensor for this layer.

eval()

Sets this Layer and all its sublayers to evaluation mode.

extra_repr()

Extra representation of this layer, you can have custom implementation of your own layer.

forward(speech, text, masked_pos, ...)

Defines the computation performed at every call.

full_name()

Full name for this layer, composed by name_scope + "/" + MyLayer.__class__.__name__

inference(speech, text, masked_pos, ...[, ...])

Args:

load_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

named_buffers([prefix, include_sublayers])

Returns an iterator over all buffers in the Layer, yielding tuple of name and Tensor.

named_children()

Returns an iterator over immediate children layers, yielding both the name of the layer as well as the layer itself.

named_parameters([prefix, include_sublayers])

Returns an iterator over all parameters in the Layer, yielding tuple of name and parameter.

named_sublayers([prefix, include_self, ...])

Returns an iterator over all sublayers in the Layer, yielding tuple of name and sublayer.

parameters([include_sublayers])

Returns a list of all Parameters from current layer and its sub-layers.

register_buffer(name, tensor[, persistable])

Registers a tensor as buffer into the layer.

register_forward_post_hook(hook)

Register a forward post-hook for Layer.

register_forward_pre_hook(hook)

Register a forward pre-hook for Layer.

set_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

set_state_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

state_dict([destination, include_sublayers, ...])

Get all parameters and persistable buffers of current layer and its sub-layers.

sublayers([include_self])

Returns a list of sub layers.

to([device, dtype, blocking])

Cast the parameters and buffers of this Layer to the given device, dtype and blocking.

to_static_state_dict([destination, ...])

Get all parameters and buffers of current layer and its sub-layers.

train()

Sets this Layer and all its sublayers to training mode.

backward

register_state_dict_hook

forward(speech: Tensor, text: Tensor, masked_pos: Tensor, speech_mask: Tensor, text_mask: Tensor, speech_seg_pos: Tensor, text_seg_pos: Tensor)[source]

Defines the computation performed at every call. Should be overridden by all subclasses.

Parameters:

*inputs (tuple): unpacked tuple arguments.
**kwargs (dict): unpacked dict arguments.

class paddlespeech.t2s.models.ernie_sat.ernie_sat.MLMEncAsDecoder(odim: int, encoder: Layer, decoder: Optional[Layer], postnet_layers: int = 0, postnet_chans: int = 0, postnet_filts: int = 0, text_masking: bool = False)[source]

Bases: MLM

Methods

__call__(*inputs, **kwargs)

Call self as a function.

add_parameter(name, parameter)

Adds a Parameter instance.

add_sublayer(name, sublayer)

Adds a sub Layer instance.

apply(fn)

Applies fn recursively to every sublayer (as returned by .sublayers()) as well as self.

buffers([include_sublayers])

Returns a list of all buffers from current layer and its sub-layers.

children()

Returns an iterator over immediate children layers.

clear_gradients()

Clear the gradients of all parameters for this layer.

create_parameter(shape[, attr, dtype, ...])

Create parameters for this layer.

create_tensor([name, persistable, dtype])

Create Tensor for this layer.

create_variable([name, persistable, dtype])

Create Tensor for this layer.

eval()

Sets this Layer and all its sublayers to evaluation mode.

extra_repr()

Extra representation of this layer, you can have custom implementation of your own layer.

forward(speech, text, masked_pos, ...)

Defines the computation performed at every call.

full_name()

Full name for this layer, composed by name_scope + "/" + MyLayer.__class__.__name__

inference(speech, text, masked_pos, ...[, ...])

Args:

load_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

named_buffers([prefix, include_sublayers])

Returns an iterator over all buffers in the Layer, yielding tuple of name and Tensor.

named_children()

Returns an iterator over immediate children layers, yielding both the name of the layer as well as the layer itself.

named_parameters([prefix, include_sublayers])

Returns an iterator over all parameters in the Layer, yielding tuple of name and parameter.

named_sublayers([prefix, include_self, ...])

Returns an iterator over all sublayers in the Layer, yielding tuple of name and sublayer.

parameters([include_sublayers])

Returns a list of all Parameters from current layer and its sub-layers.

register_buffer(name, tensor[, persistable])

Registers a tensor as buffer into the layer.

register_forward_post_hook(hook)

Register a forward post-hook for Layer.

register_forward_pre_hook(hook)

Register a forward pre-hook for Layer.

set_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

set_state_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

state_dict([destination, include_sublayers, ...])

Get all parameters and persistable buffers of current layer and its sub-layers.

sublayers([include_self])

Returns a list of sub layers.

to([device, dtype, blocking])

Cast the parameters and buffers of this Layer to the given device, dtype and blocking.

to_static_state_dict([destination, ...])

Get all parameters and buffers of current layer and its sub-layers.

train()

Sets this Layer and all its sublayers to training mode.

backward

register_state_dict_hook

forward(speech: Tensor, text: Tensor, masked_pos: Tensor, speech_mask: Tensor, text_mask: Tensor, speech_seg_pos: Tensor, text_seg_pos: Tensor)[source]

Defines the computation performed at every call. Should be overridden by all subclasses.

Parameters:

*inputs (tuple): unpacked tuple arguments.
**kwargs (dict): unpacked dict arguments.

class paddlespeech.t2s.models.ernie_sat.ernie_sat.MLMEncoder(idim: int, vocab_size: int = 0, pre_speech_layer: int = 0, attention_dim: int = 256, attention_heads: int = 4, linear_units: int = 2048, num_blocks: int = 6, dropout_rate: float = 0.1, positional_dropout_rate: float = 0.1, attention_dropout_rate: float = 0.0, input_layer: str = 'conv2d', normalize_before: bool = True, concat_after: bool = False, positionwise_layer_type: str = 'linear', positionwise_conv_kernel_size: int = 1, macaron_style: bool = False, pos_enc_layer_type: str = 'abs_pos', pos_enc_class=None, selfattention_layer_type: str = 'selfattn', activation_type: str = 'swish', use_cnn_module: bool = False, zero_triu: bool = False, cnn_module_kernel: int = 31, padding_idx: int = -1, stochastic_depth_rate: float = 0.0, text_masking: bool = False)[source]

Bases: Layer

Conformer encoder module.

Args:
    idim (int): Input dimension.
    attention_dim (int): Dimension of attention.
    attention_heads (int): The number of heads of multi-head attention.
    linear_units (int): The number of units of position-wise feed forward.
    num_blocks (int): The number of encoder blocks.
    dropout_rate (float): Dropout rate.
    positional_dropout_rate (float): Dropout rate after adding positional encoding.
    attention_dropout_rate (float): Dropout rate in attention.
    input_layer (Union[str, paddle.nn.Layer]): Input layer type.
    normalize_before (bool): Whether to use layer_norm before the first block.
    concat_after (bool): Whether to concatenate the attention layer's input and output.
        If True, an additional linear layer is applied, i.e. x -> x + linear(concat(x, att(x))).
        If False, no additional linear layer is applied, i.e. x -> x + att(x).
    positionwise_layer_type (str): "linear", "conv1d", or "conv1d-linear".
    positionwise_conv_kernel_size (int): Kernel size of the position-wise conv1d layer.
    macaron_style (bool): Whether to use macaron style for the position-wise layer.
    pos_enc_layer_type (str): Encoder positional encoding layer type.
    selfattention_layer_type (str): Encoder attention layer type.
    activation_type (str): Encoder activation function type.
    use_cnn_module (bool): Whether to use the convolution module.
    zero_triu (bool): Whether to zero the upper triangular part of the attention matrix.
    cnn_module_kernel (int): Kernel size of the convolution module.
    padding_idx (int): Padding index for input_layer="embed".
    stochastic_depth_rate (float): Maximum probability to skip the encoder layer.

Methods

__call__(*inputs, **kwargs)

Call self as a function.

add_parameter(name, parameter)

Adds a Parameter instance.

add_sublayer(name, sublayer)

Adds a sub Layer instance.

apply(fn)

Applies fn recursively to every sublayer (as returned by .sublayers()) as well as self.

buffers([include_sublayers])

Returns a list of all buffers from current layer and its sub-layers.

children()

Returns an iterator over immediate children layers.

clear_gradients()

Clear the gradients of all parameters for this layer.

create_parameter(shape[, attr, dtype, ...])

Create parameters for this layer.

create_tensor([name, persistable, dtype])

Create Tensor for this layer.

create_variable([name, persistable, dtype])

Create Tensor for this layer.

eval()

Sets this Layer and all its sublayers to evaluation mode.

extra_repr()

Extra representation of this layer, you can have custom implementation of your own layer.

forward(speech, text, masked_pos[, ...])

Encode input sequence.

full_name()

Full name for this layer, composed by name_scope + "/" + MyLayer.__class__.__name__

load_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

named_buffers([prefix, include_sublayers])

Returns an iterator over all buffers in the Layer, yielding tuple of name and Tensor.

named_children()

Returns an iterator over immediate children layers, yielding both the name of the layer as well as the layer itself.

named_parameters([prefix, include_sublayers])

Returns an iterator over all parameters in the Layer, yielding tuple of name and parameter.

named_sublayers([prefix, include_self, ...])

Returns an iterator over all sublayers in the Layer, yielding tuple of name and sublayer.

parameters([include_sublayers])

Returns a list of all Parameters from current layer and its sub-layers.

register_buffer(name, tensor[, persistable])

Registers a tensor as buffer into the layer.

register_forward_post_hook(hook)

Register a forward post-hook for Layer.

register_forward_pre_hook(hook)

Register a forward pre-hook for Layer.

set_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

set_state_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

state_dict([destination, include_sublayers, ...])

Get all parameters and persistable buffers of current layer and its sub-layers.

sublayers([include_self])

Returns a list of sub layers.

to([device, dtype, blocking])

Cast the parameters and buffers of this Layer to the given device, dtype and blocking.

to_static_state_dict([destination, ...])

Get all parameters and buffers of current layer and its sub-layers.

train()

Sets this Layer and all its sublayers to training mode.

backward

register_state_dict_hook

forward(speech: Tensor, text: Tensor, masked_pos: Tensor, speech_mask: Optional[Tensor] = None, text_mask: Optional[Tensor] = None, speech_seg_pos: Optional[Tensor] = None, text_seg_pos: Optional[Tensor] = None)[source]

Encode input sequence.
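A hedged construction sketch using a subset of the arguments documented above. The values are illustrative assumptions that mirror the conformer-style enc_* defaults of ErnieSAT at the top of this page; vocab_size is an assumed phone-vocabulary size, and every argument not listed keeps its documented default.

    from paddlespeech.t2s.models.ernie_sat.ernie_sat import MLMEncoder

    # Illustrative values only, not taken from a shipped config.
    encoder = MLMEncoder(
        idim=80,
        vocab_size=100,
        attention_dim=384,
        attention_heads=2,
        linear_units=1536,
        num_blocks=4,
        input_layer="sega_mlm",
        macaron_style=True,
        use_cnn_module=True,
        cnn_module_kernel=7,
        selfattention_layer_type="legacy_rel_selfattn",
        pos_enc_layer_type="legacy_rel_pos",
        activation_type="swish",
        positionwise_layer_type="conv1d",
        positionwise_conv_kernel_size=3)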

class paddlespeech.t2s.models.ernie_sat.ernie_sat.MaskInputLayer(out_features: int)[source]

Bases: Layer

Methods

__call__(*inputs, **kwargs)

Call self as a function.

add_parameter(name, parameter)

Adds a Parameter instance.

add_sublayer(name, sublayer)

Adds a sub Layer instance.

apply(fn)

Applies fn recursively to every sublayer (as returned by .sublayers()) as well as self.

buffers([include_sublayers])

Returns a list of all buffers from current layer and its sub-layers.

children()

Returns an iterator over immediate children layers.

clear_gradients()

Clear the gradients of all parameters for this layer.

create_parameter(shape[, attr, dtype, ...])

Create parameters for this layer.

create_tensor([name, persistable, dtype])

Create Tensor for this layer.

create_variable([name, persistable, dtype])

Create Tensor for this layer.

eval()

Sets this Layer and all its sublayers to evaluation mode.

extra_repr()

Extra representation of this layer, you can have custom implementation of your own layer.

forward(input[, masked_pos])

Defines the computation performed at every call.

full_name()

Full name for this layer, composed by name_scope + "/" + MyLayer.__class__.__name__

load_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

named_buffers([prefix, include_sublayers])

Returns an iterator over all buffers in the Layer, yielding tuple of name and Tensor.

named_children()

Returns an iterator over immediate children layers, yielding both the name of the layer as well as the layer itself.

named_parameters([prefix, include_sublayers])

Returns an iterator over all parameters in the Layer, yielding tuple of name and parameter.

named_sublayers([prefix, include_self, ...])

Returns an iterator over all sublayers in the Layer, yielding tuple of name and sublayer.

parameters([include_sublayers])

Returns a list of all Parameters from current layer and its sub-layers.

register_buffer(name, tensor[, persistable])

Registers a tensor as buffer into the layer.

register_forward_post_hook(hook)

Register a forward post-hook for Layer.

register_forward_pre_hook(hook)

Register a forward pre-hook for Layer.

set_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

set_state_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

state_dict([destination, include_sublayers, ...])

Get all parameters and persistable buffers of current layer and its sub-layers.

sublayers([include_self])

Returns a list of sub layers.

to([device, dtype, blocking])

Cast the parameters and buffers of this Layer to the given device, dtype and blocking.

to_static_state_dict([destination, ...])

Get all parameters and buffers of current layer and its sub-layers.

train()

Sets this Layer and all its sublayers to training mode.

backward

register_state_dict_hook

forward(input: Tensor, masked_pos: Optional[Tensor] = None) Tensor[source]

Defines the computation performed at every call. Should be overridden by all subclasses.

Parameters:

*inputs (tuple): unpacked tuple arguments.
**kwargs (dict): unpacked dict arguments.
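MaskInputLayer(out_features) accepts a feature tensor and an optional masked_pos tensor. A reasonable reading of the name is that masked frames are replaced by a learned mask embedding, but that interpretation, along with the shapes and dtype below, is an assumption inferred from the signature rather than documented behavior.

    import paddle
    from paddlespeech.t2s.models.ernie_sat.ernie_sat import MaskInputLayer

    layer = MaskInputLayer(out_features=80)

    feats = paddle.randn([1, 100, 80])                    # (B, T, out_features), illustrative
    masked_pos = paddle.zeros([1, 100], dtype="float32")  # assumed per-frame mask indicator
    masked_pos[:, 40:60] = 1.0                            # mark frames 40..59 as masked

    out = layer(feats, masked_pos)                        # output shape assumed to match `feats`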

class paddlespeech.t2s.models.ernie_sat.ernie_sat.mySequential(*layers)[source]

Bases: Sequential

Methods

__call__(*inputs, **kwargs)

Call self as a function.

add_parameter(name, parameter)

Adds a Parameter instance.

add_sublayer(name, sublayer)

Adds a sub Layer instance.

apply(fn)

Applies fn recursively to every sublayer (as returned by .sublayers()) as well as self.

buffers([include_sublayers])

Returns a list of all buffers from current layer and its sub-layers.

children()

Returns an iterator over immediate children layers.

clear_gradients()

Clear the gradients of all parameters for this layer.

create_parameter(shape[, attr, dtype, ...])

Create parameters for this layer.

create_tensor([name, persistable, dtype])

Create Tensor for this layer.

create_variable([name, persistable, dtype])

Create Tensor for this layer.

eval()

Sets this Layer and all its sublayers to evaluation mode.

extra_repr()

Extra representation of this layer, you can have custom implementation of your own layer.

forward(*inputs)

Defines the computation performed at every call.

full_name()

Full name for this layer, composed by name_scope + "/" + MyLayer.__class__.__name__

load_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

named_buffers([prefix, include_sublayers])

Returns an iterator over all buffers in the Layer, yielding tuple of name and Tensor.

named_children()

Returns an iterator over immediate children layers, yielding both the name of the layer as well as the layer itself.

named_parameters([prefix, include_sublayers])

Returns an iterator over all parameters in the Layer, yielding tuple of name and parameter.

named_sublayers([prefix, include_self, ...])

Returns an iterator over all sublayers in the Layer, yielding tuple of name and sublayer.

parameters([include_sublayers])

Returns a list of all Parameters from current layer and its sub-layers.

register_buffer(name, tensor[, persistable])

Registers a tensor as buffer into the layer.

register_forward_post_hook(hook)

Register a forward post-hook for Layer.

register_forward_pre_hook(hook)

Register a forward pre-hook for Layer.

set_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

set_state_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

state_dict([destination, include_sublayers, ...])

Get all parameters and persistable buffers of current layer and its sub-layers.

sublayers([include_self])

Returns a list of sub layers.

to([device, dtype, blocking])

Cast the parameters and buffers of this Layer to the given device, dtype and blocking.

to_static_state_dict([destination, ...])

Get all parameters and buffers of current layer and its sub-layers.

train()

Sets this Layer and all its sublayers to training mode.

backward

register_state_dict_hook

forward(*inputs)[source]

Defines the computation performed at every call. Should be overridden by all subclasses.

Parameters:

*inputs (tuple): unpacked tuple arguments.
**kwargs (dict): unpacked dict arguments.
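Judging from its forward(*inputs) signature, mySequential appears to extend paddle.nn.Sequential so that sublayers which take and return multiple tensors can be chained (the stock Sequential threads only a single tensor through). The sketch below is a generic re-implementation of that pattern for explanation; it is not the library's code.

    import paddle
    import paddle.nn as nn

    class MultiInputSequential(nn.Sequential):
        """Illustrative Sequential variant that threads tuples through its sublayers."""

        def forward(self, *inputs):
            for layer in self._sub_layers.values():
                # Unpack tuple outputs so multi-input layers can be chained.
                inputs = layer(*inputs) if isinstance(inputs, tuple) else layer(inputs)
            return inputs

    seq = MultiInputSequential(nn.Linear(8, 8), nn.ReLU())
    y = seq(paddle.randn([2, 8]))   # single-tensor inputs still work as with nn.Sequential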