paddlespeech.t2s.modules.transformer.decoder module

Decoder definition.

class paddlespeech.t2s.modules.transformer.decoder.Decoder(odim, selfattention_layer_type='selfattn', attention_dim=256, attention_heads=4, conv_wshare=4, conv_kernel_length=11, conv_usebias=False, linear_units=2048, num_blocks=6, dropout_rate=0.1, positional_dropout_rate=0.1, self_attention_dropout_rate=0.0, src_attention_dropout_rate=0.0, input_layer='embed', use_output_layer=True, pos_enc_class=<class 'paddlespeech.t2s.modules.transformer.embedding.PositionalEncoding'>, normalize_before=True, concat_after=False)[source]

Bases: Layer

Transformer decoder module.

Args:
odim (int):

Output dimension.

self_attention_layer_type (str):

Self-attention layer type.

attention_dim (int):

Dimension of attention.

attention_heads (int):

The number of heads in multi-head attention.

conv_wshare (int):

The number of convolution kernels. Only used when self_attention_layer_type is "lightconv*" or "dynamicconv*".

conv_kernel_length (Union[int, str]):

Kernel size of the convolution, as an int or an underscore-separated per-layer string (e.g. "71_71_71_71_71_71"). Only used when self_attention_layer_type is "lightconv*" or "dynamicconv*".

conv_usebias (bool):

Whether to use a bias in the convolution. Only used when self_attention_layer_type is "lightconv*" or "dynamicconv*".

linear_units (int):

The number of units in the position-wise feed-forward layer.

num_blocks (int):

The number of decoder blocks.

dropout_rate (float):

Dropout rate.

positional_dropout_rate (float):

Dropout rate after adding positional encoding.

self_attention_dropout_rate (float):

Dropout rate in self-attention.

src_attention_dropout_rate (float):

Dropout rate in source-attention.

input_layer (Union[str, nn.Layer]):

Input layer type.

use_output_layer (bool):

Whether to use output layer.

pos_enc_class (nn.Layer):

Positional encoding module class: `PositionalEncoding` or `ScaledPositionalEncoding`.

normalize_before (bool):

Whether to use layer_norm before the first block.

concat_after (bool):

Whether to concatenate the attention layer's input and output. If True, an additional linear projection is applied: x -> x + linear(concat(x, att(x))). If False, no additional linear is applied: x -> x + att(x).
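The two residual variants controlled by concat_after can be sketched with plain-Python stand-ins (the att and linear_concat functions below are hypothetical placeholders, not the real attention or projection sublayers):

```python
def att(x):
    # Hypothetical "attention" stand-in: just scale the input.
    return [0.5 * v for v in x]

def linear_concat(x, a):
    # Hypothetical linear projection over concat(x, att(x)):
    # a fixed weighted sum of the two halves.
    return [0.5 * xv + 0.25 * av for xv, av in zip(x, a)]

def block(x, concat_after):
    a = att(x)
    if concat_after:
        # x -> x + linear(concat(x, att(x)))
        return [xv + lv for xv, lv in zip(x, linear_concat(x, a))]
    # x -> x + att(x)
    return [xv + av for xv, av in zip(x, a)]

print(block([1.0, 2.0], concat_after=False))  # [1.5, 3.0]
print(block([1.0, 2.0], concat_after=True))   # [1.625, 3.25]
```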

Methods

__call__(*inputs, **kwargs)

Call self as a function.

add_parameter(name, parameter)

Adds a Parameter instance.

add_sublayer(name, sublayer)

Adds a sub Layer instance.

apply(fn)

Applies fn recursively to every sublayer (as returned by .sublayers()) as well as self.

batch_score(ys, states, xs)

Score new token batch (required).

buffers([include_sublayers])

Returns a list of all buffers from current layer and its sub-layers.

children()

Returns an iterator over immediate children layers.

clear_gradients()

Clear the gradients of all parameters for this layer.

create_parameter(shape[, attr, dtype, ...])

Create parameters for this layer.

create_tensor([name, persistable, dtype])

Create Tensor for this layer.

create_variable([name, persistable, dtype])

Create Tensor for this layer.

eval()

Sets this Layer and all its sublayers to evaluation mode.

extra_repr()

Extra representation of this layer, you can have custom implementation of your own layer.

forward(tgt, tgt_mask, memory, memory_mask)

Forward decoder.

forward_one_step(tgt, tgt_mask, memory[, cache])

Forward one step.

full_name()

Full name for this layer, composed by name_scope + "/" + MyLayer.__class__.__name__

load_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

named_buffers([prefix, include_sublayers])

Returns an iterator over all buffers in the Layer, yielding tuple of name and Tensor.

named_children()

Returns an iterator over immediate children layers, yielding both the name of the layer as well as the layer itself.

named_parameters([prefix, include_sublayers])

Returns an iterator over all parameters in the Layer, yielding tuple of name and parameter.

named_sublayers([prefix, include_self, ...])

Returns an iterator over all sublayers in the Layer, yielding tuple of name and sublayer.

parameters([include_sublayers])

Returns a list of all Parameters from current layer and its sub-layers.

register_buffer(name, tensor[, persistable])

Registers a tensor as buffer into the layer.

register_forward_post_hook(hook)

Register a forward post-hook for Layer.

register_forward_pre_hook(hook)

Register a forward pre-hook for Layer.

score(ys, state, x)

Score.

set_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

set_state_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

state_dict([destination, include_sublayers, ...])

Get all parameters and persistable buffers of current layer and its sub-layers.

sublayers([include_self])

Returns a list of sub layers.

to([device, dtype, blocking])

Cast the parameters and buffers of Layer by the given device, dtype and blocking.

to_static_state_dict([destination, ...])

Get all parameters and buffers of current layer and its sub-layers.

train()

Sets this Layer and all its sublayers to training mode.

backward

register_state_dict_hook

batch_score(ys: Tensor, states: List[Any], xs: Tensor) Tuple[Tensor, List[Any]][source]

Score new token batch (required).

Args:
ys(Tensor):

paddle.int64 prefix tokens (n_batch, ylen).

states(List[Any]):

Scorer states for prefix tokens.

xs(Tensor):

The encoder feature that generates ys (n_batch, xlen, n_feat).

Returns:
tuple[Tensor, List[Any]]:

Tuple of batchified scores for the next token, with shape (n_batch, n_vocab), and the next state list for ys.
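A shape-level sketch of the batch-scorer contract, with a toy per-item scorer standing in for the real decoder (toy_score and toy_batch_score are illustrative names, not PaddleSpeech APIs):

```python
import math

def toy_score(ys, state, x, n_vocab=4):
    # Hypothetical per-item scorer: uniform log-probabilities over the
    # vocabulary, and a state recording how many prefix tokens were seen.
    logp = [math.log(1.0 / n_vocab)] * n_vocab
    return logp, len(ys)

def toy_batch_score(ys_batch, states, xs_batch):
    # Batch version: (n_batch, ylen) prefixes -> (n_batch, n_vocab)
    # scores plus one updated state per batch item.
    scores, new_states = [], []
    for ys, state, x in zip(ys_batch, states, xs_batch):
        s, st = toy_score(ys, state, x)
        scores.append(s)
        new_states.append(st)
    return scores, new_states

scores, states = toy_batch_score([[1, 2], [3, 4]], [None, None], [None, None])
print(len(scores), len(scores[0]))  # 2 4
```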

forward(tgt, tgt_mask, memory, memory_mask)[source]

Forward decoder.

Args:

tgt(Tensor):

Input token ids, int64 (#batch, maxlen_out) if input_layer == "embed"; otherwise, an input tensor (#batch, maxlen_out, odim).

tgt_mask(Tensor):

Input token mask (#batch, maxlen_out).

memory(Tensor):

Encoded memory, float32 (#batch, maxlen_in, feat).

memory_mask(Tensor):

Encoded memory mask (#batch, maxlen_in).

Returns:
Tensor:

Decoded token scores before softmax (#batch, maxlen_out, odim) if use_output_layer is True; otherwise, the final block's output (#batch, maxlen_out, attention_dim).

Tensor:

Score mask before softmax (#batch, maxlen_out).
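As a shape illustration, the target-side mask a Transformer decoder effectively applies combines the padding mask above with a causal (subsequent-position) constraint. A minimal pure-Python sketch, independent of Paddle (function names are illustrative):

```python
def subsequent_mask(size):
    # Lower-triangular boolean mask: position i may attend to j <= i.
    return [[j <= i for j in range(size)] for i in range(size)]

def combined_mask(pad_mask, size):
    # Elementwise AND of the per-position padding mask and the causal
    # mask, mirroring how decoder self-attention is restricted.
    causal = subsequent_mask(size)
    return [[causal[i][j] and pad_mask[j] for j in range(size)]
            for i in range(size)]

pad = [True, True, False]  # third target position is padding
print(combined_mask(pad, 3))
```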

forward_one_step(tgt, tgt_mask, memory, cache=None)[source]

Forward one step.

Args:
tgt(Tensor):

Input token ids, int64 (#batch, maxlen_out).

tgt_mask(Tensor):

Input token mask (#batch, maxlen_out).

memory(Tensor):

Encoded memory, float32 (#batch, maxlen_in, feat).

cache (List[Tensor], optional):

List of cached tensors. (Default value = None)

Returns:
Tensor:

Output tensor (batch, maxlen_out, odim).

List[Tensor]:

List of cache tensors of each decoder layer.
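A sketch of how forward_one_step is typically driven during incremental decoding: the cache returned at step t is fed back at step t+1 so each layer reuses past computation. The toy stand-in below only models the cache bookkeeping (a real per-layer cache holds hidden states of shape (#batch, len, attention_dim)):

```python
def toy_forward_one_step(tgt, cache, num_blocks=2):
    # Hypothetical stand-in: each "layer cache" just accumulates the
    # tokens seen so far; only the newest position is processed.
    if cache is None:
        cache = [[] for _ in range(num_blocks)]
    new_cache = [layer + [tgt[-1]] for layer in cache]
    y = tgt[-1]  # placeholder "output" for the newest position
    return y, new_cache

cache = None
tgt = []
for token in [7, 8, 9]:
    tgt.append(token)
    y, cache = toy_forward_one_step(tgt, cache)

print(len(cache), len(cache[0]))  # 2 3
```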

score(ys, state, x)[source]

Score.