paddlespeech.s2t.modules.decoder module

Decoder definition.

class paddlespeech.s2t.modules.decoder.TransformerDecoder(vocab_size: int, encoder_output_size: int, attention_heads: int = 4, linear_units: int = 2048, num_blocks: int = 6, dropout_rate: float = 0.1, positional_dropout_rate: float = 0.1, self_attention_dropout_rate: float = 0.0, src_attention_dropout_rate: float = 0.0, input_layer: str = 'embed', use_output_layer: bool = True, normalize_before: bool = True, concat_after: bool = False, max_len: int = 5000)[source]

Bases: BatchScorerInterface, Layer

Base class of Transformer decoder module.

Args:

vocab_size: output dim
encoder_output_size: dimension of attention
attention_heads: the number of heads of multi-head attention
linear_units: the number of hidden units in the position-wise feed-forward layer
num_blocks: the number of decoder blocks
dropout_rate: dropout rate
positional_dropout_rate: dropout rate for positional encoding
self_attention_dropout_rate: dropout rate for self-attention
src_attention_dropout_rate: dropout rate for source attention
input_layer: input layer type, embed
use_output_layer: whether to use output layer
pos_enc_class: PositionalEncoding module
normalize_before:
    True: use layer_norm before each sub-block of a layer.
    False: use layer_norm after each sub-block of a layer.

concat_after: whether to concat attention layer's input and output
    True: x -> x + linear(concat(x, att(x)))
    False: x -> x + att(x)
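The two residual-combination modes above can be sketched in plain NumPy (a stand-in, not the actual Paddle layer): `att` here is a toy placeholder for multi-head attention, and `W` for the learned projection that, when concat_after is True, maps the concatenated 2*d-dimensional vector back to d dimensions.

```python
import numpy as np

d = 4
rng = np.random.default_rng(0)
x = rng.standard_normal((1, d))      # (batch, d) input to the sub-block
W = rng.standard_normal((2 * d, d))  # projection used when concat_after=True

def att(v):
    # toy stand-in for multi-head attention output
    return v * 0.1

# concat_after=False: x -> x + att(x)
out_plain = x + att(x)

# concat_after=True: x -> x + linear(concat(x, att(x)))
out_concat = x + np.concatenate([x, att(x)], axis=-1) @ W

# both modes preserve the feature dimension
assert out_plain.shape == (1, d)
assert out_concat.shape == (1, d)
```

Either way the residual connection keeps the output shape equal to the input shape; concat_after only changes how the attention output is mixed in.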

Methods

__call__(*inputs, **kwargs)

Call self as a function.

add_parameter(name, parameter)

Adds a Parameter instance.

add_sublayer(name, sublayer)

Adds a sub Layer instance.

apply(fn)

Applies fn recursively to every sublayer (as returned by .sublayers()) as well as self.

batch_init_state(x)

Get an initial state for decoding (optional).

batch_score(ys, states, xs)

Score new token batch (required).

buffers([include_sublayers])

Returns a list of all buffers from current layer and its sub-layers.

children()

Returns an iterator over immediate children layers.

clear_gradients()

Clear the gradients of all parameters for this layer.

create_parameter(shape[, attr, dtype, ...])

Create parameters for this layer.

create_tensor([name, persistable, dtype])

Create Tensor for this layer.

create_variable([name, persistable, dtype])

Create Tensor for this layer.

eval()

Sets this Layer and all its sublayers to evaluation mode.

extra_repr()

Extra representation of this layer, you can have custom implementation of your own layer.

final_score(state)

Score eos (optional).

forward(memory, memory_mask, ys_in_pad, ...)

Forward decoder.

forward_one_step(memory, memory_mask, tgt, ...)

Forward one step.

full_name()

Full name for this layer, composed by name_scope + "/" + MyLayer.__class__.__name__

init_state(x)

Get an initial state for decoding (optional).

load_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

named_buffers([prefix, include_sublayers])

Returns an iterator over all buffers in the Layer, yielding tuple of name and Tensor.

named_children()

Returns an iterator over immediate children layers, yielding both the name of the layer as well as the layer itself.

named_parameters([prefix, include_sublayers])

Returns an iterator over all parameters in the Layer, yielding tuple of name and parameter.

named_sublayers([prefix, include_self, ...])

Returns an iterator over all sublayers in the Layer, yielding tuple of name and sublayer.

parameters([include_sublayers])

Returns a list of all Parameters from current layer and its sub-layers.

register_buffer(name, tensor[, persistable])

Registers a tensor as buffer into the layer.

register_forward_post_hook(hook)

Register a forward post-hook for Layer.

register_forward_pre_hook(hook)

Register a forward pre-hook for Layer.

score(ys, state, x)

Score.

select_state(state, i[, new_id])

Select state with relative ids in the main beam search.

set_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

set_state_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

state_dict([destination, include_sublayers, ...])

Get all parameters and persistable buffers of current layer and its sub-layers.

sublayers([include_self])

Returns a list of sub layers.

to([device, dtype, blocking])

Cast the parameters and buffers of Layer by the given device, dtype and blocking.

to_static_state_dict([destination, ...])

Get all parameters and buffers of current layer and its sub-layers.

train()

Sets this Layer and all its sublayers to training mode.

backward

register_state_dict_hook

batch_score(ys: Tensor, states: List[Any], xs: Tensor) Tuple[Tensor, List[Any]][source]

Score new token batch (required).

Args:
    ys (paddle.Tensor): paddle.int64 prefix tokens (n_batch, ylen).
    states (List[Any]): Scorer states for prefix tokens.
    xs (paddle.Tensor): The encoder feature that generates ys (n_batch, xlen, n_feat).

Returns:
    tuple[paddle.Tensor, List[Any]]: Tuple of batchified scores for the next token,
    with shape (n_batch, n_vocab), and the next state list for ys.
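The shape contract above can be illustrated with a dummy scorer (NumPy stand-in, not the real decoder): batch_score receives prefix tokens ys of shape (n_batch, ylen) plus the encoder features xs of shape (n_batch, xlen, n_feat), and must return per-hypothesis scores over the vocabulary, (n_batch, n_vocab), together with the next state list. `toy_batch_score` is a hypothetical placeholder.

```python
import numpy as np

n_batch, ylen, xlen, n_feat, n_vocab = 2, 3, 7, 8, 10

def toy_batch_score(ys, states, xs):
    # dummy scorer: uniform log-probabilities, states passed through unchanged
    scores = np.full((ys.shape[0], n_vocab), -np.log(n_vocab))
    return scores, states

ys = np.zeros((n_batch, ylen), dtype=np.int64)        # prefix tokens
xs = np.zeros((n_batch, xlen, n_feat), dtype=np.float32)  # encoder features
scores, states = toy_batch_score(ys, [None] * n_batch, xs)

assert scores.shape == (n_batch, n_vocab)
assert len(states) == n_batch
```

Beam search calls this once per decoding step with all active hypotheses batched together, which is why the scorer must keep one state entry per hypothesis.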

forward(memory: paddle.Tensor, memory_mask: paddle.Tensor, ys_in_pad: paddle.Tensor, ys_in_lens: paddle.Tensor, r_ys_in_pad: paddle.Tensor = Tensor(shape=[0], dtype=float32, place=Place(cpu), stop_gradient=True, []), reverse_weight: float = 0.0) Tuple[Tensor, Tensor][source]

Forward decoder.

Args:
    memory: encoded memory, float32 (batch, maxlen_in, feat)
    memory_mask: encoder memory mask, (batch, 1, maxlen_in)
    ys_in_pad: padded input token ids, int64 (batch, maxlen_out)
    ys_in_lens: input lengths of this batch (batch,)
    r_ys_in_pad: not used in the transformer decoder; kept to unify the API with the bidirectional decoder
    reverse_weight: not used in the transformer decoder; kept to unify the API with the bidirectional decoder

Returns:
    (tuple): tuple containing:
        x: decoded token scores before softmax (batch, maxlen_out, vocab_size), if use_output_layer is True
        olens: (batch,)
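A shape walk-through of the documented contract, using NumPy placeholder arrays rather than real Paddle tensors (all sizes here are illustrative, not defaults):

```python
import numpy as np

batch, maxlen_in, feat = 2, 50, 256
maxlen_out, vocab_size = 12, 5000

# inputs as documented above
memory = np.zeros((batch, maxlen_in, feat), dtype=np.float32)   # encoder output
memory_mask = np.ones((batch, 1, maxlen_in), dtype=bool)        # encoder mask
ys_in_pad = np.zeros((batch, maxlen_out), dtype=np.int64)       # padded token ids
ys_in_lens = np.array([12, 9])                                  # per-utterance lengths

# outputs a real decoder would return (placeholders here):
#   x: token scores before softmax, olens: output lengths
x = np.zeros((batch, maxlen_out, vocab_size), dtype=np.float32)
olens = ys_in_lens.copy()

assert x.shape == (batch, maxlen_out, vocab_size)
assert olens.shape == (batch,)
```

Note that x only has the vocab_size trailing dimension when use_output_layer is True; otherwise the final projection is skipped and the decoder's hidden size is returned instead.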

forward_one_step(memory: Tensor, memory_mask: Tensor, tgt: Tensor, tgt_mask: Tensor, cache: Optional[List[Tensor]] = None) Tuple[Tensor, List[Tensor]][source]
Forward one step.

This is only used for decoding.

Args:
    memory: encoded memory, float32 (batch, maxlen_in, feat)
    memory_mask: encoded memory mask, (batch, 1, maxlen_in)
    tgt: input token ids, int64 (batch, maxlen_out)
    tgt_mask: input token mask, (batch, maxlen_out, maxlen_out), dtype=paddle.bool
    cache: cached output list of (batch, max_time_out-1, size)

Returns:
    y, cache: NN output value and cache per self.decoders.
    y.shape is (batch, token)
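The tgt_mask passed at each decoding step is a causal (subsequent) mask: position i may only attend to positions j <= i. A minimal NumPy sketch of such a mask (`subsequent_mask` is a hypothetical helper name, not necessarily the one used in PaddleSpeech):

```python
import numpy as np

def subsequent_mask(size):
    # (1, size, size) boolean mask, True where attention is allowed;
    # row i allows columns 0..i, enforcing left-to-right decoding
    return np.tril(np.ones((1, size, size), dtype=bool))

m = subsequent_mask(3)
assert m[0, 0].tolist() == [True, False, False]  # first token sees only itself
assert m[0, 2].tolist() == [True, True, True]    # last token sees all prefixes
```

During incremental decoding the mask grows by one row and column per step, while the cache lets each decoder layer reuse the outputs it already computed for earlier positions.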

score(ys, state, x)[source]

Score.

Args:
    ys: (ylen,)
    x: (xlen, n_feat)