paddlespeech.s2t.modules.decoder_layer module

Decoder self-attention layer definition.

class paddlespeech.s2t.modules.decoder_layer.DecoderLayer(size: int, self_attn: Layer, src_attn: Layer, feed_forward: Layer, dropout_rate: float, normalize_before: bool = True, concat_after: bool = False)[source]

Bases: Layer

Single decoder layer module.

Args:
    size (int): Input dimension.
    self_attn (nn.Layer): Self-attention module instance.
        `MultiHeadedAttention` instance can be used as the argument.
    src_attn (nn.Layer): Source-attention (encoder-decoder attention) module instance.
        `MultiHeadedAttention` instance can be used as the argument.
    feed_forward (nn.Layer): Feed-forward module instance.
        `PositionwiseFeedForward` instance can be used as the argument.
    dropout_rate (float): Dropout rate.
    normalize_before (bool):
        True: use layer_norm before each sub-block.
        False: use layer_norm after each sub-block.
    concat_after (bool): Whether to concat attention layer's input and output.
        True: x -> x + linear(concat(x, att(x)))
        False: x -> x + att(x)
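The `normalize_before` and `concat_after` flags control the wiring of each residual sub-block. A minimal plain-Python sketch of that wiring, with toy `att`, `norm`, and `linear` stand-ins for the real Paddle sublayers (this is an illustration of the data flow, not the actual implementation):

```python
def subblock(x, att, norm, linear, normalize_before=True, concat_after=False):
    """Toy sketch of one residual sub-block of DecoderLayer.

    x is a list of floats standing in for a feature vector; att, norm, and
    linear are stand-ins for the attention, layer-norm, and projection layers.
    """
    residual = x
    if normalize_before:
        x = norm(x)                      # pre-norm: normalize the sub-block input
    if concat_after:
        # x -> x + linear(concat(x, att(x)))
        concat = x + att(x)              # list concatenation stands in for tensor concat
        x = [r + c for r, c in zip(residual, linear(concat))]
    else:
        # x -> x + att(x)
        x = [r + a for r, a in zip(residual, att(x))]
    if not normalize_before:
        x = norm(x)                      # post-norm: normalize the sub-block output
    return x


# Identity-style stand-ins make the residual path visible.
att = lambda v: [0.0 for _ in v]         # attention contributes nothing
norm = lambda v: v                       # identity "layer norm"
linear = lambda v: v[: len(v) // 2]      # projects 2*size back down to size

print(subblock([1.0, 2.0], att, norm, linear))                      # [1.0, 2.0]
print(subblock([1.0, 2.0], att, norm, linear, concat_after=True))   # [2.0, 4.0]
```

With `concat_after=True` the attention input is concatenated to the attention output and projected back to `size` before the residual add, which is why `linear` here maps a doubled vector back to the original length.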

Methods

__call__(*inputs, **kwargs)

Call self as a function.

add_parameter(name, parameter)

Adds a Parameter instance.

add_sublayer(name, sublayer)

Adds a sub Layer instance.

apply(fn)

Applies fn recursively to every sublayer (as returned by .sublayers()) as well as self.

buffers([include_sublayers])

Returns a list of all buffers from current layer and its sub-layers.

children()

Returns an iterator over immediate children layers.

clear_gradients()

Clear the gradients of all parameters for this layer.

create_parameter(shape[, attr, dtype, ...])

Create parameters for this layer.

create_tensor([name, persistable, dtype])

Create Tensor for this layer.

create_variable([name, persistable, dtype])

Create Tensor for this layer.

eval()

Sets this Layer and all its sublayers to evaluation mode.

extra_repr()

Extra representation of this layer, you can have custom implementation of your own layer.

forward(tgt, tgt_mask, memory, memory_mask)

Compute decoded features.

full_name()

Full name for this layer, composed by name_scope + "/" + MyLayer.__class__.__name__

load_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

named_buffers([prefix, include_sublayers])

Returns an iterator over all buffers in the Layer, yielding tuple of name and Tensor.

named_children()

Returns an iterator over immediate children layers, yielding both the name of the layer as well as the layer itself.

named_parameters([prefix, include_sublayers])

Returns an iterator over all parameters in the Layer, yielding tuple of name and parameter.

named_sublayers([prefix, include_self, ...])

Returns an iterator over all sublayers in the Layer, yielding tuple of name and sublayer.

parameters([include_sublayers])

Returns a list of all Parameters from current layer and its sub-layers.

register_buffer(name, tensor[, persistable])

Registers a tensor as buffer into the layer.

register_forward_post_hook(hook)

Register a forward post-hook for Layer.

register_forward_pre_hook(hook)

Register a forward pre-hook for Layer.

set_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

set_state_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

state_dict([destination, include_sublayers, ...])

Get all parameters and persistable buffers of current layer and its sub-layers.

sublayers([include_self])

Returns a list of sub layers.

to([device, dtype, blocking])

Cast the parameters and buffers of Layer by the given device, dtype and blocking.

to_static_state_dict([destination, ...])

Get all parameters and buffers of current layer and its sub-layers.

train()

Sets this Layer and all its sublayers to training mode.

backward

register_state_dict_hook

forward(tgt: Tensor, tgt_mask: Tensor, memory: Tensor, memory_mask: Tensor, cache: Optional[Tensor] = None) → Tuple[Tensor, Tensor, Tensor, Tensor][source]

Compute decoded features.

Args:
    tgt (paddle.Tensor): Input tensor (#batch, maxlen_out, size).
    tgt_mask (paddle.Tensor): Mask for input tensor (#batch, maxlen_out).
    memory (paddle.Tensor): Encoded memory (#batch, maxlen_in, size).
    memory_mask (paddle.Tensor): Encoded memory mask (#batch, maxlen_in).
    cache (paddle.Tensor): Cached tensors (#batch, maxlen_out - 1, size).

Returns:
    paddle.Tensor: Output tensor (#batch, maxlen_out, size).
    paddle.Tensor: Mask for output tensor (#batch, maxlen_out).
    paddle.Tensor: Encoded memory (#batch, maxlen_in, size).
    paddle.Tensor: Encoded memory mask (#batch, maxlen_in).
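The `cache` argument holds the layer's outputs for all previously decoded positions, so during incremental decoding only the newest position needs fresh computation. A hedged plain-Python sketch of that bookkeeping, using lists of vectors as stand-ins for `paddle.Tensor` (the slicing mirrors common Transformer decoder-layer implementations of this style, not a verbatim copy of this module):

```python
def forward_with_cache(tgt, cache, compute_step):
    """Sketch of DecoderLayer-style cache handling for one batch item.

    tgt          : list of maxlen_out input feature vectors.
    cache        : list of maxlen_out - 1 previously computed output
                   vectors, or None on the first step.
    compute_step : stand-in that maps query vectors to output vectors.
    """
    if cache is None:
        # No cache: process the whole target sequence.
        out = compute_step(tgt)
    else:
        assert len(cache) == len(tgt) - 1, "cache holds all but the last step"
        # With a cache, only the last position needs new computation.
        new = compute_step(tgt[-1:])
        out = cache + new                # concat cached outputs with the new step
    return out


double = lambda vs: [[2 * x for x in v] for v in vs]   # toy per-step computation
tgt = [[1.0], [2.0], [3.0]]
full = forward_with_cache(tgt, None, double)           # full pass over 3 steps
incr = forward_with_cache(tgt, full[:-1], double)      # recomputes only step 3
print(full == incr)                                    # True
```

This is why the cache shape is documented as `(#batch, maxlen_out - 1, size)`: it covers every output position except the one currently being decoded.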