paddlespeech.s2t.modules.encoder_layer module

Encoder self-attention layer definition.

class paddlespeech.s2t.modules.encoder_layer.ConformerEncoderLayer(size: int, self_attn: Layer, feed_forward: Optional[Layer] = None, feed_forward_macaron: Optional[Layer] = None, conv_module: Optional[Layer] = None, dropout_rate: float = 0.1, normalize_before: bool = True, concat_after: bool = False)[source]

Bases: Layer

Encoder layer module.

Methods

__call__(*inputs, **kwargs)

Call self as a function.

add_parameter(name, parameter)

Adds a Parameter instance.

add_sublayer(name, sublayer)

Adds a sub Layer instance.

apply(fn)

Applies fn recursively to every sublayer (as returned by .sublayers()) as well as self.

buffers([include_sublayers])

Returns a list of all buffers from current layer and its sub-layers.

children()

Returns an iterator over immediate children layers.

clear_gradients()

Clear the gradients of all parameters for this layer.

create_parameter(shape[, attr, dtype, ...])

Create parameters for this layer.

create_tensor([name, persistable, dtype])

Create Tensor for this layer.

create_variable([name, persistable, dtype])

Create Tensor for this layer.

eval()

Sets this Layer and all its sublayers to evaluation mode.

extra_repr()

Extra representation of this layer; you can provide a custom implementation in your own layer.

forward(x, mask, pos_emb[, mask_pad, ...])

Compute encoded features.

full_name()

Full name for this layer, composed of name_scope + "/" + MyLayer.__class__.__name__.

load_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

named_buffers([prefix, include_sublayers])

Returns an iterator over all buffers in the Layer, yielding tuple of name and Tensor.

named_children()

Returns an iterator over immediate children layers, yielding both the name of the layer as well as the layer itself.

named_parameters([prefix, include_sublayers])

Returns an iterator over all parameters in the Layer, yielding tuple of name and parameter.

named_sublayers([prefix, include_self, ...])

Returns an iterator over all sublayers in the Layer, yielding tuple of name and sublayer.

parameters([include_sublayers])

Returns a list of all Parameters from current layer and its sub-layers.

register_buffer(name, tensor[, persistable])

Registers a tensor as a buffer of the layer.

register_forward_post_hook(hook)

Register a forward post-hook for Layer.

register_forward_pre_hook(hook)

Register a forward pre-hook for Layer.

set_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

set_state_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

state_dict([destination, include_sublayers, ...])

Get all parameters and persistable buffers of current layer and its sub-layers.

sublayers([include_self])

Returns a list of sub layers.

to([device, dtype, blocking])

Cast the parameters and buffers of the Layer to the given device, dtype, and blocking setting.

to_static_state_dict([destination, ...])

Get all parameters and buffers of current layer and its sub-layers.

train()

Sets this Layer and all its sublayers to training mode.

backward

register_state_dict_hook

forward(x: paddle.Tensor, mask: paddle.Tensor, pos_emb: paddle.Tensor, mask_pad: paddle.Tensor = Tensor(shape=[0, 0, 0], dtype=bool), att_cache: paddle.Tensor = Tensor(shape=[0, 0, 0, 0], dtype=float32), cnn_cache: paddle.Tensor = Tensor(shape=[0, 0, 0, 0], dtype=float32)) → Tuple[Tensor, Tensor, Tensor, Tensor][source]

Compute encoded features.

Args:
    x (paddle.Tensor): Input tensor (#batch, time, size).
    mask (paddle.Tensor): Mask tensor for the input (#batch, time, time).
        Shape (0, 0, 0) means a fake mask.
    pos_emb (paddle.Tensor): Positional encoding; must not be None for
        ConformerEncoderLayer.
    mask_pad (paddle.Tensor): Batch padding mask used for the conv module,
        (#batch, 1, time). Shape (0, 0, 0) means a fake mask.
    att_cache (paddle.Tensor): Cache tensor of the KEY & VALUE,
        (#batch=1, head, cache_t1, d_k * 2), where head * d_k == size.
    cnn_cache (paddle.Tensor): Convolution cache in the conformer layer,
        (1, #batch=1, size, cache_t2). The first dim is not used; it exists
        only for dy2st.

Returns:
    paddle.Tensor: Output tensor (#batch, time, size).
    paddle.Tensor: Mask tensor (#batch, time, time).
    paddle.Tensor: att_cache tensor, (#batch=1, head, cache_t1 + time, d_k * 2).
    paddle.Tensor: cnn_cache tensor, (#batch, size, cache_t2).
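A minimal construction-and-call sketch. The submodule import paths (paddlespeech.s2t.modules.attention, paddlespeech.s2t.modules.conformer_convolution, paddlespeech.s2t.modules.positionwise_feed_forward) and the ConvolutionModule defaults are assumptions based on the package's WeNet-style layout, not taken from this page:

    import paddle
    from paddlespeech.s2t.modules.attention import RelPositionMultiHeadedAttention
    from paddlespeech.s2t.modules.conformer_convolution import ConvolutionModule
    from paddlespeech.s2t.modules.encoder_layer import ConformerEncoderLayer
    from paddlespeech.s2t.modules.positionwise_feed_forward import PositionwiseFeedForward

    size, n_head = 256, 4
    layer = ConformerEncoderLayer(
        size=size,
        self_attn=RelPositionMultiHeadedAttention(n_head, size, 0.1),
        feed_forward=PositionwiseFeedForward(size, 1024, 0.1),
        feed_forward_macaron=PositionwiseFeedForward(size, 1024, 0.1),
        conv_module=ConvolutionModule(size),  # assumed default kernel/norm settings
        dropout_rate=0.1,
    )

    x = paddle.randn([2, 50, size])                # (#batch, time, size)
    mask = paddle.ones([2, 50, 50], dtype='bool')  # all True: no padding
    pos_emb = paddle.randn([1, 50, size])          # normally produced by RelPositionalEncoding
    out, out_mask, att_cache, cnn_cache = layer(x, mask, pos_emb)

Leaving mask_pad, att_cache, and cnn_cache at their empty defaults treats the batch as unpadded, non-streaming input; the returned caches are what a streaming decoder would feed back on the next chunk.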

class paddlespeech.s2t.modules.encoder_layer.SqueezeformerEncoderLayer(size: int, self_attn: Layer, feed_forward1: Optional[Layer] = None, conv_module: Optional[Layer] = None, feed_forward2: Optional[Layer] = None, normalize_before: bool = False, dropout_rate: float = 0.1, concat_after: bool = False)[source]

Bases: Layer

Encoder layer module.

Methods

__call__(*inputs, **kwargs)

Call self as a function.

add_parameter(name, parameter)

Adds a Parameter instance.

add_sublayer(name, sublayer)

Adds a sub Layer instance.

apply(fn)

Applies fn recursively to every sublayer (as returned by .sublayers()) as well as self.

buffers([include_sublayers])

Returns a list of all buffers from current layer and its sub-layers.

children()

Returns an iterator over immediate children layers.

clear_gradients()

Clear the gradients of all parameters for this layer.

create_parameter(shape[, attr, dtype, ...])

Create parameters for this layer.

create_tensor([name, persistable, dtype])

Create Tensor for this layer.

create_variable([name, persistable, dtype])

Create Tensor for this layer.

eval()

Sets this Layer and all its sublayers to evaluation mode.

extra_repr()

Extra representation of this layer; you can provide a custom implementation in your own layer.

forward(x, mask, pos_emb[, mask_pad, ...])

Compute encoded features.

full_name()

Full name for this layer, composed of name_scope + "/" + MyLayer.__class__.__name__.

load_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

named_buffers([prefix, include_sublayers])

Returns an iterator over all buffers in the Layer, yielding tuple of name and Tensor.

named_children()

Returns an iterator over immediate children layers, yielding both the name of the layer as well as the layer itself.

named_parameters([prefix, include_sublayers])

Returns an iterator over all parameters in the Layer, yielding tuple of name and parameter.

named_sublayers([prefix, include_self, ...])

Returns an iterator over all sublayers in the Layer, yielding tuple of name and sublayer.

parameters([include_sublayers])

Returns a list of all Parameters from current layer and its sub-layers.

register_buffer(name, tensor[, persistable])

Registers a tensor as a buffer of the layer.

register_forward_post_hook(hook)

Register a forward post-hook for Layer.

register_forward_pre_hook(hook)

Register a forward pre-hook for Layer.

set_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

set_state_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

state_dict([destination, include_sublayers, ...])

Get all parameters and persistable buffers of current layer and its sub-layers.

sublayers([include_self])

Returns a list of sub layers.

to([device, dtype, blocking])

Cast the parameters and buffers of the Layer to the given device, dtype, and blocking setting.

to_static_state_dict([destination, ...])

Get all parameters and buffers of current layer and its sub-layers.

train()

Sets this Layer and all its sublayers to training mode.

backward

register_state_dict_hook

forward(x: paddle.Tensor, mask: paddle.Tensor, pos_emb: paddle.Tensor, mask_pad: paddle.Tensor = Tensor(shape=[0, 0, 0], dtype=bool), att_cache: paddle.Tensor = Tensor(shape=[0, 0, 0, 0], dtype=float32), cnn_cache: paddle.Tensor = Tensor(shape=[0, 0, 0, 0], dtype=float32)) → Tuple[Tensor, Tensor, Tensor, Tensor][source]

Compute encoded features.

Args:
    x (paddle.Tensor): Input tensor (#batch, time, size).
    mask (paddle.Tensor): Mask tensor for the input (#batch, time, time).
        Shape (0, 0, 0) means a fake mask.
    pos_emb (paddle.Tensor): Positional encoding; must not be None for
        ConformerEncoderLayer.
    mask_pad (paddle.Tensor): Batch padding mask used for the conv module,
        (#batch, 1, time). Shape (0, 0, 0) means a fake mask.
    att_cache (paddle.Tensor): Cache tensor of the KEY & VALUE,
        (#batch=1, head, cache_t1, d_k * 2), where head * d_k == size.
    cnn_cache (paddle.Tensor): Convolution cache in the conformer layer,
        (1, #batch=1, size, cache_t2). The first dim is not used; it exists
        only for dy2st.

Returns:
    paddle.Tensor: Output tensor (#batch, time, size).
    paddle.Tensor: Mask tensor (#batch, time, time).
    paddle.Tensor: att_cache tensor, (#batch=1, head, cache_t1 + time, d_k * 2).
    paddle.Tensor: cnn_cache tensor, (#batch, size, cache_t2).
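The cache arguments drive chunk-by-chunk streaming. A hypothetical sketch of the pattern, using identity stand-ins for the submodules (StubSelfAttn and StubConv are assumptions that only mirror the call interfaces the layer expects; a real attention module would also append each chunk's keys and values to att_cache):

    import paddle
    import paddle.nn as nn
    from paddlespeech.s2t.modules.encoder_layer import SqueezeformerEncoderLayer

    class StubSelfAttn(nn.Layer):
        def forward(self, query, key, value, mask, pos_emb, cache):
            return query, cache    # (output, new_att_cache)

    class StubConv(nn.Layer):
        def forward(self, x, mask_pad, cache):
            return x, cache        # (output, new_cnn_cache)

    size = 8
    layer = SqueezeformerEncoderLayer(
        size=size,
        self_attn=StubSelfAttn(),
        feed_forward1=nn.Linear(size, size),
        conv_module=StubConv(),
        feed_forward2=nn.Linear(size, size),
    )

    att_cache = paddle.zeros([0, 0, 0, 0])  # empty shape means "no history yet"
    cnn_cache = paddle.zeros([0, 0, 0, 0])
    for _ in range(3):                      # three fake 16-frame chunks; batch must be 1
        chunk = paddle.randn([1, 16, size])
        mask = paddle.ones([1, 16, 16], dtype='bool')
        pos_emb = paddle.randn([1, 16, size])
        out, _, att_cache, cnn_cache = layer(
            chunk, mask, pos_emb, att_cache=att_cache, cnn_cache=cnn_cache)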

class paddlespeech.s2t.modules.encoder_layer.TransformerEncoderLayer(size: int, self_attn: Layer, feed_forward: Layer, dropout_rate: float, normalize_before: bool = True, concat_after: bool = False)[source]

Bases: Layer

Encoder layer module.

Methods

__call__(*inputs, **kwargs)

Call self as a function.

add_parameter(name, parameter)

Adds a Parameter instance.

add_sublayer(name, sublayer)

Adds a sub Layer instance.

apply(fn)

Applies fn recursively to every sublayer (as returned by .sublayers()) as well as self.

buffers([include_sublayers])

Returns a list of all buffers from current layer and its sub-layers.

children()

Returns an iterator over immediate children layers.

clear_gradients()

Clear the gradients of all parameters for this layer.

create_parameter(shape[, attr, dtype, ...])

Create parameters for this layer.

create_tensor([name, persistable, dtype])

Create Tensor for this layer.

create_variable([name, persistable, dtype])

Create Tensor for this layer.

eval()

Sets this Layer and all its sublayers to evaluation mode.

extra_repr()

Extra representation of this layer; you can provide a custom implementation in your own layer.

forward(x, mask, pos_emb[, mask_pad, ...])

Compute encoded features.

full_name()

Full name for this layer, composed of name_scope + "/" + MyLayer.__class__.__name__.

load_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

named_buffers([prefix, include_sublayers])

Returns an iterator over all buffers in the Layer, yielding tuple of name and Tensor.

named_children()

Returns an iterator over immediate children layers, yielding both the name of the layer as well as the layer itself.

named_parameters([prefix, include_sublayers])

Returns an iterator over all parameters in the Layer, yielding tuple of name and parameter.

named_sublayers([prefix, include_self, ...])

Returns an iterator over all sublayers in the Layer, yielding tuple of name and sublayer.

parameters([include_sublayers])

Returns a list of all Parameters from current layer and its sub-layers.

register_buffer(name, tensor[, persistable])

Registers a tensor as a buffer of the layer.

register_forward_post_hook(hook)

Register a forward post-hook for Layer.

register_forward_pre_hook(hook)

Register a forward pre-hook for Layer.

set_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

set_state_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

state_dict([destination, include_sublayers, ...])

Get all parameters and persistable buffers of current layer and its sub-layers.

sublayers([include_self])

Returns a list of sub layers.

to([device, dtype, blocking])

Cast the parameters and buffers of the Layer to the given device, dtype, and blocking setting.

to_static_state_dict([destination, ...])

Get all parameters and buffers of current layer and its sub-layers.

train()

Sets this Layer and all its sublayers to training mode.

backward

register_state_dict_hook

forward(x: paddle.Tensor, mask: paddle.Tensor, pos_emb: paddle.Tensor, mask_pad: paddle.Tensor = Tensor(shape=[0, 0, 0], dtype=bool), att_cache: paddle.Tensor = Tensor(shape=[0, 0, 0, 0], dtype=float32), cnn_cache: paddle.Tensor = Tensor(shape=[0, 0, 0, 0], dtype=float32)) → Tuple[Tensor, Tensor, Tensor, Tensor][source]

Compute encoded features.

Args:
    x (paddle.Tensor): Input tensor (#batch, time, size).
    mask (paddle.Tensor): Mask tensor for the input (#batch, time, time).
        Shape (0, 0, 0) means a fake mask.
    pos_emb (paddle.Tensor): Not used here; kept only for interface
        compatibility with ConformerEncoderLayer.
    mask_pad (paddle.Tensor): Not used in the transformer layer; kept for a
        unified API with the conformer.
    att_cache (paddle.Tensor): Cache tensor of the KEY & VALUE,
        (#batch=1, head, cache_t1, d_k * 2), where head * d_k == size.
    cnn_cache (paddle.Tensor): Convolution cache from the conformer layer,
        (#batch=1, size, cache_t2). Not used here; kept for interface
        compatibility with ConformerEncoderLayer.

Returns:
    paddle.Tensor: Output tensor (#batch, time, size).
    paddle.Tensor: Mask tensor (#batch, time, time).
    paddle.Tensor: att_cache tensor, (#batch=1, head, cache_t1 + time, d_k * 2).
    paddle.Tensor: cnn_cache tensor, (#batch=1, size, cache_t2).
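A minimal sketch for the transformer variant. The attention and feed-forward import paths are assumptions based on the package layout, and pos_emb can be any placeholder tensor because this layer ignores it:

    import paddle
    from paddlespeech.s2t.modules.attention import MultiHeadedAttention
    from paddlespeech.s2t.modules.encoder_layer import TransformerEncoderLayer
    from paddlespeech.s2t.modules.positionwise_feed_forward import PositionwiseFeedForward

    size, n_head = 256, 4
    layer = TransformerEncoderLayer(
        size=size,
        self_attn=MultiHeadedAttention(n_head, size, 0.1),
        feed_forward=PositionwiseFeedForward(size, 1024, 0.1),
        dropout_rate=0.1,
    )

    x = paddle.randn([2, 50, size])                # (#batch, time, size)
    mask = paddle.ones([2, 50, 50], dtype='bool')  # (#batch, time, time)
    pos_emb = paddle.zeros([1, 50, size])          # ignored; interface compatibility only
    out, out_mask, att_cache, cnn_cache = layer(x, mask, pos_emb)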