paddlespeech.t2s.modules.transformer.attention module

Multi-Head Attention layer definition.

class paddlespeech.t2s.modules.transformer.attention.LegacyRelPositionMultiHeadedAttention(n_head, n_feat, dropout_rate, zero_triu=False)[source]

Bases: MultiHeadedAttention

Multi-Head Attention layer with relative position encoding (old version). Details can be found in https://github.com/espnet/espnet/pull/2816. Paper: https://arxiv.org/abs/1901.02860

Args:
n_head (int):

The number of heads.

n_feat (int):

The number of features.

dropout_rate (float):

Dropout rate.

zero_triu (bool):

Whether to zero the upper triangular part of attention matrix.
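A minimal construction sketch (the sizes below are illustrative, not taken from the docs; n_feat is assumed to be split evenly across heads, so it should be divisible by n_head):

    from paddlespeech.t2s.modules.transformer.attention import (
        LegacyRelPositionMultiHeadedAttention, )

    # illustrative sizes: 4 heads over 256 features -> 64 features per head
    attn = LegacyRelPositionMultiHeadedAttention(
        n_head=4, n_feat=256, dropout_rate=0.1, zero_triu=False)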

Methods

__call__(*inputs, **kwargs)

Call self as a function.

add_parameter(name, parameter)

Adds a Parameter instance.

add_sublayer(name, sublayer)

Adds a sub Layer instance.

apply(fn)

Applies fn recursively to every sublayer (as returned by .sublayers()) as well as self.

buffers([include_sublayers])

Returns a list of all buffers from current layer and its sub-layers.

children()

Returns an iterator over immediate children layers.

clear_gradients()

Clear the gradients of all parameters for this layer.

create_parameter(shape[, attr, dtype, ...])

Create parameters for this layer.

create_tensor([name, persistable, dtype])

Create Tensor for this layer.

create_variable([name, persistable, dtype])

Create Tensor for this layer.

eval()

Sets this Layer and all its sublayers to evaluation mode.

extra_repr()

Extra representation of this layer; you can provide a custom implementation in your own layer.

forward(query, key, value, pos_emb, mask)

Compute 'Scaled Dot Product Attention' with relative positional encoding.

forward_attention(value, scores[, mask])

Compute attention context vector.

forward_qkv(query, key, value)

Transform query, key and value.

full_name()

Full name for this layer, composed of name_scope + "/" + MyLayer.__class__.__name__

load_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

named_buffers([prefix, include_sublayers])

Returns an iterator over all buffers in the Layer, yielding tuple of name and Tensor.

named_children()

Returns an iterator over immediate children layers, yielding both the name of the layer as well as the layer itself.

named_parameters([prefix, include_sublayers])

Returns an iterator over all parameters in the Layer, yielding tuple of name and parameter.

named_sublayers([prefix, include_self, ...])

Returns an iterator over all sublayers in the Layer, yielding tuple of name and sublayer.

parameters([include_sublayers])

Returns a list of all Parameters from current layer and its sub-layers.

register_buffer(name, tensor[, persistable])

Registers a tensor as a buffer of the layer.

register_forward_post_hook(hook)

Register a forward post-hook for Layer.

register_forward_pre_hook(hook)

Register a forward pre-hook for Layer.

rel_shift(x)

Compute relative positional encoding.

set_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

set_state_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

state_dict([destination, include_sublayers, ...])

Get all parameters and persistable buffers of current layer and its sub-layers.

sublayers([include_self])

Returns a list of sub layers.

to([device, dtype, blocking])

Cast the parameters and buffers of the Layer to the given device, dtype and blocking.

to_static_state_dict([destination, ...])

Get all parameters and buffers of current layer and its sub-layers.

train()

Sets this Layer and all its sublayers to training mode.

backward

register_state_dict_hook

forward(query, key, value, pos_emb, mask)[source]

Compute 'Scaled Dot Product Attention' with rel. positional encoding.

Args:
query(Tensor):

Query tensor (#batch, time1, size).

key(Tensor):

Key tensor (#batch, time2, size).

value(Tensor):

Value tensor (#batch, time2, size).

pos_emb(Tensor):

Positional embedding tensor (#batch, time1, size).

mask(Tensor):

Mask tensor (#batch, 1, time2) or (#batch, time1, time2).

Returns:

Tensor: Output tensor (#batch, time1, d_model).
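A minimal self-attention call sketch. The tensors are random placeholders, and the mask convention (True marks valid frames) is an assumption carried over from the ESPnet code this layer derives from:

    import paddle
    from paddlespeech.t2s.modules.transformer.attention import (
        LegacyRelPositionMultiHeadedAttention, )

    attn = LegacyRelPositionMultiHeadedAttention(n_head=4, n_feat=256, dropout_rate=0.0)
    x = paddle.randn([2, 10, 256])                # (#batch, time1, size); self-attention, so time2 == time1
    pos_emb = paddle.randn([2, 10, 256])          # (#batch, time1, size) for the legacy layer
    mask = paddle.ones([2, 1, 10], dtype='bool')  # assumption: True marks valid frames
    out = attn(x, x, x, pos_emb, mask)
    print(out.shape)                              # [2, 10, 256], i.e. (#batch, time1, d_model)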

rel_shift(x)[source]

Compute relative positional encoding.

Args:
x(Tensor):

Input tensor (batch, head, time1, time2).

Returns:

Tensor: Output tensor.
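A shape-level sketch of rel_shift; in this legacy variant the output is expected to keep the input shape (an assumption based on the ESPnet implementation it mirrors):

    import paddle
    from paddlespeech.t2s.modules.transformer.attention import (
        LegacyRelPositionMultiHeadedAttention, )

    attn = LegacyRelPositionMultiHeadedAttention(n_head=4, n_feat=256, dropout_rate=0.0)
    scores = paddle.randn([2, 4, 10, 10])  # (batch, head, time1, time2)
    shifted = attn.rel_shift(scores)
    print(shifted.shape)                   # expected [2, 4, 10, 10]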

class paddlespeech.t2s.modules.transformer.attention.MultiHeadedAttention(n_head, n_feat, dropout_rate)[source]

Bases: Layer

Multi-Head Attention layer.

Args:
n_head (int):

The number of heads.

n_feat (int):

The number of features.

dropout_rate (float):

Dropout rate.
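A minimal construction sketch (illustrative sizes; n_feat is assumed to be divisible by n_head, giving n_feat // n_head features per head):

    from paddlespeech.t2s.modules.transformer.attention import MultiHeadedAttention

    # 4 heads over 256 features -> 64 features per head
    attn = MultiHeadedAttention(n_head=4, n_feat=256, dropout_rate=0.1)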

Methods

__call__(*inputs, **kwargs)

Call self as a function.

add_parameter(name, parameter)

Adds a Parameter instance.

add_sublayer(name, sublayer)

Adds a sub Layer instance.

apply(fn)

Applies fn recursively to every sublayer (as returned by .sublayers()) as well as self.

buffers([include_sublayers])

Returns a list of all buffers from current layer and its sub-layers.

children()

Returns an iterator over immediate children layers.

clear_gradients()

Clear the gradients of all parameters for this layer.

create_parameter(shape[, attr, dtype, ...])

Create parameters for this layer.

create_tensor([name, persistable, dtype])

Create Tensor for this layer.

create_variable([name, persistable, dtype])

Create Tensor for this layer.

eval()

Sets this Layer and all its sublayers to evaluation mode.

extra_repr()

Extra representation of this layer; you can provide a custom implementation in your own layer.

forward(query, key, value[, mask])

Compute scaled dot product attention.

forward_attention(value, scores[, mask])

Compute attention context vector.

forward_qkv(query, key, value)

Transform query, key and value.

full_name()

Full name for this layer, composed of name_scope + "/" + MyLayer.__class__.__name__

load_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

named_buffers([prefix, include_sublayers])

Returns an iterator over all buffers in the Layer, yielding tuple of name and Tensor.

named_children()

Returns an iterator over immediate children layers, yielding both the name of the layer as well as the layer itself.

named_parameters([prefix, include_sublayers])

Returns an iterator over all parameters in the Layer, yielding tuple of name and parameter.

named_sublayers([prefix, include_self, ...])

Returns an iterator over all sublayers in the Layer, yielding tuple of name and sublayer.

parameters([include_sublayers])

Returns a list of all Parameters from current layer and its sub-layers.

register_buffer(name, tensor[, persistable])

Registers a tensor as a buffer of the layer.

register_forward_post_hook(hook)

Register a forward post-hook for Layer.

register_forward_pre_hook(hook)

Register a forward pre-hook for Layer.

set_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

set_state_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

state_dict([destination, include_sublayers, ...])

Get all parameters and persistable buffers of current layer and its sub-layers.

sublayers([include_self])

Returns a list of sub layers.

to([device, dtype, blocking])

Cast the parameters and buffers of the Layer to the given device, dtype and blocking.

to_static_state_dict([destination, ...])

Get all parameters and buffers of current layer and its sub-layers.

train()

Sets this Layer and all its sublayers to training mode.

backward

register_state_dict_hook

forward(query, key, value, mask=None)[source]

Compute scaled dot product attention.

Args:
query(Tensor):

Query tensor (#batch, time1, size).

key(Tensor):

Key tensor (#batch, time2, size).

value(Tensor):

Value tensor (#batch, time2, size).

mask(Tensor, optional):

Mask tensor (#batch, 1, time2) or (#batch, time1, time2). (Default value = None)

Returns:

Tensor: Output tensor (#batch, time1, d_model).
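A minimal cross-attention call sketch with random placeholder tensors and no mask:

    import paddle
    from paddlespeech.t2s.modules.transformer.attention import MultiHeadedAttention

    attn = MultiHeadedAttention(n_head=4, n_feat=256, dropout_rate=0.0)
    q = paddle.randn([2, 5, 256])   # (#batch, time1, size)
    k = paddle.randn([2, 7, 256])   # (#batch, time2, size)
    v = paddle.randn([2, 7, 256])   # (#batch, time2, size)
    out = attn(q, k, v, mask=None)
    print(out.shape)                # [2, 5, 256], i.e. (#batch, time1, d_model)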

forward_attention(value, scores, mask=None)[source]

Compute attention context vector.

Args:
value(Tensor):

Transformed value (#batch, n_head, time2, d_k).

scores(Tensor):

Attention score (#batch, n_head, time1, time2).

mask(Tensor, optional):

Mask (#batch, 1, time2) or (#batch, time1, time2). (Default value = None)

Returns:

Tensor: Transformed value (#batch, time1, d_model) weighted by the attention score (#batch, time1, time2).
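A minimal sketch of calling forward_attention directly with already-projected heads (shapes follow the docstring above; d_k = n_feat // n_head = 64 under the illustrative sizes used here):

    import paddle
    from paddlespeech.t2s.modules.transformer.attention import MultiHeadedAttention

    attn = MultiHeadedAttention(n_head=4, n_feat=256, dropout_rate=0.0)
    value = paddle.randn([2, 4, 7, 64])   # (#batch, n_head, time2, d_k)
    scores = paddle.randn([2, 4, 5, 7])   # (#batch, n_head, time1, time2)
    ctx = attn.forward_attention(value, scores, mask=None)
    print(ctx.shape)                      # [2, 5, 256], i.e. (#batch, time1, d_model)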

forward_qkv(query, key, value)[source]

Transform query, key and value.

Args:
query(Tensor):

Query tensor (#batch, time1, size).

key(Tensor):

Key tensor (#batch, time2, size).

value(Tensor):

Value tensor (#batch, time2, size).

Returns:
Tensor:

Transformed query tensor (#batch, n_head, time1, d_k).

Tensor:

Transformed key tensor (#batch, n_head, time2, d_k).

Tensor:

Transformed value tensor (#batch, n_head, time2, d_k).
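A minimal sketch of the projection step; the per-head size d_k is assumed to be n_feat // n_head:

    import paddle
    from paddlespeech.t2s.modules.transformer.attention import MultiHeadedAttention

    attn = MultiHeadedAttention(n_head=4, n_feat=256, dropout_rate=0.0)
    q = paddle.randn([2, 5, 256])   # (#batch, time1, size)
    k = paddle.randn([2, 7, 256])   # (#batch, time2, size)
    v = paddle.randn([2, 7, 256])   # (#batch, time2, size)
    q_h, k_h, v_h = attn.forward_qkv(q, k, v)
    print(q_h.shape, k_h.shape, v_h.shape)  # [2, 4, 5, 64] [2, 4, 7, 64] [2, 4, 7, 64]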

class paddlespeech.t2s.modules.transformer.attention.RelPositionMultiHeadedAttention(n_head, n_feat, dropout_rate, zero_triu=False)[source]

Bases: MultiHeadedAttention

Multi-Head Attention layer with relative position encoding (new implementation). Details can be found in https://github.com/espnet/espnet/pull/2816. Paper: https://arxiv.org/abs/1901.02860

Args:
n_head (int):

The number of heads.

n_feat (int):

The number of features.

dropout_rate (float):

Dropout rate.

zero_triu (bool):

Whether to zero the upper triangular part of attention matrix.
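A minimal construction sketch (illustrative sizes; zero_triu is left at its default):

    from paddlespeech.t2s.modules.transformer.attention import (
        RelPositionMultiHeadedAttention, )

    attn = RelPositionMultiHeadedAttention(n_head=4, n_feat=256, dropout_rate=0.1)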

Methods

__call__(*inputs, **kwargs)

Call self as a function.

add_parameter(name, parameter)

Adds a Parameter instance.

add_sublayer(name, sublayer)

Adds a sub Layer instance.

apply(fn)

Applies fn recursively to every sublayer (as returned by .sublayers()) as well as self.

buffers([include_sublayers])

Returns a list of all buffers from current layer and its sub-layers.

children()

Returns an iterator over immediate children layers.

clear_gradients()

Clear the gradients of all parameters for this layer.

create_parameter(shape[, attr, dtype, ...])

Create parameters for this layer.

create_tensor([name, persistable, dtype])

Create Tensor for this layer.

create_variable([name, persistable, dtype])

Create Tensor for this layer.

eval()

Sets this Layer and all its sublayers to evaluation mode.

extra_repr()

Extra representation of this layer; you can provide a custom implementation in your own layer.

forward(query, key, value, pos_emb, mask)

Compute 'Scaled Dot Product Attention' with relative positional encoding.

forward_attention(value, scores[, mask])

Compute attention context vector.

forward_qkv(query, key, value)

Transform query, key and value.

full_name()

Full name for this layer, composed of name_scope + "/" + MyLayer.__class__.__name__

load_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

named_buffers([prefix, include_sublayers])

Returns an iterator over all buffers in the Layer, yielding tuple of name and Tensor.

named_children()

Returns an iterator over immediate children layers, yielding both the name of the layer as well as the layer itself.

named_parameters([prefix, include_sublayers])

Returns an iterator over all parameters in the Layer, yielding tuple of name and parameter.

named_sublayers([prefix, include_self, ...])

Returns an iterator over all sublayers in the Layer, yielding tuple of name and sublayer.

parameters([include_sublayers])

Returns a list of all Parameters from current layer and its sub-layers.

register_buffer(name, tensor[, persistable])

Registers a tensor as a buffer of the layer.

register_forward_post_hook(hook)

Register a forward post-hook for Layer.

register_forward_pre_hook(hook)

Register a forward pre-hook for Layer.

rel_shift(x)

Compute relative positional encoding.

set_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

set_state_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

state_dict([destination, include_sublayers, ...])

Get all parameters and persistable buffers of current layer and its sub-layers.

sublayers([include_self])

Returns a list of sub layers.

to([device, dtype, blocking])

Cast the parameters and buffers of the Layer to the given device, dtype and blocking.

to_static_state_dict([destination, ...])

Get all parameters and buffers of current layer and its sub-layers.

train()

Sets this Layer and all its sublayers to training mode.

backward

register_state_dict_hook

forward(query, key, value, pos_emb, mask)[source]

Compute 'Scaled Dot Product Attention' with rel. positional encoding.

Args:
query(Tensor):

Query tensor (#batch, time1, size).

key(Tensor):

Key tensor (#batch, time2, size).

value(Tensor):

Value tensor (#batch, time2, size).

pos_emb(Tensor):

Positional embedding tensor (#batch, 2*time1-1, size).

mask(Tensor):

Mask tensor (#batch, 1, time2) or (#batch, time1, time2).

Returns:

Tensor: Output tensor (#batch, time1, d_model).
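A minimal self-attention call sketch. Note that the positional embedding covers 2*time1-1 relative offsets; the tensors are random placeholders and the mask convention (True marks valid frames) is an assumption carried over from the ESPnet code this layer derives from:

    import paddle
    from paddlespeech.t2s.modules.transformer.attention import (
        RelPositionMultiHeadedAttention, )

    attn = RelPositionMultiHeadedAttention(n_head=4, n_feat=256, dropout_rate=0.0)
    x = paddle.randn([2, 10, 256])                 # (#batch, time1, size); self-attention
    pos_emb = paddle.randn([2, 2 * 10 - 1, 256])   # (#batch, 2*time1-1, size)
    mask = paddle.ones([2, 1, 10], dtype='bool')   # assumption: True marks valid frames
    out = attn(x, x, x, pos_emb, mask)
    print(out.shape)                               # [2, 10, 256], i.e. (#batch, time1, d_model)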

rel_shift(x)[source]

Compute relative positional encoding.

Args:
x(Tensor):

Input tensor (batch, head, time1, 2*time1-1).

Returns:

Tensor: Output tensor.
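A shape-level sketch of rel_shift for this variant; the input carries 2*time1-1 relative positions and, in the ESPnet-style implementation this module follows, the last axis is expected to be trimmed back to time1 (an assumption, since the docstring does not state the output shape):

    import paddle
    from paddlespeech.t2s.modules.transformer.attention import (
        RelPositionMultiHeadedAttention, )

    attn = RelPositionMultiHeadedAttention(n_head=4, n_feat=256, dropout_rate=0.0)
    scores = paddle.randn([2, 4, 10, 19])  # (batch, head, time1, 2*time1-1)
    shifted = attn.rel_shift(scores)
    print(shifted.shape)                   # expected [2, 4, 10, 10]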