paddlespeech.t2s.modules.transformer.encoder module
- class paddlespeech.t2s.modules.transformer.encoder.BaseEncoder(idim: int, attention_dim: int = 256, attention_heads: int = 4, linear_units: int = 2048, num_blocks: int = 6, dropout_rate: float = 0.1, positional_dropout_rate: float = 0.1, attention_dropout_rate: float = 0.0, input_layer: str = 'conv2d', normalize_before: bool = True, concat_after: bool = False, positionwise_layer_type: str = 'linear', positionwise_conv_kernel_size: int = 1, macaron_style: bool = False, pos_enc_layer_type: str = 'abs_pos', selfattention_layer_type: str = 'selfattn', activation_type: str = 'swish', use_cnn_module: bool = False, zero_triu: bool = False, cnn_module_kernel: int = 31, padding_idx: int = -1, stochastic_depth_rate: float = 0.0, intermediate_layers: Optional[List[int]] = None, encoder_type: str = 'transformer')[source]
Bases: Layer
Base Encoder module.
- Args:
- idim (int):
Input dimension.
- attention_dim (int):
Dimension of attention.
- attention_heads (int):
The number of heads of multi head attention.
- linear_units (int):
The number of units of position-wise feed forward.
- num_blocks (int):
The number of encoder blocks.
- dropout_rate (float):
Dropout rate.
- positional_dropout_rate (float):
Dropout rate after adding positional encoding.
- attention_dropout_rate (float):
Dropout rate in attention.
- input_layer (Union[str, nn.Layer]):
Input layer type.
- normalize_before (bool):
Whether to use layer_norm before the first block.
- concat_after (bool):
Whether to concatenate the attention layer's input and output. If True, an additional linear layer is applied, i.e. x -> x + linear(concat(x, att(x))); if False, no additional linear layer is applied, i.e. x -> x + att(x).
- positionwise_layer_type (str):
"linear", "conv1d", or "conv1d-linear".
- positionwise_conv_kernel_size (int):
Kernel size of positionwise conv1d layer.
- macaron_style (bool):
Whether to use macaron style for positionwise layer.
- pos_enc_layer_type (str):
Encoder positional encoding layer type.
- selfattention_layer_type (str):
Encoder attention layer type.
- activation_type (str):
Encoder activation function type.
- use_cnn_module (bool):
Whether to use convolution module.
- zero_triu (bool):
Whether to zero the upper triangular part of attention matrix.
- cnn_module_kernel (int):
Kernel size of the convolution module.
- padding_idx (int):
Padding idx for input_layer=embed.
- stochastic_depth_rate (float):
Maximum probability to skip the encoder layer.
- intermediate_layers (Union[List[int], None]):
Indices of intermediate CTC layers (indices start from 1). If not None, intermediate outputs are returned, which changes the return type signature.
- encoder_type (str):
"transformer" or "conformer".
Methods
- __call__(*inputs, **kwargs): Call self as a function.
- add_parameter(name, parameter): Adds a Parameter instance.
- add_sublayer(name, sublayer): Adds a sub Layer instance.
- apply(fn): Applies fn recursively to every sublayer (as returned by .sublayers()) as well as self.
- buffers([include_sublayers]): Returns a list of all buffers from current layer and its sub-layers.
- children(): Returns an iterator over immediate children layers.
- clear_gradients(): Clear the gradients of all parameters for this layer.
- create_parameter(shape[, attr, dtype, ...]): Create parameters for this layer.
- create_tensor([name, persistable, dtype]): Create Tensor for this layer.
- create_variable([name, persistable, dtype]): Create Tensor for this layer.
- eval(): Sets this Layer and all its sublayers to evaluation mode.
- extra_repr(): Extra representation of this layer; you can provide a custom implementation for your own layer.
- forward(xs, masks): Encode input sequence.
- full_name(): Full name for this layer, composed of name_scope + "/" + MyLayer.__class__.__name__.
- get_positionwise_layer([...]): Define positionwise layer.
- load_dict(state_dict[, use_structured_name]): Set parameters and persistable buffers from state_dict.
- named_buffers([prefix, include_sublayers]): Returns an iterator over all buffers in the Layer, yielding tuples of name and Tensor.
- named_children(): Returns an iterator over immediate children layers, yielding both the name of the layer and the layer itself.
- named_parameters([prefix, include_sublayers]): Returns an iterator over all parameters in the Layer, yielding tuples of name and parameter.
- named_sublayers([prefix, include_self, ...]): Returns an iterator over all sublayers in the Layer, yielding tuples of name and sublayer.
- parameters([include_sublayers]): Returns a list of all Parameters from current layer and its sub-layers.
- register_buffer(name, tensor[, persistable]): Registers a tensor as a buffer of the layer.
- register_forward_post_hook(hook): Register a forward post-hook for Layer.
- register_forward_pre_hook(hook): Register a forward pre-hook for Layer.
- set_dict(state_dict[, use_structured_name]): Set parameters and persistable buffers from state_dict.
- set_state_dict(state_dict[, use_structured_name]): Set parameters and persistable buffers from state_dict.
- state_dict([destination, include_sublayers, ...]): Get all parameters and persistable buffers of current layer and its sub-layers.
- sublayers([include_self]): Returns a list of sub layers.
- to([device, dtype, blocking]): Cast the parameters and buffers of Layer by the given device, dtype and blocking.
- to_static_state_dict([destination, ...]): Get all parameters and buffers of current layer and its sub-layers.
- train(): Sets this Layer and all its sublayers to training mode.
- backward
- get_embed
- get_encoder_selfattn_layer
- get_pos_enc_class
- register_state_dict_hook
- forward(xs, masks)[source]
Encode input sequence.
- Args:
- xs (Tensor):
Input tensor (#batch, time, idim).
- masks (Tensor):
Mask tensor (#batch, 1, time).
- Returns:
- Tensor:
Output tensor (#batch, time, attention_dim).
- Tensor:
Mask tensor (#batch, 1, time).
- get_embed(idim, input_layer='conv2d', attention_dim: int = 256, pos_enc_class=<class 'paddlespeech.t2s.modules.transformer.embedding.PositionalEncoding'>, dropout_rate: float = 0.1, positional_dropout_rate: float = 0.1, padding_idx: int = -1)[source]
- get_encoder_selfattn_layer(selfattention_layer_type: str = 'selfattn', attention_heads: int = 4, attention_dim: int = 256, attention_dropout_rate: float = 0.0, zero_triu: bool = False, pos_enc_layer_type: str = 'abs_pos')[source]
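Although BaseEncoder is primarily a base class, its encoder_type argument selects between the transformer and conformer block implementations, so it can be exercised directly. A minimal sketch, assuming illustrative shapes and the 'linear' input-layer option (chosen here, as an assumption, to keep the time axis unsubsampled; 'conv2d' subsamples it):

```python
import paddle

from paddlespeech.t2s.modules.transformer.encoder import BaseEncoder

batch, time, idim = 2, 50, 80  # hypothetical sizes
xs = paddle.randn([batch, time, idim])               # (#batch, time, idim)
masks = paddle.ones([batch, 1, time], dtype='bool')  # (#batch, 1, time), all frames valid

# input_layer='linear' is an assumption made so input and output time match.
encoder = BaseEncoder(idim=idim, attention_dim=256, attention_heads=4,
                      num_blocks=6, input_layer='linear',
                      encoder_type='transformer')
out, out_masks = encoder(xs, masks)
print(out.shape)  # expected: [2, 50, 256], i.e. (#batch, time, attention_dim)
```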
- class paddlespeech.t2s.modules.transformer.encoder.CNNDecoder(emb_dim: int = 256, odim: int = 80, kernel_size: int = 5, dropout_rate: float = 0.2, resblock_kernel_sizes: List[int] = [256, 256])[source]
Bases: Layer
A much simplified decoder compared with the original decoder that uses Prenet (a usage sketch follows the method list).
Methods
- __call__(*inputs, **kwargs): Call self as a function.
- add_parameter(name, parameter): Adds a Parameter instance.
- add_sublayer(name, sublayer): Adds a sub Layer instance.
- apply(fn): Applies fn recursively to every sublayer (as returned by .sublayers()) as well as self.
- buffers([include_sublayers]): Returns a list of all buffers from current layer and its sub-layers.
- children(): Returns an iterator over immediate children layers.
- clear_gradients(): Clear the gradients of all parameters for this layer.
- create_parameter(shape[, attr, dtype, ...]): Create parameters for this layer.
- create_tensor([name, persistable, dtype]): Create Tensor for this layer.
- create_variable([name, persistable, dtype]): Create Tensor for this layer.
- eval(): Sets this Layer and all its sublayers to evaluation mode.
- extra_repr(): Extra representation of this layer; you can provide a custom implementation for your own layer.
- forward(xs[, masks]): Encode input sequence; maps an input tensor xs (#batch, time, idim) and an optional mask tensor masks (#batch, 1, time) to an output tensor (#batch, time, odim) (see the usage sketch after this list).
- full_name(): Full name for this layer, composed of name_scope + "/" + MyLayer.__class__.__name__.
- load_dict(state_dict[, use_structured_name]): Set parameters and persistable buffers from state_dict.
- named_buffers([prefix, include_sublayers]): Returns an iterator over all buffers in the Layer, yielding tuples of name and Tensor.
- named_children(): Returns an iterator over immediate children layers, yielding both the name of the layer and the layer itself.
- named_parameters([prefix, include_sublayers]): Returns an iterator over all parameters in the Layer, yielding tuples of name and parameter.
- named_sublayers([prefix, include_self, ...]): Returns an iterator over all sublayers in the Layer, yielding tuples of name and sublayer.
- parameters([include_sublayers]): Returns a list of all Parameters from current layer and its sub-layers.
- register_buffer(name, tensor[, persistable]): Registers a tensor as a buffer of the layer.
- register_forward_post_hook(hook): Register a forward post-hook for Layer.
- register_forward_pre_hook(hook): Register a forward pre-hook for Layer.
- set_dict(state_dict[, use_structured_name]): Set parameters and persistable buffers from state_dict.
- set_state_dict(state_dict[, use_structured_name]): Set parameters and persistable buffers from state_dict.
- state_dict([destination, include_sublayers, ...]): Get all parameters and persistable buffers of current layer and its sub-layers.
- sublayers([include_self]): Returns a list of sub layers.
- to([device, dtype, blocking]): Cast the parameters and buffers of Layer by the given device, dtype and blocking.
- to_static_state_dict([destination, ...]): Get all parameters and buffers of current layer and its sub-layers.
- train(): Sets this Layer and all its sublayers to training mode.
- backward
- register_state_dict_hook
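A minimal sketch of driving the simplified decoder, assuming hidden states in the documented (#batch, time, emb_dim) layout (the shapes and the all-ones float mask are illustrative assumptions):

```python
import paddle

from paddlespeech.t2s.modules.transformer.encoder import CNNDecoder

batch, time = 2, 50                    # hypothetical sizes
hs = paddle.randn([batch, time, 256])  # (#batch, time, emb_dim)
masks = paddle.ones([batch, 1, time])  # (#batch, 1, time), assumed float mask

decoder = CNNDecoder(emb_dim=256, odim=80)
out = decoder(hs, masks)
print(out.shape)  # expected: [2, 50, 80], i.e. (#batch, time, odim)
```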
- class paddlespeech.t2s.modules.transformer.encoder.CNNPostnet(odim: int = 80, kernel_size: int = 5, dropout_rate: float = 0.2, resblock_kernel_sizes: List[int] = [256, 256])[source]
Bases: Layer
Methods
- __call__(*inputs, **kwargs): Call self as a function.
- add_parameter(name, parameter): Adds a Parameter instance.
- add_sublayer(name, sublayer): Adds a sub Layer instance.
- apply(fn): Applies fn recursively to every sublayer (as returned by .sublayers()) as well as self.
- buffers([include_sublayers]): Returns a list of all buffers from current layer and its sub-layers.
- children(): Returns an iterator over immediate children layers.
- clear_gradients(): Clear the gradients of all parameters for this layer.
- create_parameter(shape[, attr, dtype, ...]): Create parameters for this layer.
- create_tensor([name, persistable, dtype]): Create Tensor for this layer.
- create_variable([name, persistable, dtype]): Create Tensor for this layer.
- eval(): Sets this Layer and all its sublayers to evaluation mode.
- extra_repr(): Extra representation of this layer; you can provide a custom implementation for your own layer.
- forward(xs[, masks]): Encode input sequence; maps an input tensor xs (#batch, odim, time) and an optional mask tensor masks (#batch, 1, time) to an output tensor (#batch, odim, time) (see the usage sketch after this list).
- full_name(): Full name for this layer, composed of name_scope + "/" + MyLayer.__class__.__name__.
- load_dict(state_dict[, use_structured_name]): Set parameters and persistable buffers from state_dict.
- named_buffers([prefix, include_sublayers]): Returns an iterator over all buffers in the Layer, yielding tuples of name and Tensor.
- named_children(): Returns an iterator over immediate children layers, yielding both the name of the layer and the layer itself.
- named_parameters([prefix, include_sublayers]): Returns an iterator over all parameters in the Layer, yielding tuples of name and parameter.
- named_sublayers([prefix, include_self, ...]): Returns an iterator over all sublayers in the Layer, yielding tuples of name and sublayer.
- parameters([include_sublayers]): Returns a list of all Parameters from current layer and its sub-layers.
- register_buffer(name, tensor[, persistable]): Registers a tensor as a buffer of the layer.
- register_forward_post_hook(hook): Register a forward post-hook for Layer.
- register_forward_pre_hook(hook): Register a forward pre-hook for Layer.
- set_dict(state_dict[, use_structured_name]): Set parameters and persistable buffers from state_dict.
- set_state_dict(state_dict[, use_structured_name]): Set parameters and persistable buffers from state_dict.
- state_dict([destination, include_sublayers, ...]): Get all parameters and persistable buffers of current layer and its sub-layers.
- sublayers([include_self]): Returns a list of sub layers.
- to([device, dtype, blocking]): Cast the parameters and buffers of Layer by the given device, dtype and blocking.
- to_static_state_dict([destination, ...]): Get all parameters and buffers of current layer and its sub-layers.
- train(): Sets this Layer and all its sublayers to training mode.
- backward
- register_state_dict_hook
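Note the channel-first layout: unlike CNNDecoder, this module's forward takes (#batch, odim, time). A minimal sketch with assumed shapes:

```python
import paddle

from paddlespeech.t2s.modules.transformer.encoder import CNNPostnet

batch, time, odim = 2, 50, 80           # hypothetical sizes
xs = paddle.randn([batch, odim, time])  # (#batch, odim, time), channel-first
masks = paddle.ones([batch, 1, time])   # (#batch, 1, time)

postnet = CNNPostnet(odim=odim, kernel_size=5)
out = postnet(xs, masks)
print(out.shape)  # expected: [2, 80, 50], same (#batch, odim, time) layout
```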
- class paddlespeech.t2s.modules.transformer.encoder.ConformerEncoder(idim: int, attention_dim: int = 256, attention_heads: int = 4, linear_units: int = 2048, num_blocks: int = 6, dropout_rate: float = 0.1, positional_dropout_rate: float = 0.1, attention_dropout_rate: float = 0.0, input_layer: str = 'conv2d', normalize_before: bool = True, concat_after: bool = False, positionwise_layer_type: str = 'linear', positionwise_conv_kernel_size: int = 1, macaron_style: bool = False, pos_enc_layer_type: str = 'rel_pos', selfattention_layer_type: str = 'rel_selfattn', activation_type: str = 'swish', use_cnn_module: bool = False, zero_triu: bool = False, cnn_module_kernel: int = 31, padding_idx: int = -1, stochastic_depth_rate: float = 0.0, intermediate_layers: Optional[List[int]] = None)[source]
Bases: BaseEncoder
Conformer encoder module.
- Args:
- idim (int):
Input dimension.
- attention_dim (int):
Dimension of attention.
- attention_heads (int):
The number of heads of multi head attention.
- linear_units (int):
The number of units of position-wise feed forward.
- num_blocks (int):
The number of encoder blocks.
- dropout_rate (float):
Dropout rate.
- positional_dropout_rate (float):
Dropout rate after adding positional encoding.
- attention_dropout_rate (float):
Dropout rate in attention.
- input_layer (Union[str, nn.Layer]):
Input layer type.
- normalize_before (bool):
Whether to use layer_norm before the first block.
- concat_after (bool):
Whether to concatenate the attention layer's input and output. If True, an additional linear layer is applied, i.e. x -> x + linear(concat(x, att(x))); if False, no additional linear layer is applied, i.e. x -> x + att(x).
- positionwise_layer_type (str):
"linear", "conv1d", or "conv1d-linear".
- positionwise_conv_kernel_size (int):
Kernel size of positionwise conv1d layer.
- macaron_style (bool):
Whether to use macaron style for positionwise layer.
- pos_enc_layer_type (str):
Encoder positional encoding layer type.
- selfattention_layer_type (str):
Encoder attention layer type.
- activation_type (str):
Encoder activation function type.
- use_cnn_module (bool):
Whether to use convolution module.
- zero_triu (bool):
Whether to zero the upper triangular part of attention matrix.
- cnn_module_kernel (int):
Kernel size of the convolution module.
- padding_idx (int):
Padding idx for input_layer=embed.
- stochastic_depth_rate (float):
Maximum probability to skip the encoder layer.
- intermediate_layers (Union[List[int], None]):
Indices of intermediate CTC layers (indices start from 1). If not None, intermediate outputs are returned, which changes the return type signature.
Methods
- __call__(*inputs, **kwargs): Call self as a function.
- add_parameter(name, parameter): Adds a Parameter instance.
- add_sublayer(name, sublayer): Adds a sub Layer instance.
- apply(fn): Applies fn recursively to every sublayer (as returned by .sublayers()) as well as self.
- buffers([include_sublayers]): Returns a list of all buffers from current layer and its sub-layers.
- children(): Returns an iterator over immediate children layers.
- clear_gradients(): Clear the gradients of all parameters for this layer.
- create_parameter(shape[, attr, dtype, ...]): Create parameters for this layer.
- create_tensor([name, persistable, dtype]): Create Tensor for this layer.
- create_variable([name, persistable, dtype]): Create Tensor for this layer.
- eval(): Sets this Layer and all its sublayers to evaluation mode.
- extra_repr(): Extra representation of this layer; you can provide a custom implementation for your own layer.
- forward(xs, masks): Encode input sequence (a usage sketch follows this list).
- full_name(): Full name for this layer, composed of name_scope + "/" + MyLayer.__class__.__name__.
- get_positionwise_layer([...]): Define positionwise layer.
- load_dict(state_dict[, use_structured_name]): Set parameters and persistable buffers from state_dict.
- named_buffers([prefix, include_sublayers]): Returns an iterator over all buffers in the Layer, yielding tuples of name and Tensor.
- named_children(): Returns an iterator over immediate children layers, yielding both the name of the layer and the layer itself.
- named_parameters([prefix, include_sublayers]): Returns an iterator over all parameters in the Layer, yielding tuples of name and parameter.
- named_sublayers([prefix, include_self, ...]): Returns an iterator over all sublayers in the Layer, yielding tuples of name and sublayer.
- parameters([include_sublayers]): Returns a list of all Parameters from current layer and its sub-layers.
- register_buffer(name, tensor[, persistable]): Registers a tensor as a buffer of the layer.
- register_forward_post_hook(hook): Register a forward post-hook for Layer.
- register_forward_pre_hook(hook): Register a forward pre-hook for Layer.
- set_dict(state_dict[, use_structured_name]): Set parameters and persistable buffers from state_dict.
- set_state_dict(state_dict[, use_structured_name]): Set parameters and persistable buffers from state_dict.
- state_dict([destination, include_sublayers, ...]): Get all parameters and persistable buffers of current layer and its sub-layers.
- sublayers([include_self]): Returns a list of sub layers.
- to([device, dtype, blocking]): Cast the parameters and buffers of Layer by the given device, dtype and blocking.
- to_static_state_dict([destination, ...]): Get all parameters and buffers of current layer and its sub-layers.
- train(): Sets this Layer and all its sublayers to training mode.
- backward
- get_embed
- get_encoder_selfattn_layer
- get_pos_enc_class
- register_state_dict_hook
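A construction sketch that turns on the conformer-specific switches (macaron-style feed-forward and the convolution module); the shapes and the 'linear' input layer are assumptions for illustration:

```python
import paddle

from paddlespeech.t2s.modules.transformer.encoder import ConformerEncoder

batch, time, idim = 2, 50, 80  # hypothetical sizes
xs = paddle.randn([batch, time, idim])
masks = paddle.ones([batch, 1, time], dtype='bool')

encoder = ConformerEncoder(
    idim=idim,
    attention_dim=256,
    attention_heads=4,
    num_blocks=6,
    input_layer='linear',          # assumed; avoids conv2d time subsampling
    macaron_style=True,            # half-step feed-forward before and after attention
    use_cnn_module=True,           # enable the conformer convolution module
    cnn_module_kernel=31,
    pos_enc_layer_type='rel_pos',  # relative positional encoding (the default)
    selfattention_layer_type='rel_selfattn')
out, out_masks = encoder(xs, masks)
print(out.shape)  # expected: [2, 50, 256], i.e. (#batch, time, attention_dim)
```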
- class paddlespeech.t2s.modules.transformer.encoder.Conv1dResidualBlock(idim: int = 256, odim: int = 256, kernel_size: int = 5, dropout_rate: float = 0.2)[source]
Bases: Layer
A special module for the simplified version of the Encoder class.
Methods
- __call__(*inputs, **kwargs): Call self as a function.
- add_parameter(name, parameter): Adds a Parameter instance.
- add_sublayer(name, sublayer): Adds a sub Layer instance.
- apply(fn): Applies fn recursively to every sublayer (as returned by .sublayers()) as well as self.
- buffers([include_sublayers]): Returns a list of all buffers from current layer and its sub-layers.
- children(): Returns an iterator over immediate children layers.
- clear_gradients(): Clear the gradients of all parameters for this layer.
- create_parameter(shape[, attr, dtype, ...]): Create parameters for this layer.
- create_tensor([name, persistable, dtype]): Create Tensor for this layer.
- create_variable([name, persistable, dtype]): Create Tensor for this layer.
- eval(): Sets this Layer and all its sublayers to evaluation mode.
- extra_repr(): Extra representation of this layer; you can provide a custom implementation for your own layer.
- forward(xs): Encode input sequence; maps an input tensor (#batch, idim, T) to an output tensor (#batch, odim, T) (see the usage sketch after this list).
- full_name(): Full name for this layer, composed of name_scope + "/" + MyLayer.__class__.__name__.
- load_dict(state_dict[, use_structured_name]): Set parameters and persistable buffers from state_dict.
- named_buffers([prefix, include_sublayers]): Returns an iterator over all buffers in the Layer, yielding tuples of name and Tensor.
- named_children(): Returns an iterator over immediate children layers, yielding both the name of the layer and the layer itself.
- named_parameters([prefix, include_sublayers]): Returns an iterator over all parameters in the Layer, yielding tuples of name and parameter.
- named_sublayers([prefix, include_self, ...]): Returns an iterator over all sublayers in the Layer, yielding tuples of name and sublayer.
- parameters([include_sublayers]): Returns a list of all Parameters from current layer and its sub-layers.
- register_buffer(name, tensor[, persistable]): Registers a tensor as a buffer of the layer.
- register_forward_post_hook(hook): Register a forward post-hook for Layer.
- register_forward_pre_hook(hook): Register a forward pre-hook for Layer.
- set_dict(state_dict[, use_structured_name]): Set parameters and persistable buffers from state_dict.
- set_state_dict(state_dict[, use_structured_name]): Set parameters and persistable buffers from state_dict.
- state_dict([destination, include_sublayers, ...]): Get all parameters and persistable buffers of current layer and its sub-layers.
- sublayers([include_self]): Returns a list of sub layers.
- to([device, dtype, blocking]): Cast the parameters and buffers of Layer by the given device, dtype and blocking.
- to_static_state_dict([destination, ...]): Get all parameters and buffers of current layer and its sub-layers.
- train(): Sets this Layer and all its sublayers to training mode.
- backward
- register_state_dict_hook
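Like CNNPostnet, this block is channel-first. A minimal sketch with assumed shapes:

```python
import paddle

from paddlespeech.t2s.modules.transformer.encoder import Conv1dResidualBlock

block = Conv1dResidualBlock(idim=256, odim=256, kernel_size=5)
xs = paddle.randn([2, 256, 50])  # (#batch, idim, T), hypothetical sizes
out = block(xs)
print(out.shape)  # expected: [2, 256, 50], i.e. (#batch, odim, T)
```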
- class paddlespeech.t2s.modules.transformer.encoder.TransformerEncoder(idim, attention_dim: int = 256, attention_heads: int = 4, linear_units: int = 2048, num_blocks: int = 6, dropout_rate: float = 0.1, positional_dropout_rate: float = 0.1, attention_dropout_rate: float = 0.0, input_layer: str = 'conv2d', pos_enc_layer_type: str = 'abs_pos', normalize_before: bool = True, concat_after: bool = False, positionwise_layer_type: str = 'linear', positionwise_conv_kernel_size: int = 1, selfattention_layer_type: str = 'selfattn', activation_type: str = 'relu', padding_idx: int = -1)[source]
Bases: BaseEncoder
Transformer encoder module.
- Args:
- idim (int):
Input dimension.
- attention_dim (int):
Dimension of attention.
- attention_heads (int):
The number of heads of multi head attention.
- linear_units (int):
The number of units of position-wise feed forward.
- num_blocks (int):
The number of encoder blocks.
- dropout_rate (float):
Dropout rate.
- positional_dropout_rate (float):
Dropout rate after adding positional encoding.
- attention_dropout_rate (float):
Dropout rate in attention.
- input_layer (Union[str, paddle.nn.Layer]):
Input layer type.
- pos_enc_layer_type (str):
Encoder positional encoding layer type.
- normalize_before (bool):
Whether to use layer_norm before the first block.
- concat_after (bool):
Whether to concatenate the attention layer's input and output. If True, an additional linear layer is applied, i.e. x -> x + linear(concat(x, att(x))); if False, no additional linear layer is applied, i.e. x -> x + att(x).
- positionwise_layer_type (str):
"linear", "conv1d", or "conv1d-linear".
- positionwise_conv_kernel_size (int):
Kernel size of positionwise conv1d layer.
- selfattention_layer_type (str):
Encoder attention layer type.
- activation_type (str):
Encoder activation function type.
- padding_idx (int):
Padding idx for input_layer=embed.
Methods
- __call__(*inputs, **kwargs): Call self as a function.
- add_parameter(name, parameter): Adds a Parameter instance.
- add_sublayer(name, sublayer): Adds a sub Layer instance.
- apply(fn): Applies fn recursively to every sublayer (as returned by .sublayers()) as well as self.
- buffers([include_sublayers]): Returns a list of all buffers from current layer and its sub-layers.
- children(): Returns an iterator over immediate children layers.
- clear_gradients(): Clear the gradients of all parameters for this layer.
- create_parameter(shape[, attr, dtype, ...]): Create parameters for this layer.
- create_tensor([name, persistable, dtype]): Create Tensor for this layer.
- create_variable([name, persistable, dtype]): Create Tensor for this layer.
- eval(): Sets this Layer and all its sublayers to evaluation mode.
- extra_repr(): Extra representation of this layer; you can provide a custom implementation for your own layer.
- forward(xs, masks[, note_emb, note_dur_emb, ...]): Encode input sequence.
- forward_one_step(xs, masks[, cache]): Encode input frame.
- full_name(): Full name for this layer, composed of name_scope + "/" + MyLayer.__class__.__name__.
- get_positionwise_layer([...]): Define positionwise layer.
- load_dict(state_dict[, use_structured_name]): Set parameters and persistable buffers from state_dict.
- named_buffers([prefix, include_sublayers]): Returns an iterator over all buffers in the Layer, yielding tuples of name and Tensor.
- named_children(): Returns an iterator over immediate children layers, yielding both the name of the layer and the layer itself.
- named_parameters([prefix, include_sublayers]): Returns an iterator over all parameters in the Layer, yielding tuples of name and parameter.
- named_sublayers([prefix, include_self, ...]): Returns an iterator over all sublayers in the Layer, yielding tuples of name and sublayer.
- parameters([include_sublayers]): Returns a list of all Parameters from current layer and its sub-layers.
- register_buffer(name, tensor[, persistable]): Registers a tensor as a buffer of the layer.
- register_forward_post_hook(hook): Register a forward post-hook for Layer.
- register_forward_pre_hook(hook): Register a forward pre-hook for Layer.
- set_dict(state_dict[, use_structured_name]): Set parameters and persistable buffers from state_dict.
- set_state_dict(state_dict[, use_structured_name]): Set parameters and persistable buffers from state_dict.
- state_dict([destination, include_sublayers, ...]): Get all parameters and persistable buffers of current layer and its sub-layers.
- sublayers([include_self]): Returns a list of sub layers.
- to([device, dtype, blocking]): Cast the parameters and buffers of Layer by the given device, dtype and blocking.
- to_static_state_dict([destination, ...]): Get all parameters and buffers of current layer and its sub-layers.
- train(): Sets this Layer and all its sublayers to training mode.
- backward
- get_embed
- get_encoder_selfattn_layer
- get_pos_enc_class
- register_state_dict_hook
- forward(xs: Tensor, masks: Tensor, note_emb: Optional[Tensor] = None, note_dur_emb: Optional[Tensor] = None, is_slur_emb: Optional[Tensor] = None, scale: int = 16)[source]
Encode input sequence.
- Args:
- xs (Tensor):
Input tensor (#batch, time, idim).
- masks (Tensor):
Mask tensor (#batch, 1, time).
- note_emb (Tensor):
Input tensor (#batch, time, attention_dim).
- note_dur_emb (Tensor):
Input tensor (#batch, time, attention_dim).
- is_slur_emb (Tensor):
Input tensor (#batch, time, attention_dim).
- Returns:
- Tensor:
Output tensor (#batch, time, attention_dim).
- Tensor:
Mask tensor (#batch, 1, time).
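A usage sketch for the plain transformer variant. The optional note/duration/slur embeddings are left at their None defaults; the shapes and the 'linear' input layer are illustrative assumptions:

```python
import paddle

from paddlespeech.t2s.modules.transformer.encoder import TransformerEncoder

batch, time, idim = 2, 50, 80  # hypothetical sizes
xs = paddle.randn([batch, time, idim])
masks = paddle.ones([batch, 1, time], dtype='bool')

encoder = TransformerEncoder(idim=idim, attention_dim=256, attention_heads=4,
                             num_blocks=6, input_layer='linear')
# note_emb, note_dur_emb and is_slur_emb default to None and can be omitted.
out, out_masks = encoder(xs, masks)
print(out.shape)  # expected: [2, 50, 256], i.e. (#batch, time, attention_dim)
```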