paddlespeech.s2t.models.lm.transformer module

class paddlespeech.s2t.models.lm.transformer.TransformerLM(n_vocab: int, pos_enc: Optional[str] = None, embed_unit: int = 128, att_unit: int = 256, head: int = 2, unit: int = 1024, layer: int = 4, dropout_rate: float = 0.5, emb_dropout_rate: float = 0.0, att_dropout_rate: float = 0.0, tie_weights: bool = False, **kwargs)[source]

Bases: Layer, LMInterface, BatchScorerInterface

Methods

__call__(*inputs, **kwargs)

Call self as a function.

add_arguments(parser)

Add arguments to command line argument parser.

add_parameter(name, parameter)

Adds a Parameter instance.

add_sublayer(name, sublayer)

Adds a sub Layer instance.

apply(fn)

Applies fn recursively to every sublayer (as returned by .sublayers()) as well as self.

batch_init_state(x)

Get an initial state for decoding (optional).

batch_score(ys, states, xs)

Score new token batch (required).

buffers([include_sublayers])

Returns a list of all buffers from current layer and its sub-layers.

build(n_vocab, **kwargs)

Initialize this class with python-level args.

children()

Returns an iterator over immediate children layers.

clear_gradients()

Clear the gradients of all parameters for this layer.

create_parameter(shape[, attr, dtype, ...])

Create parameters for this layer.

create_tensor([name, persistable, dtype])

Create Tensor for this layer.

create_variable([name, persistable, dtype])

Create Tensor for this layer.

eval()

Sets this Layer and all its sublayers to evaluation mode.

extra_repr()

Extra representation of this layer, you can have custom implementation of your own layer.

final_score(state)

Score eos (optional).

forward(x, t)

Compute LM loss value from buffer sequences.

full_name()

Full name of this layer, composed of name_scope + "/" + MyLayer.__class__.__name__

init_state(x)

Get an initial state for decoding (optional).

load_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

named_buffers([prefix, include_sublayers])

Returns an iterator over all buffers in the Layer, yielding tuple of name and Tensor.

named_children()

Returns an iterator over immediate children layers, yielding both the name of the layer as well as the layer itself.

named_parameters([prefix, include_sublayers])

Returns an iterator over all parameters in the Layer, yielding tuple of name and parameter.

named_sublayers([prefix, include_self, ...])

Returns an iterator over all sublayers in the Layer, yielding tuple of name and sublayer.

parameters([include_sublayers])

Returns a list of all Parameters from current layer and its sub-layers.

register_buffer(name, tensor[, persistable])

Registers a tensor as buffer into the layer.

register_forward_post_hook(hook)

Register a forward post-hook for Layer.

register_forward_pre_hook(hook)

Register a forward pre-hook for Layer.

score(y, state, x)

Score new token.

select_state(state, i[, new_id])

Select state with relative ids in the main beam search.

set_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

set_state_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

state_dict([destination, include_sublayers, ...])

Get all parameters and persistable buffers of current layer and its sub-layers.

sublayers([include_self])

Returns a list of sub layers.

to([device, dtype, blocking])

Cast the parameters and buffers of Layer by the given device, dtype and blocking.

to_static_state_dict([destination, ...])

Get all parameters and buffers of current layer and its sub-layers.

train()

Sets this Layer and all its sublayers to training mode.

backward

register_state_dict_hook

batch_score(ys: Tensor, states: List[Any], xs: Tensor) → Tuple[Tensor, List[Any]][source]

Score new token batch (required).

Args:

    ys (paddle.Tensor): paddle.int64 prefix tokens (n_batch, ylen).
    states (List[Any]): Scorer states for prefix tokens.
    xs (paddle.Tensor): The encoder feature that generates ys (n_batch, xlen, n_feat).

Returns:

    tuple[paddle.Tensor, List[Any]]: Tuple of batchified scores for the next token, with shape (n_batch, n_vocab), and the next state list for ys.
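The batch scoring contract can be sketched with a toy scorer in pure Python (no Paddle; the uniform distribution and 5-token vocabulary are illustrative assumptions, not the real TransformerLM):

```python
import math

N_VOCAB = 5  # toy vocabulary size (assumption for illustration)

def batch_score(ys, states, xs):
    """Toy BatchScorerInterface-style scorer: for each prefix in the
    batch, return next-token log-scores of length n_vocab and an
    updated per-hypothesis state. Here the scores are uniform and the
    state is just the prefix seen so far."""
    scores, new_states = [], []
    for prefix, state in zip(ys, states):
        scores.append([math.log(1.0 / N_VOCAB)] * N_VOCAB)
        new_states.append(list(prefix))
    return scores, new_states

# Two hypotheses in the batch; xs is unused by this toy scorer.
scores, states = batch_score([[1, 2], [1, 3]], [None, None], xs=None)
```

Each row of `scores` is a normalized log-distribution over the next token, matching the documented (n_batch, n_vocab) shape.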

forward(x: Tensor, t: Tensor) → Tuple[Tensor, Tensor, Tensor][source]

Compute LM loss value from buffer sequences.

Args:

    x (paddle.Tensor): Input ids. (batch, len)
    t (paddle.Tensor): Target ids. (batch, len)

Returns:

    tuple[paddle.Tensor, paddle.Tensor, paddle.Tensor]: Tuple of the loss to backward (scalar), the negative log-likelihood of t, -log p(t) (scalar), and the number of elements in x (scalar).

Notes:

    The last two return values are used to compute perplexity: p(t)^{-1/n} = exp(-log p(t) / n)
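Given those two return values, perplexity follows directly (the numbers below are made up for illustration):

```python
import math

nll = 92.1  # hypothetical total negative log-likelihood, -log p(t)
n = 40      # hypothetical number of elements in x
ppl = math.exp(nll / n)  # perplexity: p(t)^{-1/n} = exp(-log p(t) / n)
```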

score(y: Tensor, state: Any, x: Tensor) → Tuple[Tensor, Any][source]

Score new token.

Args:

    y (paddle.Tensor): 1D paddle.int64 prefix tokens.
    state: Scorer state for prefix tokens.
    x (paddle.Tensor): The encoder feature that generates ys.

Returns:

    tuple[paddle.Tensor, Any]: Tuple of paddle.float32 scores for the next token (n_vocab) and the next state for ys.
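A beam-search step consumes score() per hypothesis: add the returned log-probabilities to the running hypothesis score, pick a continuation, and keep the returned state for the next step. The toy scorer below (which favors repeating the last prefix token over a 3-token vocabulary) is a hypothetical stand-in for the real model:

```python
import math

def toy_score(y, state, x):
    """Single-hypothesis scorer: returns (n_vocab,) log-probs for the
    next token and the next state (here, just the prefix length)."""
    logits = [1.0 if tok == y[-1] else 0.0 for tok in range(3)]
    log_z = math.log(sum(math.exp(v) for v in logits))  # normalize
    return [v - log_z for v in logits], len(y)

prefix, running = [1], 0.0
logp, state = toy_score(prefix, None, x=None)
best = max(range(3), key=lambda t: logp[t])  # greedy pick = beam size 1
running += logp[best]      # accumulate the hypothesis log-score
prefix.append(best)        # extend the hypothesis with the chosen token
```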