paddlespeech.s2t.modules.crf module

class paddlespeech.s2t.modules.crf.CRF(nb_labels: int, bos_tag_id: int, eos_tag_id: int, pad_tag_id: Optional[int] = None, batch_first: bool = True)[source]

Bases: Layer

Linear-chain Conditional Random Field (CRF).

Args:
nb_labels (int): number of labels in your tagset, including special symbols.
bos_tag_id (int): integer representing the beginning-of-sentence symbol in your tagset.
eos_tag_id (int): integer representing the end-of-sentence symbol in your tagset.
pad_tag_id (int, optional): integer representing the pad symbol in your tagset. If None, the model treats PAD as a normal tag; otherwise it applies constraints for PAD transitions.
batch_first (bool): whether the first dimension represents the batch dimension.
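
A minimal construction sketch (the tagset size and the BOS/EOS/PAD ids below are illustrative assumptions, not values prescribed by the API):

    from paddlespeech.s2t.modules.crf import CRF

    # Hypothetical tagset: 3 real tags plus BOS/EOS/PAD special symbols.
    NB_LABELS = 6
    BOS_ID, EOS_ID, PAD_ID = 3, 4, 5

    # pad_tag_id enables PAD transition constraints; batch_first=True expects
    # inputs shaped (batch_size, seq_len, ...).
    crf = CRF(nb_labels=NB_LABELS, bos_tag_id=BOS_ID, eos_tag_id=EOS_ID,
              pad_tag_id=PAD_ID, batch_first=True)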

Methods

__call__(*inputs, **kwargs)

Call self as a function.

add_parameter(name, parameter)

Adds a Parameter instance.

add_sublayer(name, sublayer)

Adds a sub Layer instance.

apply(fn)

Applies fn recursively to every sublayer (as returned by .sublayers()) as well as self.

buffers([include_sublayers])

Returns a list of all buffers from current layer and its sub-layers.

children()

Returns an iterator over immediate children layers.

clear_gradients()

Clear the gradients of all parameters for this layer.

create_parameter(shape[, attr, dtype, ...])

Create parameters for this layer.

create_tensor([name, persistable, dtype])

Create Tensor for this layer.

create_variable([name, persistable, dtype])

Create Tensor for this layer.

decode(emissions[, mask])

Find the most probable sequence of labels given the emissions using the Viterbi algorithm.

eval()

Sets this Layer and all its sublayers to evaluation mode.

extra_repr()

Extra representation of this layer, you can have custom implementation of your own layer.

forward(emissions, tags[, mask])

Compute the negative log-likelihood.

full_name()

Full name for this layer, composed of name_scope + "/" + MyLayer.__class__.__name__

load_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

log_likelihood(emissions, tags[, mask])

Compute the probability of a sequence of tags given a sequence of emission scores.

named_buffers([prefix, include_sublayers])

Returns an iterator over all buffers in the Layer, yielding tuple of name and Tensor.

named_children()

Returns an iterator over immediate children layers, yielding both the name of the layer as well as the layer itself.

named_parameters([prefix, include_sublayers])

Returns an iterator over all parameters in the Layer, yielding tuple of name and parameter.

named_sublayers([prefix, include_self, ...])

Returns an iterator over all sublayers in the Layer, yielding tuple of name and sublayer.

parameters([include_sublayers])

Returns a list of all Parameters from current layer and its sub-layers.

register_buffer(name, tensor[, persistable])

Registers a tensor as buffer into the layer.

register_forward_post_hook(hook)

Register a forward post-hook for Layer.

register_forward_pre_hook(hook)

Register a forward pre-hook for Layer.

set_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

set_state_dict(state_dict[, use_structured_name])

Set parameters and persistable buffers from state_dict.

state_dict([destination, include_sublayers, ...])

Get all parameters and persistable buffers of current layer and its sub-layers.

sublayers([include_self])

Returns a list of sub layers.

to([device, dtype, blocking])

Cast the parameters and buffers of Layer by the given device, dtype and blocking.

to_static_state_dict([destination, ...])

Get all parameters and buffers of current layer and its sub-layers.

train()

Sets this Layer and all its sublayers to training mode.

backward

init_weights

register_state_dict_hook

decode(emissions, mask=None)[source]

Find the most probable sequence of labels given the emissions using the Viterbi algorithm.

Args:
emissions (paddle.Tensor): Sequence of emissions for each label. Shape (batch_size, seq_len, nb_labels) if batch_first is True, (seq_len, batch_size, nb_labels) otherwise.
mask (paddle.FloatTensor, optional): Tensor representing valid positions. If None, all positions are considered valid. Shape (batch_size, seq_len) if batch_first is True, (seq_len, batch_size) otherwise.

Returns:
paddle.Tensor: the Viterbi score for each batch. Shape of (batch_size,).
list of lists: the best Viterbi sequence of labels for each batch. [B, T]
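
A usage sketch for decode, assuming the two documented return values come back as a (scores, sequences) pair; the emissions and mask below are random and purely illustrative:

    import paddle
    from paddlespeech.s2t.modules.crf import CRF

    crf = CRF(nb_labels=6, bos_tag_id=3, eos_tag_id=4, pad_tag_id=5, batch_first=True)

    # Random emission scores for 2 sequences of length 4 (batch_first=True shapes).
    emissions = paddle.randn([2, 4, 6])
    # The last position of the second sequence is padding.
    mask = paddle.to_tensor([[1., 1., 1., 1.],
                             [1., 1., 1., 0.]])

    scores, sequences = crf.decode(emissions, mask=mask)
    print(scores.shape)  # [2] -- one Viterbi score per sequence
    print(sequences)     # 2 lists holding the best label ids for each sequence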

forward(emissions: Tensor, tags: Tensor, mask: Optional[Tensor] = None) → Tensor[source]

Compute the negative log-likelihood. See log_likelihood method.

init_weights()[source]
log_likelihood(emissions, tags, mask=None)[source]

Compute the probability of a sequence of tags given a sequence of emission scores.

Args:
emissions (paddle.Tensor): Sequence of emissions for each label. Shape of (batch_size, seq_len, nb_labels) if batch_first is True, (seq_len, batch_size, nb_labels) otherwise.
tags (paddle.LongTensor): Sequence of labels. Shape of (batch_size, seq_len) if batch_first is True, (seq_len, batch_size) otherwise.
mask (paddle.FloatTensor, optional): Tensor representing valid positions. If None, all positions are considered valid. Shape of (batch_size, seq_len) if batch_first is True, (seq_len, batch_size) otherwise.

Returns:
paddle.Tensor: sum of the log-likelihoods for each sequence in the batch. Shape of ().
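
A training-oriented sketch combining log_likelihood and forward (the emissions stand in for encoder outputs and are random; tag ids are illustrative; forward returns the negative log-likelihood, per the summary above):

    import paddle
    from paddlespeech.s2t.modules.crf import CRF

    crf = CRF(nb_labels=6, bos_tag_id=3, eos_tag_id=4, pad_tag_id=5, batch_first=True)

    batch_size, seq_len = 2, 4
    emissions = paddle.randn([batch_size, seq_len, 6])        # stand-in for encoder scores
    tags = paddle.randint(0, 3, shape=[batch_size, seq_len])  # gold labels (non-special ids)
    mask = paddle.ones([batch_size, seq_len], dtype='float32')

    ll = crf.log_likelihood(emissions, tags, mask=mask)  # scalar, shape ()
    loss = crf(emissions, tags, mask=mask)               # forward(): the negative log-likelihood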