paddlespeech.t2s.models.wavernn.wavernn_updater module

class paddlespeech.t2s.models.wavernn.wavernn_updater.WaveRNNEvaluator(model: Layer, criterion: Layer, dataloader: DataLoader, output_dir: Optional[Path] = None, valid_generate_loader=None, config=None)[source]

Bases: StandardEvaluator

Attributes:
name

Methods

__call__([trainer])

Main action of the extension.

finalize(trainer)

Action that is executed when training is done.

initialize(trainer)

Action that is executed once to get the correct trainer state.

on_error(trainer, exc, tb)

Handles the error raised during training before finalization.

evaluate

evaluate_core

gen_valid_samples

evaluate_core(batch)[source]
gen_valid_samples()[source]
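
A minimal construction sketch, assuming a WaveRNN model, its training criterion, a validation DataLoader, and an experiment config are already built elsewhere (the variable names below are illustrative, not part of the API):

    from pathlib import Path
    from paddlespeech.t2s.models.wavernn.wavernn_updater import WaveRNNEvaluator

    # model, criterion, valid_dataloader and config are assumed to be
    # constructed elsewhere from the experiment configuration.
    evaluator = WaveRNNEvaluator(
        model=model,                     # a WaveRNN Layer
        criterion=criterion,             # the loss Layer used in training
        dataloader=valid_dataloader,     # validation DataLoader
        output_dir=Path("exp/default"),  # where evaluation outputs go
        config=config)
    evaluator.evaluate()                 # one pass over the validation data
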
class paddlespeech.t2s.models.wavernn.wavernn_updater.WaveRNNUpdater(model: Layer, optimizer: Optimizer, criterion: Layer, dataloader: DataLoader, init_state=None, output_dir: Optional[Path] = None, mode='RAW')[source]

Bases: StandardUpdater

Attributes:
updates_per_epoch

Number of updates per epoch, determined by the length of the dataloader.

Methods

new_epoch()

Start a new epoch.

read_batch()

Read a batch from the data loader, automatically renewing the iterator when the data is exhausted.

set_state_dict(state_dict)

Set the state dict for an Updater.

state_dict()

State dict of an Updater; model, optimizer, and updater state are included.

update_core(batch)

A simple case for a training step.

load

save

update

update_core(batch)[source]

A simple case for a training step. Basic assumptions are: a single model; a single optimizer; a batch from the dataloader is just the input of the model; the model returns a single loss, or a dict containing several losses. Parameters are updated at every batch; there is no gradient accumulation.
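As a minimal sketch of the training step described above (not the library's actual implementation), the single-loss case reduces to:

    # Assumptions taken from the docstring: single model, single optimizer,
    # the batch is the model input as-is, the model returns the loss,
    # and parameters update every batch (no gradient accumulation).
    def simple_update_core(model, optimizer, batch):
        loss = model(batch)      # a single scalar loss
        optimizer.clear_grad()   # drop gradients from the previous step
        loss.backward()          # backprop through the single model
        optimizer.step()         # update parameters for this batch
        return loss
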

paddlespeech.t2s.models.wavernn.wavernn_updater.calculate_grad_norm(parameters, norm_type: str = 2)[source]

Calculate the gradient norm of a model's parameters.

Parameters:

parameters – the model's parameters.

norm_type (str) – type of the norm, e.g. 2 for the L2 norm.

Returns:

Tensor – grad_norm
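
A hedged usage sketch; here model stands for any paddle Layer whose gradients were populated by a preceding backward pass:

    from paddlespeech.t2s.models.wavernn.wavernn_updater import calculate_grad_norm

    # run after loss.backward(), so that parameter .grad fields are set
    grad_norm = calculate_grad_norm(model.parameters(), norm_type=2)
    print(float(grad_norm))  # the overall gradient norm as a Python float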