paddlespeech.s2t.training.updaters.standard_updater module

class paddlespeech.s2t.training.updaters.standard_updater.StandardUpdater(model: Layer, optimizer: Optimizer, scheduler: LRScheduler, dataloader: DataLoader, init_state: Optional[UpdaterState] = None)[source]

Bases: UpdaterBase

An over-simplified example of an updater. Things may not be that simple in practice, but you can subclass it to fit your needs.
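A minimal construction sketch (not from the PaddleSpeech docs): the ToyModel, the toy data, and the optimizer/scheduler choices below are illustrative assumptions; only the StandardUpdater signature comes from this page:

    import paddle
    from paddle.io import DataLoader, TensorDataset
    from paddlespeech.s2t.training.updaters.standard_updater import StandardUpdater

    class ToyModel(paddle.nn.Layer):
        """Illustrative model: consumes a batch and returns a single loss."""
        def __init__(self):
            super().__init__()
            self.linear = paddle.nn.Linear(8, 1)

        def forward(self, x, y):
            return paddle.nn.functional.mse_loss(self.linear(x), y)

    model = ToyModel()
    scheduler = paddle.optimizer.lr.StepDecay(learning_rate=1e-3, step_size=100)
    optimizer = paddle.optimizer.Adam(learning_rate=scheduler,
                                      parameters=model.parameters())
    dataloader = DataLoader(
        TensorDataset([paddle.randn([32, 8]), paddle.randn([32, 1])]),
        batch_size=4)

    updater = StandardUpdater(model, optimizer, scheduler, dataloader)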

Attributes:
updates_per_epoch

Number of steps per epoch, determined by the length of the dataloader.

Methods

new_epoch()

Start a new epoch.

read_batch()

Read a batch from the dataloader, automatically renewing the iterator when the data is exhausted.

set_state_dict(state_dict)

Set the state dict for an Updater.

state_dict()

State dict of an Updater; the model, optimizer/scheduler states, and updater state are included.

update_core(batch)

A simple case for a training step.

load

save

update()

Run one training step.

new_epoch()[source]

Start a new epoch.

read_batch()[source]

Read a batch from the dataloader, automatically renewing the iterator when the data is exhausted.
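Illustratively, the auto-renew behavior amounts to restarting the dataloader iterator once it is exhausted; the attribute name train_iterator below is hypothetical, not necessarily the field StandardUpdater actually uses:

    def read_batch(self):
        try:
            batch = next(self.train_iterator)
        except StopIteration:
            # data exhausted: start a new epoch, which renews the iterator
            self.new_epoch()
            batch = next(self.train_iterator)
        return batch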

set_state_dict(state_dict)[source]

Set the state dict for an Updater. Model parameters, optimizer/scheduler states, and the UpdaterState are restored.

state_dict()[source]

State dict of an Updater; the model, optimizer/scheduler states, and updater state are included.
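A sketch of a checkpoint round trip built on these two methods, assuming updater is a constructed StandardUpdater; the use of paddle.save/paddle.load and the file name are illustrative choices:

    import paddle

    # persist model parameters, optimizer/scheduler states and updater state
    paddle.save(updater.state_dict(), "checkpoint.pdparams")

    # later: restore everything and resume training where it left off
    updater.set_state_dict(paddle.load("checkpoint.pdparams"))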

update()[source]

Run one training step: read a batch with read_batch(), pass it to update_core(), and advance the updater state.

update_core(batch)[source]

A simple case for a training step. Basic assumptions:

- a single model;
- a single optimizer;
- a single scheduler, with the learning rate updated at every step;
- a batch from the dataloader is just the input of the model;
- the model returns a single loss, or a dict containing several losses;
- parameters are updated at every batch, with no gradient accumulation.
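When these assumptions do not hold, override update_core in a subclass. The sketch below is hypothetical: it assumes the constructor arguments are stored as the attributes model, optimizer, and scheduler, and the loss keys and weights are invented for illustration:

    class MultiLossUpdater(StandardUpdater):
        def update_core(self, batch):
            losses = self.model(*batch)  # assumed to return a dict of losses
            loss = 0.7 * losses["att_loss"] + 0.3 * losses["ctc_loss"]
            loss.backward()
            self.optimizer.step()
            self.scheduler.step()        # update the learning rate each step
            self.optimizer.clear_grad()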

property updates_per_epoch

Number of steps per epoch, determined by the length of the dataloader.