paddlespeech.s2t.training.scheduler module

class paddlespeech.s2t.training.scheduler.LRSchedulerFactory[source]

Bases: object

Methods

from_args

classmethod from_args(name: str, args: Dict[str, Any])[source]
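
A hedged usage sketch of the factory; the registered scheduler name "warmuplr" and the argument keys are assumptions, so check the registry in this module for the names available in your version:

    from paddlespeech.s2t.training.scheduler import LRSchedulerFactory

    # "warmuplr" is an assumed registry key; the args mirror WarmupLR's constructor.
    scheduler = LRSchedulerFactory.from_args(
        "warmuplr",
        dict(warmup_steps=25000, learning_rate=0.002, verbose=False),
    )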
class paddlespeech.s2t.training.scheduler.WarmupLR(warmup_steps: Union[int, float] = 25000, learning_rate=1.0, last_epoch=-1, verbose=False, **kwargs)[source]

Bases: LRScheduler

The WarmupLR scheduler. This scheduler is almost the same as the NoamLR scheduler except for the following difference:

NoamLR:
    lr = optimizer.lr * model_size ** -0.5
         * min(step ** -0.5, step * warmup_step ** -1.5)

WarmupLR:
    lr = optimizer.lr * warmup_step ** 0.5
         * min(step ** -0.5, step * warmup_step ** -1.5)

Note that the maximum lr equals optimizer.lr in this scheduler.
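
For illustration, a standalone sketch of the WarmupLR rule; the function name and values are illustrative, not part of the class:

    def warmup_lr(base_lr, step, warmup_steps=25000):
        # lr = base_lr * warmup_steps ** 0.5
        #      * min(step ** -0.5, step * warmup_steps ** -1.5)
        return base_lr * warmup_steps**0.5 * min(
            step**-0.5, step * warmup_steps**-1.5)

    # The peak is reached at step == warmup_steps and equals base_lr
    # (up to floating-point rounding): warmup_lr(0.002, 25000) ≈ 0.002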

Methods

__call__()

Return the latest computed learning rate on the current epoch.

get_lr()

Subclasses that override LRScheduler (the base class) should provide a custom implementation of get_lr().

set_dict(state_dict)

Loads the scheduler's state.

set_state_dict(state_dict)

Loads the scheduler's state.

set_step([step])

It will update the learning rate in the optimizer according to the current epoch.

state_dict()

Returns the state of the scheduler as a dict (see the checkpoint sketch after this method list).

state_keys()

For subclasses that override LRScheduler (the base class).

step([epoch])

step should be called after optimizer.step.
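
A minimal checkpoint sketch for the state methods above; the file name is illustrative:

    import paddle
    from paddlespeech.s2t.training.scheduler import WarmupLR

    scheduler = WarmupLR(warmup_steps=25000, learning_rate=0.002)
    scheduler.step()

    # Save the scheduler state, then restore it into a fresh instance.
    paddle.save(scheduler.state_dict(), "scheduler.pdparams")
    restored = WarmupLR(warmup_steps=25000, learning_rate=0.002)
    restored.set_state_dict(paddle.load("scheduler.pdparams"))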

get_lr()[source]

Subclasses that override LRScheduler (the base class) should provide a custom implementation of get_lr().

Otherwise, a NotImplementedError exception will be thrown.
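
A minimal subclass sketch, assuming the base class is paddle.optimizer.lr.LRScheduler and that it exposes base_lr as in current Paddle releases:

    from paddle.optimizer.lr import LRScheduler

    class ConstantLR(LRScheduler):
        """Toy subclass: without a get_lr() override, the base class
        raises NotImplementedError when the rate is queried."""
        def get_lr(self):
            return self.base_lr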

set_step(step: Optional[int] = None)[source]

It will update the learning rate in the optimizer according to the current epoch. The new learning rate will take effect on the next optimizer.step.

Args:

step (Optional[int]): specify the current epoch. Default: None, in which case the value auto-increments from last_epoch=-1.

Returns:

None
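
A minimal end-to-end sketch (toy model, random data, illustrative hyper-parameters) showing the scheduler driving the optimizer, with step() called after optimizer.step():

    import paddle
    from paddlespeech.s2t.training.scheduler import WarmupLR

    model = paddle.nn.Linear(10, 10)
    scheduler = WarmupLR(warmup_steps=25000, learning_rate=0.002)
    optimizer = paddle.optimizer.Adam(
        learning_rate=scheduler, parameters=model.parameters())

    for _ in range(5):  # stand-in for iterating a real data loader
        loss = model(paddle.randn([4, 10])).mean()
        loss.backward()
        optimizer.step()
        scheduler.step()       # called after optimizer.step()
        optimizer.clear_grad()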