paddlespeech.s2t.training.scheduler module
- class paddlespeech.s2t.training.scheduler.LRSchedulerFactory[source]
Bases: object
Methods
from_args
- class paddlespeech.s2t.training.scheduler.WarmupLR(warmup_steps: Union[int, float] = 25000, learning_rate=1.0, last_epoch=-1, verbose=False, **kwargs)[source]
Bases: LRScheduler
The WarmupLR scheduler. This scheduler is almost the same as the NoamLR scheduler, except for the following difference:

NoamLR:
    lr = optimizer.lr * model_size ** -0.5
         * min(step ** -0.5, step * warmup_step ** -1.5)

WarmupLR:
    lr = optimizer.lr * warmup_step ** 0.5
         * min(step ** -0.5, step * warmup_step ** -1.5)

Note that the maximum lr equals optimizer.lr in this scheduler.
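As a quick illustration of the formula above, here is a standalone sketch in plain Python (not the library implementation; the function name and defaults are illustrative only):

    def warmup_lr(step: int, base_lr: float = 1.0, warmup_steps: int = 25000) -> float:
        # Sketch of the WarmupLR formula above, not the library code.
        step = max(step, 1)  # guard against division by zero at step 0
        return base_lr * warmup_steps ** 0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

    # The lr rises linearly during warmup and peaks at base_lr when step == warmup_steps:
    print(warmup_lr(12500))   # 0.5 * base_lr, still warming up
    print(warmup_lr(25000))   # == base_lr (the maximum, as noted above)
    print(warmup_lr(100000))  # past warmup, decaying as step ** -0.5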
Methods
__call__()
    Return the latest computed learning rate on the current epoch.
get_lr()
    Subclasses that overload LRScheduler (the base class) should provide a custom implementation of get_lr().
set_dict(state_dict)
    Loads the scheduler's state.
set_state_dict(state_dict)
    Loads the scheduler's state.
set_step([step])
    Update the learning rate in the optimizer according to the current epoch.
state_dict()
    Returns the state of the scheduler as a dict.
state_keys()
    For subclasses that overload LRScheduler (the base class).
step([epoch])
    step should be called after optimizer.step.
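Putting these methods together, a minimal training-loop sketch might look like the following (the model, data, and hyperparameters are hypothetical; the key point, per the table above, is that scheduler.step() is called after optimizer.step()):

    import paddle
    from paddlespeech.s2t.training.scheduler import WarmupLR

    model = paddle.nn.Linear(80, 4)  # hypothetical model
    scheduler = WarmupLR(warmup_steps=25000, learning_rate=0.001)
    optimizer = paddle.optimizer.Adam(
        learning_rate=scheduler, parameters=model.parameters())

    for step in range(100):  # hypothetical data and loop
        x = paddle.randn([16, 80])
        loss = model(x).mean()
        loss.backward()
        optimizer.step()
        scheduler.step()  # called after optimizer.step, as documented
        optimizer.clear_grad()

    # Checkpointing: state_dict() / set_state_dict() save and restore scheduler state.
    state = scheduler.state_dict()
    scheduler.set_state_dict(state)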