paddlespeech.t2s.modules.transformer.mask module

Mask module.

paddlespeech.t2s.modules.transformer.mask.subsequent_mask(size, dtype=paddle.bool)[source]

Create a lower-triangular mask for subsequent steps, of shape (size, size).

Args:
    size (int): Size of the mask.
    dtype (paddle.dtype): Result dtype.

Returns:
    Tensor: Mask of shape (size, size).

Example:
    >>> subsequent_mask(3)
    [[1, 0, 0],
     [1, 1, 0],
     [1, 1, 1]]
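The mask is lower-triangular: step i may attend to itself and all earlier steps, but not to later ones. A minimal pure-Python sketch of the same logic (for illustration only, independent of Paddle):

```python
def subsequent_mask(size):
    # Row i has ones at columns 0..i: step i can attend to positions <= i.
    return [[1 if col <= row else 0 for col in range(size)]
            for row in range(size)]

print(subsequent_mask(3))
# [[1, 0, 0], [1, 1, 0], [1, 1, 1]]
```

The real function returns a Paddle tensor of the requested dtype; the structure is the same.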
paddlespeech.t2s.modules.transformer.mask.target_mask(ys_in_pad, ignore_id, dtype=paddle.bool)[source]

Create a mask for decoder self-attention.

Args:
    ys_in_pad (Tensor): Batch of padded target sequences (B, Lmax).
    ignore_id (int): Index used for padding.
    dtype (paddle.dtype): Result dtype.

Returns:
    Tensor: Mask of shape (B, Lmax, Lmax).
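A decoder self-attention mask of this kind is typically the AND of a non-padding mask (positions not equal to ignore_id) with the subsequent mask above; the sketch below assumes that combination and uses plain Python lists rather than Paddle tensors, so it is an illustration of the shape and logic, not the library implementation:

```python
def subsequent_mask(size):
    # Lower-triangular boolean mask: step i attends to positions <= i.
    return [[col <= row for col in range(size)] for row in range(size)]

def target_mask(ys_in_pad, ignore_id):
    # Sketch: for each sequence in the batch, AND the per-column
    # non-padding mask with the (Lmax, Lmax) subsequent mask,
    # yielding a (B, Lmax, Lmax) result.
    Lmax = len(ys_in_pad[0])
    sub = subsequent_mask(Lmax)
    out = []
    for seq in ys_in_pad:
        nonpad = [tok != ignore_id for tok in seq]  # (Lmax,)
        out.append([[nonpad[col] and sub[row][col] for col in range(Lmax)]
                    for row in range(Lmax)])
    return out

# One sequence of length 3 whose last token is padding (ignore_id = -1):
print(target_mask([[5, 7, -1]], -1))
```

Padded positions are masked out in every row, so no step can attend to padding.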