
v0.3.0

Pre-release
@hfxunlp released this 31 Aug 03:57

In this release, we:
- Move AMP support from apex to `torch.cuda.amp`, introduced in PyTorch 1.6 (see the sketch after this list);
- Support sampling during greedy decoding (for back-translation), as sketched below;
- Accelerate the Average Attention Network by replacing the matrix multiplication with a cumulative sum (`cumsum`); a typo in this release is fixed in commit ed5eb60 (a comparison of the two formulations follows the list);
- Add APE (Automatic Post-Editing) support;
- Support the Mish activation function (a minimal definition is given below).
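
The move to native AMP replaces the external apex dependency with PyTorch's built-in mixed-precision tools. Below is a minimal training-loop sketch using `torch.cuda.amp.autocast` and `GradScaler`; the model, optimizer, and data here are toy placeholders, not this toolkit's actual training code.

```python
import torch
from torch import nn

# Hypothetical toy setup; stands in for the toolkit's Transformer and data loader.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(512, 512).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

for _ in range(3):
    x = torch.randn(8, 512, device=device)
    optimizer.zero_grad()
    # autocast runs the forward pass (and loss) in mixed precision on GPU.
    with torch.cuda.amp.autocast(enabled=(device == "cuda")):
        loss = model(x).pow(2).mean()
    # GradScaler scales the loss to avoid fp16 gradient underflow,
    # then unscales before the optimizer step.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```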
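Sampling during greedy decoding means drawing the next token from the model's output distribution instead of always taking the argmax, which yields the more diverse outputs useful for back-translation. The function below is a hedged sketch of that idea; the names and the `temperature` parameter are illustrative and not taken from the toolkit's decoder.

```python
import torch

def pick_next_token(logits: torch.Tensor, sample: bool = False,
                    temperature: float = 1.0) -> torch.Tensor:
    """Pick the next token from `logits` of shape (batch, vocab)."""
    if sample:
        # Sample from the softmax distribution (diverse outputs for back-translation).
        probs = torch.softmax(logits / temperature, dim=-1)
        return torch.multinomial(probs, num_samples=1).squeeze(-1)
    # Plain greedy decoding: take the most probable token.
    return logits.argmax(dim=-1)

logits = torch.randn(4, 1000)
greedy = pick_next_token(logits)                # deterministic
sampled = pick_next_token(logits, sample=True)  # stochastic
```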
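The Average Attention Network's core operation is a cumulative average over decoder positions, y_t = (1/t) * sum_{i<=t} x_i. This can be written as a matrix multiplication with a lower-triangular averaging matrix, or, equivalently and more cheaply, as a `cumsum` along the time dimension. The snippet below compares the two formulations on random tensors; shapes are hypothetical and it is not the repository's actual module.

```python
import torch

x = torch.randn(2, 7, 512)  # (batch, length, features), illustrative shapes
bsize, seql, _ = x.size()

# Matrix-multiplication formulation: lower-triangular W with W[t, i] = 1/(t+1) for i <= t.
w = torch.tril(torch.ones(seql, seql)) / torch.arange(1, seql + 1).unsqueeze(1)
y_matmul = w @ x

# cumsum formulation: the same cumulative average, without building W.
y_cumsum = x.cumsum(dim=1) / torch.arange(1, seql + 1, dtype=x.dtype).view(1, seql, 1)

assert torch.allclose(y_matmul, y_cumsum, atol=1e-5)
```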
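Mish is defined as x * tanh(softplus(x)). A minimal module along these lines is shown below; newer PyTorch versions also ship `nn.Mish`, and the toolkit's own implementation may differ in detail.

```python
import torch
from torch import nn
import torch.nn.functional as F

class Mish(nn.Module):
    """Mish activation: x * tanh(softplus(x))."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * torch.tanh(F.softplus(x))

act = Mish()
print(act(torch.linspace(-3.0, 3.0, 7)))
```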