Commit a7e5cee

khoa-hovince62s authored and committed
Clarify mixed precision training support (#1458)
Change the wording to avoid confusion. Mixed precision provides both higher arithmetic throughput and numerical stability, so it is not synonymous with pure half-precision/FP16 training. Also mention Tensor Cores, since older-generation GPUs without Tensor Cores don't support true mixed-precision training.
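To illustrate why mixed precision is more than "just FP16", here is a minimal stdlib-only sketch of the loss-scaling idea that libraries like APEX apply: small gradient values underflow to zero in half precision, but scaling them up before the FP16 round-trip (and dividing afterwards in FP32) preserves them. This is a toy demonstration using `struct`'s half-precision format, not APEX's actual implementation.

```python
import struct

def to_fp16(x: float) -> float:
    """Round-trip a Python float through IEEE-754 half precision."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

grad = 1e-8       # a gradient magnitude below FP16's smallest subnormal (~6e-8)
scale = 1024.0    # hypothetical loss-scaling factor

naive = to_fp16(grad)                    # underflows: flushed to 0.0 in FP16
rescued = to_fp16(grad * scale) / scale  # survives the FP16 round-trip

print(naive)    # 0.0 -- the gradient is lost in pure FP16
print(rescued)  # close to 1e-8 -- recovered via loss scaling
```

In real mixed-precision training the loss (and hence all gradients, by the chain rule) is multiplied by the scale before the backward pass, and the FP32 master weights are updated with the unscaled gradients, which is why stability is retained.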
1 parent 065c99f commit a7e5cee

1 file changed


File tree

README.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -57,7 +57,7 @@ Note that we currently only support PyTorch 1.1 (should work with 1.0)
 - Inference time loss functions.
 - [Conv2Conv convolution model]
 - SRU "RNNs faster than CNN" paper
-- FP16 training (mixed-precision with Apex)
+- Mixed-precision training with [APEX](https://github.com/NVIDIA/apex), optimized on [Tensor Cores](https://developer.nvidia.com/tensor-cores)

 ## Quickstart

```

0 commit comments