This repository holds NVIDIA-maintained utilities to streamline mixed precision and distributed training in PyTorch. Some of the code here will be included in upstream PyTorch eventually. The intent of Apex is to make up-to-date utilities available to users as quickly as possible.
## 1. Amp: Automatic Mixed Precision

**Deprecated. Use PyTorch AMP.**

`apex.amp` is a tool to enable mixed precision training by changing only 3 lines of your script. Users can easily experiment with different pure and mixed precision training modes by supplying different flags to `amp.initialize`.

Webinar introducing Amp (note: the flag `cast_batchnorm` has been renamed to `keep_batchnorm_fp32`).

Comprehensive ImageNet example

Moving to the new Amp API (for users of the deprecated "Amp" and "FP16_Optimizer" APIs)
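Since `apex.amp` is deprecated, new scripts should use the AMP support built into PyTorch itself. A minimal sketch of the native replacement, assuming a recent PyTorch; the model, data, and optimizer below are toy placeholders:

```python
import torch

# Toy model and data stand in for a real training setup.
model = torch.nn.Linear(8, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
device_type = "cuda" if torch.cuda.is_available() else "cpu"

# GradScaler handles dynamic loss scaling; it is a no-op when disabled.
scaler = torch.cuda.amp.GradScaler(enabled=torch.cuda.is_available())

for _ in range(3):
    inputs = torch.randn(4, 8)
    targets = torch.randint(0, 2, (4,))
    optimizer.zero_grad()
    # autocast runs forward-pass ops in reduced precision where safe.
    with torch.autocast(device_type=device_type):
        loss = torch.nn.functional.cross_entropy(model(inputs), targets)
    scaler.scale(loss).backward()   # scaled backward, like amp.scale_loss
    scaler.step(optimizer)          # unscales grads, skips step on inf/nan
    scaler.update()
```

This mirrors the three lines Apex's `amp` changed: wrap the forward pass in `autocast`, scale the loss for `backward()`, and step through the scaler.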
## 2. Distributed Training

**`apex.parallel.DistributedDataParallel` is deprecated. Use `torch.nn.parallel.DistributedDataParallel`.**

`apex.parallel.DistributedDataParallel` is a module wrapper, similar to `torch.nn.parallel.DistributedDataParallel`. It enables convenient multiprocess distributed training, optimized for NVIDIA's NCCL communication library.

The ImageNet example shows use of `apex.parallel.DistributedDataParallel` along with `apex.amp`.
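The native replacement is a drop-in wrapper as well. A minimal single-process sketch using the `gloo` backend; the address, port, and toy model are illustrative placeholders, and real jobs launch one process per GPU (typically with the `nccl` backend, via `torchrun`):

```python
import os
import torch
import torch.distributed as dist

# Single-process process group for illustration only; real training runs
# one process per GPU with world_size equal to the number of GPUs.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)

model = torch.nn.Linear(8, 2)
# DDP averages gradients across all processes during backward().
ddp_model = torch.nn.parallel.DistributedDataParallel(model)

loss = ddp_model(torch.randn(4, 8)).sum()
loss.backward()

dist.destroy_process_group()
```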
## 3. Synchronized Batch Normalization

**Deprecated. Use `torch.nn.SyncBatchNorm`.**

`apex.parallel.SyncBatchNorm` extends `torch.nn.modules.batchnorm._BatchNorm` to support synchronized batch normalization. It allreduces stats across processes during multiprocess (DistributedDataParallel) training. Synchronous BN has been used in cases where only a small local minibatch can fit on each GPU. Allreduced stats increase the effective batch size for the BN layer to the global batch size across all processes (which, technically, is the correct formulation). Synchronous BN has been observed to improve converged accuracy in some of our research models.
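The native replacement can be applied to an existing model with a single call. A short sketch (the toy model is a placeholder; stats are actually synchronized only once a process group is initialized and the model runs under distributed training):

```python
import torch

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3),
    torch.nn.BatchNorm2d(8),
    torch.nn.ReLU(),
)

# Recursively replaces every BatchNorm*d layer with SyncBatchNorm,
# carrying over weights, biases, and running stats.
sync_model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)
print(type(sync_model[1]).__name__)  # SyncBatchNorm
```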
## Checkpointing

To properly save and load your `amp` training, we introduce `amp.state_dict()`, which contains all `loss_scaler`s and their corresponding unskipped steps, as well as `amp.load_state_dict()` to restore these attributes.

In order to get bitwise accuracy, we recommend the following workflow:
```python
# Initialization
opt_level = 'O1'
model, optimizer = amp.initialize(model, optimizer, opt_level=opt_level)

# Train your model
...
with amp.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()
...

# Save checkpoint
checkpoint = {
    'model': model.state_dict(),
    'optimizer': optimizer.state_dict(),
    'amp': amp.state_dict()
}
torch.save(checkpoint, 'amp_checkpoint.pt')
...

# Restore
model = ...
optimizer = ...
checkpoint = torch.load('amp_checkpoint.pt')
model, optimizer = amp.initialize(model, optimizer, opt_level=opt_level)
model.load_state_dict(checkpoint['model'])
optimizer.load_state_dict(checkpoint['optimizer'])
amp.load_state_dict(checkpoint['amp'])

# Continue training
...
```
Note that we recommend restoring the model using the same `opt_level`. Also note that we recommend calling the `load_state_dict` methods after `amp.initialize`.
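With native PyTorch AMP the same idea applies: the loss scaler's state lives in `torch.cuda.amp.GradScaler`, which exposes its own `state_dict()`/`load_state_dict()`. A hedged sketch of the equivalent workflow (the file name and toy objects are placeholders):

```python
import torch

model = torch.nn.Linear(8, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler(enabled=torch.cuda.is_available())

# Save: include the scaler's state alongside model and optimizer.
checkpoint = {
    "model": model.state_dict(),
    "optimizer": optimizer.state_dict(),
    "scaler": scaler.state_dict(),
}
torch.save(checkpoint, "native_amp_checkpoint.pt")

# Restore: load all three before resuming training.
checkpoint = torch.load("native_amp_checkpoint.pt")
model.load_state_dict(checkpoint["model"])
optimizer.load_state_dict(checkpoint["optimizer"])
scaler.load_state_dict(checkpoint["scaler"])
```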
Each `apex.contrib` module requires one or more install options other than `--cpp_ext` and `--cuda_ext`. Note that contrib modules do not necessarily support stable PyTorch releases.
NVIDIA PyTorch Containers are available on NGC: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch. The containers come with all the custom extensions available at the moment. See the NGC documentation for details.
To install Apex from source, we recommend using the nightly PyTorch obtainable from https://github.com/pytorch/pytorch. The latest stable release obtainable from https://pytorch.org should also work.

We recommend installing Ninja to make compilation faster.
For performance and full functionality, we recommend installing Apex with CUDA and C++ extensions via

```bash
git clone https://github.com/NVIDIA/apex
cd apex
# if pip >= 23.1 (ref: https://pip.pypa.io/en/stable/news/#v23-1), which supports multiple `--config-settings` with the same key...
pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --config-settings "--build-option=--cpp_ext" --config-settings "--build-option=--cuda_ext" ./
# otherwise
pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --global-option="--cpp_ext" --global-option="--cuda_ext" ./
```
APEX also supports a Python-only build via

```bash
pip install -v --disable-pip-version-check --no-build-isolation --no-cache-dir ./
```

A Python-only build omits:

- `apex.optimizers.FusedAdam`
- `apex.normalization.FusedLayerNorm` and `apex.normalization.FusedRMSNorm`
- `apex.parallel.SyncBatchNorm`
- `apex.parallel.DistributedDataParallel` and `apex.amp`

`DistributedDataParallel`, `amp`, and `SyncBatchNorm` will still be usable, but they may be slower.

The full build with `--cpp_ext` and `--cuda_ext` (shown above) may work if you were able to build PyTorch from source on your system. A Python-only build via `pip install -v --no-cache-dir .` is more likely to work.
If you installed PyTorch in a Conda environment, make sure to install Apex in that same environment. If a requirement of a module is not met, then it will not be built.
| Module Name | Install Option | Misc |
|---|---|---|
| `apex_C` | `--cpp_ext` | |
| `amp_C` | `--cuda_ext` | |
| `syncbn` | `--cuda_ext` | |
| `fused_layer_norm_cuda` | `--cuda_ext` | `apex.normalization` |
| `mlp_cuda` | `--cuda_ext` | |
| `scaled_upper_triang_masked_softmax_cuda` | `--cuda_ext` | |
| `generic_scaled_masked_softmax_cuda` | `--cuda_ext` | |
| `scaled_masked_softmax_cuda` | `--cuda_ext` | |
| `fused_weight_gradient_mlp_cuda` | `--cuda_ext` | Requires CUDA>=11 |
| `permutation_search_cuda` | `--permutation_search` | `apex.contrib.sparsity` |
| `bnp` | `--bnp` | `apex.contrib.groupbn` |
| `xentropy` | `--xentropy` | `apex.contrib.xentropy` |
| `focal_loss_cuda` | `--focal_loss` | `apex.contrib.focal_loss` |
| `fused_index_mul_2d` | `--index_mul_2d` | `apex.contrib.index_mul_2d` |
| `fused_adam_cuda` | `--deprecated_fused_adam` | `apex.contrib.optimizers` |
| `fused_lamb_cuda` | `--deprecated_fused_lamb` | `apex.contrib.optimizers` |
| `fast_layer_norm` | `--fast_layer_norm` | `apex.contrib.layer_norm`, different from `fused_layer_norm` |
| `fmhalib` | `--fmha` | `apex.contrib.fmha` |
| `fast_multihead_attn` | `--fast_multihead_attn` | `apex.contrib.multihead_attn` |
| `transducer_joint_cuda` | `--transducer` | `apex.contrib.transducer` |
| `transducer_loss_cuda` | `--transducer` | `apex.contrib.transducer` |
| `cudnn_gbn_lib` | `--cudnn_gbn` | Requires cuDNN>=8.5, `apex.contrib.cudnn_gbn` |
| `peer_memory_cuda` | `--peer_memory` | `apex.contrib.peer_memory` |
| `nccl_p2p_cuda` | `--nccl_p2p` | Requires NCCL>=2.10, `apex.contrib.nccl_p2p` |
| `fast_bottleneck` | `--fast_bottleneck` | Requires `peer_memory_cuda` and `nccl_p2p_cuda`, `apex.contrib.bottleneck` |
| `fused_conv_bias_relu` | `--fused_conv_bias_relu` | Requires cuDNN>=8.4, `apex.contrib.conv_bias_relu` |