`distributed_data_parallel.py` and `run.sh` show an example using Amp with `apex.parallel.DistributedDataParallel` or `torch.nn.parallel.DistributedDataParallel` and the PyTorch multiprocess launcher script, `torch.distributed.launch`.

The use of Amp with `DistributedDataParallel` does not need to change from ordinary single-process use. The only gotcha is that wrapping your model with `DistributedDataParallel` must come after the call to `amp.initialize`.

Test via
```bash
bash run.sh
```
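The required ordering can be sketched as follows. This is a minimal illustration, not the example script itself: the model and optimizer are placeholders, and it assumes a CUDA machine with apex installed, launched via `torch.distributed.launch` (which sets `--local_rank` per process).

```python
import argparse

import torch
import torch.nn as nn
from apex import amp
from apex.parallel import DistributedDataParallel as DDP

# torch.distributed.launch passes --local_rank to each spawned process
parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=0)
args = parser.parse_args()

torch.cuda.set_device(args.local_rank)
torch.distributed.init_process_group(backend="nccl", init_method="env://")

# Hypothetical model/optimizer, purely for illustration
model = nn.Linear(10, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# 1) Call amp.initialize FIRST, on the bare (unwrapped) model
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

# 2) Only THEN wrap the model with DistributedDataParallel
model = DDP(model)
```

If the order is reversed, Amp patches the `DistributedDataParallel` wrapper rather than the underlying model, which breaks the casting and loss-scaling machinery.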
This is intended purely as an instructional example, not a performance showcase.