Automatic Mixed Precision
On supported hardware, fastxtend can train in both float16 and bfloat16 automatic mixed precision.
For details on float16 mixed precision training, please see the fastai mixed precision documentation.
BF16 mixed precision support has been upstreamed into fastai 2.7.13.
MixedPrecision Callback
fastxtend’s MixedPrecision is a drop-in replacement for fastai.callback.fp16.MixedPrecision for float16 mixed precision. Set amp_mode to train in bfloat16 mixed precision, or use Learner.to_bf16.
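As a minimal sketch, the callback can be passed directly to a Learner. The import path fastxtend.callback.amp is an assumption based on a typical fastxtend install, and dls and model are placeholders for your own data and model:

```python
from fastai.learner import Learner
# Assumption: MixedPrecision and AMPMode are importable from fastxtend.callback.amp;
# adjust the import to match your fastxtend version.
from fastxtend.callback.amp import MixedPrecision, AMPMode

# `dls` and `model` stand in for your own DataLoaders and model
learn = Learner(dls, model, cbs=MixedPrecision(amp_mode=AMPMode.BF16))
learn.fit_one_cycle(1)
```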
AMPMode
AMPMode (value, names=None, module=None, qualname=None, type=None, start=1)
Automatic mixed precision modes for ease of completion
MixedPrecision
MixedPrecision (amp_mode:str|AMPMode=<AMPMode.FP16: 'fp16'>, init_scale=65536.0, growth_factor=2.0, backoff_factor=0.5, growth_interval=2000, enabled=True)
Mixed precision training using PyTorch’s Automatic Mixed Precision (AMP)
| | Type | Default | Details |
|---|---|---|---|
| amp_mode | str \| AMPMode | AMPMode.FP16 | Mixed precision training mode. Supports fp16 and bf16. |
| init_scale | float | 65536.0 | |
| growth_factor | float | 2.0 | |
| backoff_factor | float | 0.5 | |
| growth_interval | int | 2000 | |
| enabled | bool | True | |
amp_mode accepts both AMPMode enums and the strings 'fp16' or 'bf16'.
Passing GradScaler arguments to MixedPrecision when training in bfloat16 has no effect, as bfloat16 mixed precision does not use a gradient scaler.
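For illustration, a short sketch of the accepted amp_mode forms (import path assumed as above):

```python
from fastxtend.callback.amp import MixedPrecision, AMPMode  # assumed import path

# Both forms configure bfloat16 mixed precision
cb_enum = MixedPrecision(amp_mode=AMPMode.BF16)
cb_str = MixedPrecision(amp_mode='bf16')

# GradScaler arguments only take effect in float16 mode; bfloat16 ignores them
cb_fp16 = MixedPrecision(amp_mode='fp16', init_scale=2.**14, growth_interval=1000)
```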
Convenience Methods
In addition to the fastai convenience methods, fastxtend adds Learner.to_bf16 for training in bfloat16 mixed precision.
Learner.to_fp16
Learner.to_fp16 (init_scale=65536.0, growth_factor=2.0, backoff_factor=0.5, growth_interval=2000, enabled=True)
Set Learner to float16 mixed precision using PyTorch AMP
Learner.to_bf16
Learner.to_bf16 ()
Set Learner to bfloat16 mixed precision using PyTorch AMP
Learner.to_fp32
Learner.to_fp32 ()
Set Learner to float32 precision
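A brief usage sketch, assuming learn is an existing fastai or fastxtend Learner with the fastxtend callbacks imported:

```python
# Pick one precision mode at a time
learn = learn.to_fp16()   # float16 mixed precision; GradScaler arguments are optional
# or
learn = learn.to_bf16()   # bfloat16 mixed precision (requires supported hardware)

# Revert to full float32 precision
learn = learn.to_fp32()
```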