A collection of utility methods.


 free_gpu_memory (learn:fastai.learner.Learner, ...)

Frees GPU memory using gc.collect and torch.cuda.empty_cache.
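The pattern described above can be sketched in plain Python. This is a stand-in, not the library's implementation: the Learner handling is simplified, and the torch import is guarded so the sketch runs even without PyTorch or a GPU.

```python
import gc

def free_gpu_memory(learn=None):
    "Sketch: release Python references, then release cached CUDA memory."
    del learn        # drop this function's reference to the Learner
    gc.collect()     # reclaim unreachable Python objects
    try:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()  # return cached CUDA blocks to the driver
    except ImportError:
        pass         # torch not installed; nothing GPU-side to free
```

Calling gc.collect before torch.cuda.empty_cache matters: cached CUDA blocks are only released once the Python tensors referencing them have been garbage collected.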


 less_random (seed=42, deterministic=True)

Stores and restores the state of the random number generators. Sets the random seed for random, torch, and numpy. Does not set torch.backends.cudnn.benchmark = False.

A random state manager which provides some reproducibility without sacrificing potential training speed.

Unlike fastai.torch_core.no_random, less_random does not set torch.backends.cudnn.benchmark = False. This allows PyTorch to select the fastest CUDA kernels and potentially train faster than no_random.

less_random training runs on the same GPU, PyTorch, and CUDA setup should be close to no_random in reproducibility, but runs across different hardware or software setups will be less reproducible than with no_random.
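The store-and-restore behavior can be sketched as a context manager. To stay self-contained this sketch seeds only the stdlib random generator; the real helper also seeds torch and numpy and handles the deterministic flag, as described above.

```python
import random
from contextlib import contextmanager

@contextmanager
def less_random(seed=42, deterministic=True):
    "Sketch: seed the RNG inside the block, restore its prior state on exit."
    state = random.getstate()   # store the current RNG state
    random.seed(seed)           # reseed for reproducibility inside the block
    try:
        yield
    finally:
        random.setstate(state)  # restore the stored state, seeded or not
```

Because the prior state is restored on exit, code after the with block continues from the random stream it would have had anyway, so wrapping one training run does not perturb the next.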


 scale_time (val:float, spec:str='#0.4G')

Scales fractional-second time values to a readable unit and returns them formatted to spec.
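A minimal sketch of this kind of scaling, assuming the common second/millisecond/microsecond/nanosecond breakpoints (the exact units and thresholds in the library may differ):

```python
def scale_time(val: float, spec: str = '#0.4G') -> str:
    "Sketch: scale a non-negative fractional-second value and format to spec."
    if val >= 1 or val == 0:
        unit, scaled = 's', val
    elif val >= 1e-3:
        unit, scaled = 'ms', val * 1e3
    elif val >= 1e-6:
        unit, scaled = 'µs', val * 1e6
    else:
        unit, scaled = 'ns', val * 1e9
    # '#0.4G' keeps four significant digits and trailing zeros,
    # so 0.0015 renders as '1.500 ms' rather than '1.5 ms'
    return f'{scaled:{spec}} {unit}'
```

The '#' flag in the default spec is what preserves trailing zeros with the G presentation type, giving timing output a consistent width across magnitudes.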