A collection of utility methods.


free_gpu_memory(learn:Learner, dls:DataLoaders=None)

Frees GPU memory using gc.collect and torch.cuda.empty_cache.
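A minimal sketch of what such a helper might look like, assuming the stated gc.collect / torch.cuda.empty_cache behavior (the actual implementation may differ):

```python
import gc

def free_gpu_memory(learn, dls=None):
    # Hypothetical sketch: drop the local references to the Learner (and
    # optionally the DataLoaders), run Python garbage collection, then
    # release PyTorch's cached CUDA memory. Note that `del` here only
    # removes the local names; callers must also drop their own references
    # for the objects to actually be collected.
    del learn
    if dls is not None:
        del dls
    gc.collect()
    try:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()
    except ImportError:
        pass  # torch not installed; nothing GPU-side to free
```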


less_random(seed=42, deterministic=True)

Stores and retrieves the state of the random number generators. Sets the random seed for random, torch, and numpy. Does not set torch.backends.cudnn.benchmark = False.

A random state manager that provides some reproducibility without sacrificing potential training speed.

Unlike fast.ai's no_random, less_random does not set torch.backends.cudnn.benchmark = False, so training can run faster. Training runs on the same GPU, PyTorch, and CUDA setup should be reproducible, but different hardware/software setups will likely be less reproducible than when using no_random.
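The behavior above can be sketched as a context manager that saves the RNG states on entry, seeds random, numpy, and torch, and restores the saved states on exit. This is a hedged sketch of the described behavior, not the actual implementation:

```python
import random
from contextlib import contextmanager

@contextmanager
def less_random(seed=42, deterministic=True):
    # Hypothetical sketch: save current RNG states, seed every generator,
    # then restore the saved states on exit. Deliberately leaves
    # torch.backends.cudnn.benchmark untouched for training speed.
    py_state = random.getstate()
    try:
        import numpy as np
        np_state = np.random.get_state()
        np.random.seed(seed)
    except ImportError:
        np = None  # numpy not installed; skip it
    try:
        import torch
        torch_state = torch.get_rng_state()
        torch.manual_seed(seed)
        torch.backends.cudnn.deterministic = deterministic
    except ImportError:
        torch = None  # torch not installed; skip it
    random.seed(seed)
    try:
        yield
    finally:
        # Restore the pre-context RNG states.
        random.setstate(py_state)
        if np is not None:
            np.random.set_state(np_state)
        if torch is not None:
            torch.set_rng_state(torch_state)
```

Usage mirrors fast.ai's no_random: wrap the seeded code in a `with less_random(seed):` block, and the surrounding program's random state is left undisturbed afterwards.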