Utility
free_gpu_memory
free_gpu_memory (learn:fastai.learner.Learner, dls:fastai.data.core.DataLoaders=None)
Frees GPU memory using `gc.collect` and `torch.cuda.empty_cache`
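The pattern behind this helper can be sketched as follows. `free_gpu_memory_sketch` is a hypothetical stand-in for illustration, not the library's implementation:

```python
import gc

def free_gpu_memory_sketch(learn=None, dls=None):
    """Hypothetical sketch of the free-GPU-memory pattern: drop
    references to the Learner and DataLoaders, run the garbage
    collector, then release PyTorch's cached CUDA allocations."""
    del learn, dls      # drop local references to the large objects
    gc.collect()        # reclaim Python-side memory
    try:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()  # return cached blocks to the driver
    except ImportError:
        pass            # torch not installed; gc.collect already ran
```

Note that `empty_cache` only releases memory PyTorch has cached but is not using; live tensors must be dereferenced first, which is why the deletions and `gc.collect` come before it.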
less_random
less_random (seed:int=42, deterministic:Optional[bool]=None, benchmark:Optional[bool]=None)
Stores and retrieves state of random number generators. Sets random seed for `random`, `torch`, and `numpy`. Does not set `torch.backends.cudnn.benchmark` or `torch.backends.cudnn.deterministic` by default.
| | Type | Default | Details |
|---|---|---|---|
| seed | int | 42 | Seed for `random`, `torch`, and `numpy` |
| deterministic | bool \| None | None | Set `torch.backends.cudnn.deterministic` if not None |
| benchmark | bool \| None | None | Set `torch.backends.cudnn.benchmark` if not None |
A random state manager which provides some reproducibility without sacrificing potential training speed. Unlike `fastai.torch_core.no_random`, `less_random` does not set `torch.backends.cudnn.benchmark` or `torch.backends.cudnn.deterministic` by default. `less_random` training runs on the same GPU, PyTorch, and CUDA setup should be nearly as reproducible as `no_random`, but runs on a different hardware or software setup will be less reproducible than with `no_random`.
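The save-and-restore behavior can be sketched with a minimal context manager. This sketch only handles the `random` module; the real `less_random` also manages the `torch` and `numpy` generators the same way:

```python
import random
from contextlib import contextmanager

@contextmanager
def less_random_sketch(seed=42):
    """Hypothetical sketch: save the current `random` state, seed it,
    and restore the prior state on exit so outer code is unaffected."""
    state = random.getstate()   # snapshot the outer RNG state
    random.seed(seed)
    try:
        yield
    finally:
        random.setstate(state)  # restore it even if the body raises

# Two runs with the same seed produce identical draws
with less_random_sketch(42):
    a = random.random()
with less_random_sketch(42):
    b = random.random()
assert a == b
```

Restoring the saved state in `finally` is what lets the manager be nested inside other seeded code without clobbering it.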
scale_time
scale_time (val:float, spec:str='#0.4G')
Scale fractional second `time` values and return formatted to `spec`
pil_to_numpy
pil_to_numpy (img:PIL.Image.Image)
Fast conversion of Pillow `Image` to NumPy `NDArray`
convert_to_int
convert_to_int (s)