```python
with less_random():
    _,axs = plt.subplots(1,3,figsize=(20,4))
    for ax, mode in zip(axs, [GrayscaleMode.Luma601, GrayscaleMode.Luma709, GrayscaleMode.Average]):
        f = partial(Grayscale(p=1, mode=mode), split_idx=0)
        f(_batch(img.clone())).squeeze().show(ctx=ax, title=f'mode={mode}')
```
Additional Batch Augmentations
GrayscaleMode
GrayscaleMode (value, names=None, module=None, qualname=None, type=None, start=1)
GrayscaleModes for Grayscale
Grayscale
Grayscale (p:float=0.1, mode:GrayscaleMode=<GrayscaleMode.Random: 3>)
Convert RGB images into grayscale using luma_bt.601, luma_bt.709, averaging, or a randomly selected method
|  | Type | Default | Details |
|---|---|---|---|
| p | float | 0.1 | Per-item probability |
| mode | GrayscaleMode | GrayscaleMode.Random | GrayscaleMode to apply to images. Random applies all three element-wise with equal probability |
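The three deterministic modes differ only in their per-channel weights. A minimal NumPy sketch of the idea (not fastxtend's implementation; the helper name `to_grayscale` is illustrative, but the BT.601/BT.709 luma coefficients are the standard ones):

```python
import numpy as np

# Standard luma coefficients for R, G, B
LUMA_601 = np.array([0.299, 0.587, 0.114])     # ITU-R BT.601
LUMA_709 = np.array([0.2126, 0.7152, 0.0722])  # ITU-R BT.709
AVERAGE  = np.array([1/3, 1/3, 1/3])           # simple channel mean

def to_grayscale(img, weights):
    "img: float array of shape (..., 3, H, W); returns same shape with identical channels"
    # weighted sum over the channel axis
    gray = np.tensordot(weights, img, axes=([0], [img.ndim - 3]))
    # broadcast the single gray plane back to three identical channels
    return np.repeat(gray[..., None, :, :], 3, axis=-3)

rgb = np.random.rand(2, 3, 4, 4)  # a batch of two 4x4 RGB images
g601 = to_grayscale(rgb, LUMA_601)
assert g601.shape == rgb.shape
assert np.allclose(g601[:, 0], g601[:, 1])  # all channels identical after conversion
```

`GrayscaleMode.Random` then amounts to picking one of these three weight vectors per batch element with equal probability.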
ChannelDrop
ChannelDrop (p:float=0.1, replace:float|None=None)
Drop an entire channel by replacing it with a random solid value in [0, 1)
|  | Type | Default | Details |
|---|---|---|---|
| p | float | 0.1 | Per-item probability |
| replace | float \| None | None | Set constant replacement value. Defaults to element-wise random value in [0, 1) |
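Conceptually, the transform picks one channel per selected image and overwrites it with a solid fill. A rough NumPy sketch under that reading (illustrative only, not the library code; `channel_drop` and `rng` are made-up names):

```python
import numpy as np

def channel_drop(batch, p=0.1, replace=None, rng=np.random.default_rng()):
    "batch: float array (N, C, H, W); drop one random channel per selected image"
    batch = batch.copy()
    n, c = batch.shape[:2]
    for i in range(n):
        if rng.random() < p:                          # per-item probability
            ch = rng.integers(c)                      # channel to drop
            val = rng.random() if replace is None else replace
            batch[i, ch] = val                        # solid fill in [0, 1)
    return batch

x = np.random.rand(4, 3, 8, 8)
out = channel_drop(x, p=1.0, replace=0.0)
# with p=1 every image has exactly one all-zero channel
assert all((out[i] == 0).all(axis=(1, 2)).sum() == 1 for i in range(4))
```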
```python
with less_random():
    _,axs = plt.subplots(1,3,figsize=(20,4))
    f = ChannelDrop(p=1)
    for ax in axs:
        f(_batch(img.clone()), split_idx=0).squeeze().show(ctx=ax)
```
RandomNoise
RandomNoise (p:float=0.25, stdev:float|tuple=(0.1, 0.25), random:bool=True)
Add random Gaussian noise based on stdev
|  | Type | Default | Details |
|---|---|---|---|
| p | float | 0.25 | Per-item probability |
| stdev | float \| tuple | (0.1, 0.25) | Maximum or range of the standard deviation of added noise |
| random | bool | True | Randomize standard deviation of added noise between [stdev[0], stdev[1]) |
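Additive Gaussian noise itself is one line; the NumPy sketch below (illustrative, not fastxtend's implementation) shows both the fixed-stdev case and the randomized-stdev case described in the table:

```python
import numpy as np

def random_noise(batch, stdev=(0.1, 0.25), random=True, rng=np.random.default_rng()):
    "Add zero-mean Gaussian noise; stdev is a maximum (random=False) or a [lo, hi) range"
    if random:
        lo, hi = stdev if isinstance(stdev, tuple) else (0.0, stdev)
        sd = rng.uniform(lo, hi)                  # one stdev sampled per call
    else:
        sd = stdev[1] if isinstance(stdev, tuple) else stdev
    return batch + rng.normal(0.0, sd, size=batch.shape)

x = np.zeros((1, 3, 16, 16))
noisy = random_noise(x, stdev=0.2, random=False)
assert noisy.shape == x.shape
assert 0.1 < noisy.std() < 0.3  # sample stdev should land near 0.2
```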
Larger images can use a higher stdev, as seen with this 600x400 pixel example:
```python
with less_random():
    _,axs = plt.subplots(1,4,figsize=(20,4))
    for ax, stdev in zip(axs, [0.1, 0.25, 0.5, 0.75]):
        f = partial(RandomNoise(p=1, stdev=stdev, random=False), split_idx=0)
        norm_apply_denorm(_batch(img.clone()), f, nrm).squeeze().show(ctx=ax, title=f'stdev={stdev}')
```
But smaller images should use a lower stdev, as seen with this 150x100 pixel example:
```python
with less_random():
    _,axs = plt.subplots(1,4,figsize=(20,4))
    r = Resize((100, 150))
    for ax, stdev in zip(axs, [0.1, 0.2, 0.3, 0.5]):
        f = partial(RandomNoise(p=1, stdev=stdev, random=False), split_idx=0)
        norm_apply_denorm(_batch(r(img.clone())), f, nrm).squeeze().show(ctx=ax, title=f'stdev={stdev}')
```
RandomErasingBatch
RandomErasingBatch (p:float=0.25, sl:float=0.0, sh:float=0.3, min_aspect:float=0.3, max_count:int=1, element:bool=False)
Randomly selects a rectangle region in an image and randomizes its pixels.
|  | Type | Default | Details |
|---|---|---|---|
| p | float | 0.25 | Per-item probability |
| sl | float | 0.0 | Minimum proportion of erased area |
| sh | float | 0.3 | Maximum proportion of erased area |
| min_aspect | float | 0.3 | Minimum aspect ratio of erased area |
| max_count | int | 1 | Maximum number of erasing blocks per image, area per box is scaled by count |
| element | bool | False | Loop through the batch and apply unique erasing element-wise |
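The erasing step follows the usual Random Erasing recipe: sample a target area and aspect ratio, then fill that rectangle with random pixels. A simplified single-box NumPy sketch (illustrative only; the real transform also handles batching, max_count, and per-element application):

```python
import math
import numpy as np

def random_erase(img, sl=0.0, sh=0.3, min_aspect=0.3, rng=np.random.default_rng()):
    "img: (C, H, W); erase one rectangle covering sl..sh of the image area"
    c, h, w = img.shape
    img = img.copy()
    area = h * w * rng.uniform(sl, sh)                 # target erased area
    log_ratio = (math.log(min_aspect), math.log(1 / min_aspect))
    aspect = math.exp(rng.uniform(*log_ratio))         # sample aspect ratio log-uniformly
    eh = min(h, max(1, int(round(math.sqrt(area * aspect)))))
    ew = min(w, max(1, int(round(math.sqrt(area / aspect)))))
    top = rng.integers(0, h - eh + 1)
    left = rng.integers(0, w - ew + 1)
    img[:, top:top + eh, left:left + ew] = rng.random((c, eh, ew))  # noise fill
    return img

x = np.zeros((3, 32, 32))
out = random_erase(x, sl=0.1, sh=0.1)
assert out.shape == x.shape
assert (out != 0).any()  # some pixels were replaced with random values
```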
```python
with less_random():
    _,axs = plt.subplots(1,4,figsize=(18,4))
    for ax, area in zip(axs, [0.05, 0.1, 0.2, 0.3]):
        f = partial(RandomErasingBatch(p=1, sl=area, sh=area), split_idx=0)
        norm_apply_denorm(_batch(img.clone()), f, nrm).squeeze().show(ctx=ax, title=f'area={area}')
```
affine_transforms
affine_transforms (mult:float=1.0, do_flip:bool=True, flip_vert:bool=False, max_rotate:float=10.0, min_zoom:float=1.0, max_zoom:float=1.1, max_warp:float=0.2, p_affine:float=0.75, xtra_tfms:list=None, size:Union[int,tuple]=None, mode:str='bilinear', pad_mode='reflection', align_corners=True, batch=False, min_scale=1.0)
Utility function to easily create a list of affine transforms: flip, rotate, zoom, and warp.
|  | Type | Default | Details |
|---|---|---|---|
| mult | float | 1.0 | Multiplier applied to max_rotate and max_warp |
| do_flip | bool | True | Random flipping |
| flip_vert | bool | False | Flip vertically and horizontally |
| max_rotate | float | 10.0 | Maximum degree of rotation |
| min_zoom | float | 1.0 | Minimum zoom |
| max_zoom | float | 1.1 | Maximum zoom |
| max_warp | float | 0.2 | Maximum warp |
| p_affine | float | 0.75 | Probability of applying affine transformation |
| xtra_tfms | list | None | Custom transformations |
| size | int \| tuple | None | Output size, duplicated if one value is specified |
| mode | str | bilinear | PyTorch F.grid_sample interpolation |
| pad_mode | str | reflection | A PadMode |
| align_corners | bool | True | PyTorch F.grid_sample align_corners |
| batch | bool | False | Apply identical transformation to entire batch |
| min_scale | float | 1.0 | Minimum scale of the crop, relative to image area |
affine_transforms is identical to fastai.vision.augment.aug_transforms, except with the lighting transforms removed. It's intended for use with the fastai+FFCV Loader, using FFCV Numba transforms for lighting.
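The mode, pad_mode, and align_corners arguments map directly onto PyTorch's F.affine_grid / F.grid_sample. A minimal plain-PyTorch sketch of a single batch-wide rotation (not the fastai transform itself; `rotate_batch` is an illustrative name):

```python
import math
import torch
import torch.nn.functional as F

def rotate_batch(x, degrees, mode='bilinear', pad_mode='reflection', align_corners=True):
    "x: (N, C, H, W); rotate the whole batch by the same angle"
    theta = math.radians(degrees)
    cos, sin = math.cos(theta), math.sin(theta)
    # 2x3 affine matrix in normalized coordinates, broadcast over the batch
    mat = torch.tensor([[cos, -sin, 0.0], [sin, cos, 0.0]])
    mat = mat.expand(x.size(0), -1, -1)
    grid = F.affine_grid(mat, x.shape, align_corners=align_corners)
    return F.grid_sample(x, grid, mode=mode, padding_mode=pad_mode,
                         align_corners=align_corners)

x = torch.rand(2, 3, 8, 8)
out = rotate_batch(x, 10.0)
assert out.shape == x.shape
```

Applying one matrix to the whole batch corresponds to `batch=True` above; fastai's per-item transforms instead build one affine matrix per batch element.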