Vision Data

Additional helper functions to get data into a DataLoaders for vision applications

Batch Transforms on CPU

With enough DataLoader workers and fast CPU cores, using fastai’s GPU augmentations can lead to slower training than using CPU augmentations.

ImageCPUBlock and MaskCPUBlock are convenience functions for executing fastai batch transforms on the CPU.


source

PreBatchAsItem

 PreBatchAsItem (enc=None, dec=None, split_idx=None, order=None)

Converts Tensor from CHW to BCHW by adding a fake B dim


source

PostBatchAsItem

 PostBatchAsItem (enc=None, dec=None, split_idx=None, order=None)

Converts Tensor from BCHW to CHW by removing the fake B dim

PyTorch 2D image transforms expect tensors in BCHW format. PreBatchAsItem and PostBatchAsItem add and remove a fake batch dimension on a single TensorImage so fastai batch transforms can be applied to individual items.
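
To make the mechanics concrete, here is a minimal, hypothetical sketch of the same idea (not this module’s source): a pair of fastai Transforms that add a fake batch dimension on encode and remove it on decode. The AddFakeBatchDim and RemoveFakeBatchDim names are placeholders for illustration only.

import torch
from fastai.vision.all import TensorImage, Transform

class AddFakeBatchDim(Transform):
    "Placeholder sketch of the PreBatchAsItem idea: CHW -> BCHW"
    def encodes(self, x:TensorImage): return x.unsqueeze(0)
    def decodes(self, x:TensorImage): return x.squeeze(0)

class RemoveFakeBatchDim(Transform):
    "Placeholder sketch of the PostBatchAsItem idea: BCHW -> CHW"
    def encodes(self, x:TensorImage): return x.squeeze(0)
    def decodes(self, x:TensorImage): return x.unsqueeze(0)

img = TensorImage(torch.rand(3, 128, 128))   # a single CHW image
batched = AddFakeBatchDim()(img)             # shape: torch.Size([1, 3, 128, 128])
item = RemoveFakeBatchDim()(batched)         # shape: torch.Size([3, 128, 128])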


source

ImageCPUBlock

 ImageCPUBlock (cls:fastai.vision.core.PILBase=<class 'fastai.vision.core.PILImage'>)

A TransformBlock for images of cls for running batch_tfms on CPU


source

MaskCPUBlock

 MaskCPUBlock (codes:Union[Iterable[~T],MutableSequence[~T],fastcore.foundation.L,fastcore.basics.fastuple,NoneType]=None)

A TransformBlock for segmentation masks, potentially with codes, for running batch_tfms on CPU
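
MaskCPUBlock slots into a segmentation DataBlock the same way. The following is a hypothetical sketch, assuming the CamVid-Tiny dataset layout (images, labels, and codes.txt) and that ImageCPUBlock and MaskCPUBlock are imported alongside fastai.vision.all; it is an illustration, not a tested recipe.

from fastai.vision.all import *
# assumes ImageCPUBlock and MaskCPUBlock are imported from this module

path = untar_data(URLs.CAMVID_TINY)
codes = np.loadtxt(path/'codes.txt', dtype=str)

def label_fn(o): return path/'labels'/f'{o.stem}_P{o.suffix}'

camvid = DataBlock(blocks=(ImageCPUBlock, MaskCPUBlock(codes)),
                   splitter=RandomSplitter(seed=42),
                   get_items=get_image_files, get_y=label_fn,
                   item_tfms=aug_transforms(size=96),
                   batch_tfms=Normalize.from_stats(*imagenet_stats))
dls = camvid.dataloaders(path/'images', bs=8, num_workers=num_cpus())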

Batch Transforms on CPU Example

with no_random():
    imagenette = untar_data(URLs.IMAGENETTE_320)

    # ImageCPUBlock adds a fake batch dim per item, so aug_transforms, normally
    # GPU batch_tfms, run on the CPU in the DataLoader workers as item_tfms;
    # Normalize stays in batch_tfms and still runs on the collated batch
    dblock = DataBlock(blocks=(ImageCPUBlock, CategoryBlock),
                       splitter=GrandparentSplitter(valid_name='val'),
                       get_items=get_image_files, get_y=parent_label,
                       item_tfms=aug_transforms(size=128),
                       batch_tfms=Normalize.from_stats(*imagenet_stats))
    dls = dblock.dataloaders(imagenette, bs=9, num_workers=num_cpus())

    dls.show_batch()