# DataLoader

## fastai DataLoader Mixin
fastxtend's `DataLoaderMixin` allows adding fastai functionality to non-fastai DataLoaders. `DataLoaderMixin` supports batch transforms, `one_batch`, `show_batch`, and `show_results`, although inputs will need to be converted to fastai typed tensors for the show methods to work. For an example of using `DataLoaderMixin`, see the source code for `Loader`.
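The general pattern is multiple inheritance: a mixin class contributes convenience methods such as `one_batch` to any loader that iterates over batches. The sketch below is a simplified, hypothetical analogue of that pattern (the class names and internals are illustrative, not the actual fastxtend source):

```python
class BatchMixin:
    "Illustrative mixin: adds a one_batch-style method to any iterable loader."
    def one_batch(self):
        # Return the first processed batch yielded by the loader
        return next(iter(self))

class ListLoader(BatchMixin):
    "A toy loader that yields fixed-size batches from a list of items."
    def __init__(self, items, batch_size=2):
        self.items, self.batch_size = items, batch_size
    def __iter__(self):
        for i in range(0, len(self.items), self.batch_size):
            yield self.items[i:i + self.batch_size]

loader = ListLoader([1, 2, 3, 4, 5], batch_size=2)
print(loader.one_batch())  # → [1, 2]
```

In the real mixin, `one_batch` additionally runs the batch through any registered batch transforms before returning it.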
### DataLoaderMixin.one_batch

`DataLoaderMixin.one_batch()`

Return one processed batch of input(s) and target(s).
### DataLoaderMixin.show_batch

`DataLoaderMixin.show_batch(b:Optional[Tuple[torch.Tensor,...]]=None, max_n:int=9, ctxs=None, show:bool=True, unique:bool=False, **kwargs)`

Show `max_n` input(s) and target(s) from the batch.

| | Type | Default | Details |
|---|---|---|---|
| b | Tuple[Tensor, ...] \| None | None | Batch to show. If None, calls `one_batch` |
| max_n | int | 9 | Maximum number of items to show |
| ctxs | NoneType | None | List of `ctx` objects to show data. Could be a matplotlib axis, DataFrame, etc. |
| show | bool | True | If False, return decoded batch instead of showing it |
| unique | bool | False | Whether to show only one item |
| kwargs | | | |
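The `show` flag switches the method between displaying items and returning them decoded. A minimal sketch of that control flow, with the decoding step stubbed out (simplified and hypothetical, not the actual fastai/fastxtend implementation):

```python
def show_batch(b, max_n=9, show=True):
    "Illustrative sketch: display up to max_n items, or return them if show=False."
    decoded = b[:max_n]       # stand-in for decoding up to max_n items
    if not show:
        return decoded        # show=False: hand the decoded batch back to the caller
    for item in decoded:      # show=True: display each item (here, just print)
        print(item)

# show=False returns the decoded items instead of displaying them
pairs = [("img0", 0), ("img1", 1)]
print(show_batch(pairs, max_n=1, show=False))  # → [('img0', 0)]
```

Returning the decoded batch when `show=False` is useful when you want to inspect or post-process decoded items programmatically rather than render them.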
### DataLoaderMixin.show_results

`DataLoaderMixin.show_results(b, out, max_n:int=9, ctxs=None, show:bool=True, **kwargs)`

Show `max_n` results with input(s), target(s), and prediction(s).

| | Type | Default | Details |
|---|---|---|---|
| b | | | Batch to show results for |
| out | | | Predicted output from model for the batch |
| max_n | int | 9 | Maximum number of items to show |
| ctxs | NoneType | None | List of `ctx` objects to show data. Could be a matplotlib axis, DataFrame, etc. |
| show | bool | True | If False, return decoded batch instead of showing it |
| kwargs | | | |
### DataLoaderMixin.to

`DataLoaderMixin.to(device:Union[int,str,torch.device])`

Sets `self.device=device`.
### DataLoaderMixin.n_inp

`DataLoaderMixin.n_inp()`

Number of elements in a batch for model input.
### DataLoaderMixin.split_idx

`DataLoaderMixin.split_idx()`
### DataLoaderMixin.decode

`DataLoaderMixin.decode(b:Tuple[torch.Tensor,...])`

Decode batch `b`.
### DataLoaderMixin.decode_batch

`DataLoaderMixin.decode_batch(b:Tuple[torch.Tensor,...], max_n:int=9)`

Decode up to `max_n` input(s) from batch `b`.
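A batch here is a tuple of equal-length columns (inputs and targets), and decoding yields at most `max_n` per-item tuples. The function below is a hypothetical, simplified illustration of those semantics (the names and the identity `decode_item` are assumptions, not fastxtend's code, which decodes fastai typed tensors):

```python
def decode_batch(b, max_n=9, decode_item=lambda x: x):
    "Illustrative sketch: decode at most max_n items from a columnar batch b."
    # b is a tuple of equal-length sequences, e.g. (inputs, targets)
    n = min(max_n, len(b[0]))
    # Transpose the columns into per-item tuples, decoding each element
    return [tuple(decode_item(col[i]) for col in b) for i in range(n)]

batch = ([10, 20, 30], [0, 1, 0])  # (inputs, targets)
print(decode_batch(batch, max_n=2))  # → [(10, 0), (20, 1)]
```

`max_n` caps the work done: decoding (e.g. de-normalizing images) can be expensive, so only as many items as will actually be shown are decoded.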