XResNet Extended

fastai’s XResNet with more flexibility.

fastxtend’s XResNet is backwards compatible with fastai.vision.models.xresnet.XResNet.

It adds the following features to XResNet:

- A custom_head, plus selectable stem_pool, block_pool, and head_pool pooling layers on creation
- Per-ResBlock stochastic depth via stoch_depth
- Additional attention modules: Squeeze & Excitation, Efficient Channel Attention, Shuffle Attention, and Triplet Attention

ResNet Blocks


source

ResBlock

 ResBlock (expansion, ni, nf, stride=1, groups=1, attn_mod=None, nh1=None,
           nh2=None, dw=False, g2=1, sa=False, sym=False,
           norm_type=<NormType.Batch: 1>, act_cls=<class
           'torch.nn.modules.activation.ReLU'>, ndim=2, ks=3,
           block_pool=<function AvgPool>, pool_first=True, stoch_depth=0,
           padding=None, bias=None, bn_1st=True, transpose=False,
           init='auto', xtra=None, bias_std=0.01,
           dilation:Union[int,Tuple[int,int]]=1, padding_mode:str='zeros',
           device=None, dtype=None)

ResNet block from ni to nf with stride


source

ResNeXtBlock

 ResNeXtBlock (expansion, ni, nf, groups=32, stride=1, base_width=4,
               attn_mod=None, nh1=None, nh2=None, dw=False, g2=1,
               sa=False, sym=False, norm_type=<NormType.Batch: 1>,
               act_cls=<class 'torch.nn.modules.activation.ReLU'>, ndim=2,
               ks=3, block_pool=<function AvgPool>, pool_first=True,
               stoch_depth=0, padding=None, bias=None, bn_1st=True,
               transpose=False, init='auto', xtra=None, bias_std=0.01,
               dilation:Union[int,Tuple[int,int]]=1,
               padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
expansion
ni
nf
groups int 32
stride int 1
base_width int 4
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sa bool False
sym bool False
norm_type NormType NormType.Batch
act_cls type ReLU
ndim int 2
ks int 3
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

Squeeze & Excitation Blocks


source

SEBlock

 SEBlock (expansion, ni, nf, groups=1, se_reduction=16, stride=1,
          se_act_cls=<class 'torch.nn.modules.activation.ReLU'>,
          attn_mod=None, nh1=None, nh2=None, dw=False, g2=1, sa=False,
          sym=False, norm_type=<NormType.Batch: 1>, act_cls=<class
          'torch.nn.modules.activation.ReLU'>, ndim=2, ks=3,
          block_pool=<function AvgPool>, pool_first=True, stoch_depth=0,
          padding=None, bias=None, bn_1st=True, transpose=False,
          init='auto', xtra=None, bias_std=0.01,
          dilation:Union[int,Tuple[int,int]]=1, padding_mode:str='zeros',
          device=None, dtype=None)

A Squeeze and Excitation XResNet block. The se_act_cls activation can be set separately from act_cls.

Type Default Details
expansion
ni
nf
groups int 1
se_reduction int 16
stride int 1
se_act_cls type ReLU
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sa bool False
sym bool False
norm_type NormType NormType.Batch
act_cls type ReLU
ndim int 2
ks int 3
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

source

SEResNeXtBlock

 SEResNeXtBlock (expansion, ni, nf, groups=32, se_reduction=16, stride=1,
                 base_width=4, se_act_cls=<class
                 'torch.nn.modules.activation.ReLU'>, attn_mod=None,
                 nh1=None, nh2=None, dw=False, g2=1, sa=False, sym=False,
                 norm_type=<NormType.Batch: 1>, act_cls=<class
                 'torch.nn.modules.activation.ReLU'>, ndim=2, ks=3,
                 block_pool=<function AvgPool>, pool_first=True,
                 stoch_depth=0, padding=None, bias=None, bn_1st=True,
                 transpose=False, init='auto', xtra=None, bias_std=0.01,
                 dilation:Union[int,Tuple[int,int]]=1,
                 padding_mode:str='zeros', device=None, dtype=None)

A Squeeze and Excitation XResNeXt block. The se_act_cls activation can be set separately from act_cls.

Type Default Details
expansion
ni
nf
groups int 32
se_reduction int 16
stride int 1
base_width int 4
se_act_cls type ReLU
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sa bool False
sym bool False
norm_type NormType NormType.Batch
act_cls type ReLU
ndim int 2
ks int 3
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

Efficient Channel Attention Blocks


source

ECABlock

 ECABlock (expansion, ni, nf, groups=1, eca_ks=None, stride=1,
           attn_mod=None, nh1=None, nh2=None, dw=False, g2=1, sa=False,
           sym=False, norm_type=<NormType.Batch: 1>, act_cls=<class
           'torch.nn.modules.activation.ReLU'>, ndim=2, ks=3,
           block_pool=<function AvgPool>, pool_first=True, stoch_depth=0,
           padding=None, bias=None, bn_1st=True, transpose=False,
           init='auto', xtra=None, bias_std=0.01,
           dilation:Union[int,Tuple[int,int]]=1, padding_mode:str='zeros',
           device=None, dtype=None)

An Efficient Channel Attention XResNet Block

Type Default Details
expansion
ni
nf
groups int 1
eca_ks NoneType None
stride int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sa bool False
sym bool False
norm_type NormType NormType.Batch
act_cls type ReLU
ndim int 2
ks int 3
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

source

ECAResNeXtBlock

 ECAResNeXtBlock (expansion, ni, nf, groups=32, eca_ks=None, stride=1,
                  base_width=4, attn_mod=None, nh1=None, nh2=None,
                  dw=False, g2=1, sa=False, sym=False,
                  norm_type=<NormType.Batch: 1>, act_cls=<class
                  'torch.nn.modules.activation.ReLU'>, ndim=2, ks=3,
                  block_pool=<function AvgPool>, pool_first=True,
                  stoch_depth=0, padding=None, bias=None, bn_1st=True,
                  transpose=False, init='auto', xtra=None, bias_std=0.01,
                  dilation:Union[int,Tuple[int,int]]=1,
                  padding_mode:str='zeros', device=None, dtype=None)

An Efficient Channel Attention XResNeXtBlock

Type Default Details
expansion
ni
nf
groups int 32
eca_ks NoneType None
stride int 1
base_width int 4
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sa bool False
sym bool False
norm_type NormType NormType.Batch
act_cls type ReLU
ndim int 2
ks int 3
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

Shuffle Attention Blocks


source

SABlock

 SABlock (expansion, ni, nf, groups=1, sa_grps=64, stride=1,
          attn_mod=None, nh1=None, nh2=None, dw=False, g2=1, sa=False,
          sym=False, norm_type=<NormType.Batch: 1>, act_cls=<class
          'torch.nn.modules.activation.ReLU'>, ndim=2, ks=3,
          block_pool=<function AvgPool>, pool_first=True, stoch_depth=0,
          padding=None, bias=None, bn_1st=True, transpose=False,
          init='auto', xtra=None, bias_std=0.01,
          dilation:Union[int,Tuple[int,int]]=1, padding_mode:str='zeros',
          device=None, dtype=None)

A Shuffle Attention XResNet Block

Type Default Details
expansion
ni
nf
groups int 1
sa_grps int 64
stride int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sa bool False
sym bool False
norm_type NormType NormType.Batch
act_cls type ReLU
ndim int 2
ks int 3
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

source

SAResNeXtBlock

 SAResNeXtBlock (expansion, ni, nf, groups=32, sa_grps=64, stride=1,
                 base_width=4, attn_mod=None, nh1=None, nh2=None,
                 dw=False, g2=1, sa=False, sym=False,
                 norm_type=<NormType.Batch: 1>, act_cls=<class
                 'torch.nn.modules.activation.ReLU'>, ndim=2, ks=3,
                 block_pool=<function AvgPool>, pool_first=True,
                 stoch_depth=0, padding=None, bias=None, bn_1st=True,
                 transpose=False, init='auto', xtra=None, bias_std=0.01,
                 dilation:Union[int,Tuple[int,int]]=1,
                 padding_mode:str='zeros', device=None, dtype=None)

A Shuffle Attention XResNeXtBlock

Type Default Details
expansion
ni
nf
groups int 32
sa_grps int 64
stride int 1
base_width int 4
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sa bool False
sym bool False
norm_type NormType NormType.Batch
act_cls type ReLU
ndim int 2
ks int 3
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

Triplet Attention Blocks


source

TABlock

 TABlock (expansion, ni, nf, groups=1, ta_ks=7, stride=1, attn_mod=None,
          nh1=None, nh2=None, dw=False, g2=1, sa=False, sym=False,
          norm_type=<NormType.Batch: 1>, act_cls=<class
          'torch.nn.modules.activation.ReLU'>, ndim=2, ks=3,
          block_pool=<function AvgPool>, pool_first=True, stoch_depth=0,
          padding=None, bias=None, bn_1st=True, transpose=False,
          init='auto', xtra=None, bias_std=0.01,
          dilation:Union[int,Tuple[int,int]]=1, padding_mode:str='zeros',
          device=None, dtype=None)

A Triplet Attention XResNet Block

Type Default Details
expansion
ni
nf
groups int 1
ta_ks int 7
stride int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sa bool False
sym bool False
norm_type NormType NormType.Batch
act_cls type ReLU
ndim int 2
ks int 3
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

source

TAResNeXtBlock

 TAResNeXtBlock (expansion, ni, nf, groups=32, ta_ks=7, stride=1,
                 base_width=4, attn_mod=None, nh1=None, nh2=None,
                 dw=False, g2=1, sa=False, sym=False,
                 norm_type=<NormType.Batch: 1>, act_cls=<class
                 'torch.nn.modules.activation.ReLU'>, ndim=2, ks=3,
                 block_pool=<function AvgPool>, pool_first=True,
                 stoch_depth=0, padding=None, bias=None, bn_1st=True,
                 transpose=False, init='auto', xtra=None, bias_std=0.01,
                 dilation:Union[int,Tuple[int,int]]=1,
                 padding_mode:str='zeros', device=None, dtype=None)

A Triplet Attention XResNeXtBlock

Type Default Details
expansion
ni
nf
groups int 32
ta_ks int 7
stride int 1
base_width int 4
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sa bool False
sym bool False
norm_type NormType NormType.Batch
act_cls type ReLU
ndim int 2
ks int 3
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

source

XResNet

 XResNet (block, expansion, layers, p=0.0, c_in=3, n_out=1000,
          stem_szs=(32, 32, 64), block_szs=[64, 128, 256, 512], widen=1.0,
          sa=False, act_cls=<class 'torch.nn.modules.activation.ReLU'>,
          ndim=2, ks=3, stride=2, stem_layer=<class
          'fastai.layers.ConvLayer'>, stem_pool=<function MaxPool>,
          head_pool=<function AdaptiveAvgPool>, custom_head=None,
          pretrained=False, groups=1, attn_mod=None, nh1=None, nh2=None,
          dw=False, g2=1, sym=False, norm_type=<NormType.Batch: 1>,
          block_pool=<function AvgPool>, pool_first=True, stoch_depth=0,
          padding=None, bias=None, bn_1st=True, transpose=False,
          init='auto', xtra=None, bias_std=0.01,
          dilation:Union[int,Tuple[int,int]]=1, padding_mode:str='zeros',
          device=None, dtype=None)

A flexible version of fastai’s XResNet

fastxtend’s XResNet adds support for a custom_head, lets the stem_pool, block_pool, and head_pool pooling layers be set on creation, applies per-ResBlock stochastic depth via stoch_depth, and supports additional attention modules.

XResNet Models

Predefined XResNet models


source

xresnet101

 xresnet101 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
             block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
             act_cls=<class 'torch.nn.modules.activation.ReLU'>, ndim=2,
             ks=3, stride=2, stem_layer=<class 'fastai.layers.ConvLayer'>,
             stem_pool=<function MaxPool>, head_pool=<function
             AdaptiveAvgPool>, custom_head=None, pretrained=False,
             groups=1, attn_mod=None, nh1=None, nh2=None, dw=False, g2=1,
             sym=False, norm_type=<NormType.Batch: 1>,
             block_pool=<function AvgPool>, pool_first=True,
             stoch_depth=0, padding=None, bias=None, bn_1st=True,
             transpose=False, init='auto', xtra=None, bias_std=0.01,
             dilation:Union[int,Tuple[int,int]]=1,
             padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

source

xresnet50

 xresnet50 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
            block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
            act_cls=<class 'torch.nn.modules.activation.ReLU'>, ndim=2,
            ks=3, stride=2, stem_layer=<class 'fastai.layers.ConvLayer'>,
            stem_pool=<function MaxPool>, head_pool=<function
            AdaptiveAvgPool>, custom_head=None, pretrained=False,
            groups=1, attn_mod=None, nh1=None, nh2=None, dw=False, g2=1,
            sym=False, norm_type=<NormType.Batch: 1>, block_pool=<function
            AvgPool>, pool_first=True, stoch_depth=0, padding=None,
            bias=None, bn_1st=True, transpose=False, init='auto',
            xtra=None, bias_std=0.01,
            dilation:Union[int,Tuple[int,int]]=1,
            padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

source

xresnet34

 xresnet34 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
            block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
            act_cls=<class 'torch.nn.modules.activation.ReLU'>, ndim=2,
            ks=3, stride=2, stem_layer=<class 'fastai.layers.ConvLayer'>,
            stem_pool=<function MaxPool>, head_pool=<function
            AdaptiveAvgPool>, custom_head=None, pretrained=False,
            groups=1, attn_mod=None, nh1=None, nh2=None, dw=False, g2=1,
            sym=False, norm_type=<NormType.Batch: 1>, block_pool=<function
            AvgPool>, pool_first=True, stoch_depth=0, padding=None,
            bias=None, bn_1st=True, transpose=False, init='auto',
            xtra=None, bias_std=0.01,
            dilation:Union[int,Tuple[int,int]]=1,
            padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

source

xresnet18

 xresnet18 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
            block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
            act_cls=<class 'torch.nn.modules.activation.ReLU'>, ndim=2,
            ks=3, stride=2, stem_layer=<class 'fastai.layers.ConvLayer'>,
            stem_pool=<function MaxPool>, head_pool=<function
            AdaptiveAvgPool>, custom_head=None, pretrained=False,
            groups=1, attn_mod=None, nh1=None, nh2=None, dw=False, g2=1,
            sym=False, norm_type=<NormType.Batch: 1>, block_pool=<function
            AvgPool>, pool_first=True, stoch_depth=0, padding=None,
            bias=None, bn_1st=True, transpose=False, init='auto',
            xtra=None, bias_std=0.01,
            dilation:Union[int,Tuple[int,int]]=1,
            padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

XResNeXt Models

Predefined XResNeXt models


source

xresnext101

 xresnext101 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
              block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
              act_cls=<class 'torch.nn.modules.activation.ReLU'>, ndim=2,
              ks=3, stride=2, stem_layer=<class
              'fastai.layers.ConvLayer'>, stem_pool=<function MaxPool>,
              head_pool=<function AdaptiveAvgPool>, custom_head=None,
              pretrained=False, groups=1, attn_mod=None, nh1=None,
              nh2=None, dw=False, g2=1, sym=False,
              norm_type=<NormType.Batch: 1>, block_pool=<function
              AvgPool>, pool_first=True, stoch_depth=0, padding=None,
              bias=None, bn_1st=True, transpose=False, init='auto',
              xtra=None, bias_std=0.01,
              dilation:Union[int,Tuple[int,int]]=1,
              padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

source

xresnext50

 xresnext50 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
             block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
             act_cls=<class 'torch.nn.modules.activation.ReLU'>, ndim=2,
             ks=3, stride=2, stem_layer=<class 'fastai.layers.ConvLayer'>,
             stem_pool=<function MaxPool>, head_pool=<function
             AdaptiveAvgPool>, custom_head=None, pretrained=False,
             groups=1, attn_mod=None, nh1=None, nh2=None, dw=False, g2=1,
             sym=False, norm_type=<NormType.Batch: 1>,
             block_pool=<function AvgPool>, pool_first=True,
             stoch_depth=0, padding=None, bias=None, bn_1st=True,
             transpose=False, init='auto', xtra=None, bias_std=0.01,
             dilation:Union[int,Tuple[int,int]]=1,
             padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

source

xresnext34

 xresnext34 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
             block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
             act_cls=<class 'torch.nn.modules.activation.ReLU'>, ndim=2,
             ks=3, stride=2, stem_layer=<class 'fastai.layers.ConvLayer'>,
             stem_pool=<function MaxPool>, head_pool=<function
             AdaptiveAvgPool>, custom_head=None, pretrained=False,
             groups=1, attn_mod=None, nh1=None, nh2=None, dw=False, g2=1,
             sym=False, norm_type=<NormType.Batch: 1>,
             block_pool=<function AvgPool>, pool_first=True,
             stoch_depth=0, padding=None, bias=None, bn_1st=True,
             transpose=False, init='auto', xtra=None, bias_std=0.01,
             dilation:Union[int,Tuple[int,int]]=1,
             padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

source

xresnext18

 xresnext18 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
             block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
             act_cls=<class 'torch.nn.modules.activation.ReLU'>, ndim=2,
             ks=3, stride=2, stem_layer=<class 'fastai.layers.ConvLayer'>,
             stem_pool=<function MaxPool>, head_pool=<function
             AdaptiveAvgPool>, custom_head=None, pretrained=False,
             groups=1, attn_mod=None, nh1=None, nh2=None, dw=False, g2=1,
             sym=False, norm_type=<NormType.Batch: 1>,
             block_pool=<function AvgPool>, pool_first=True,
             stoch_depth=0, padding=None, bias=None, bn_1st=True,
             transpose=False, init='auto', xtra=None, bias_std=0.01,
             dilation:Union[int,Tuple[int,int]]=1,
             padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

XSE-ResNet Models

Predefined Squeeze and Excitation XResNet models


source

xse_resnet101

 xse_resnet101 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
                block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
                act_cls=<class 'torch.nn.modules.activation.ReLU'>,
                ndim=2, ks=3, stride=2, stem_layer=<class
                'fastai.layers.ConvLayer'>, stem_pool=<function MaxPool>,
                head_pool=<function AdaptiveAvgPool>, custom_head=None,
                pretrained=False, groups=1, attn_mod=None, nh1=None,
                nh2=None, dw=False, g2=1, sym=False,
                norm_type=<NormType.Batch: 1>, block_pool=<function
                AvgPool>, pool_first=True, stoch_depth=0, padding=None,
                bias=None, bn_1st=True, transpose=False, init='auto',
                xtra=None, bias_std=0.01,
                dilation:Union[int,Tuple[int,int]]=1,
                padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

source

xse_resnet50

 xse_resnet50 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
               block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
               act_cls=<class 'torch.nn.modules.activation.ReLU'>, ndim=2,
               ks=3, stride=2, stem_layer=<class
               'fastai.layers.ConvLayer'>, stem_pool=<function MaxPool>,
               head_pool=<function AdaptiveAvgPool>, custom_head=None,
               pretrained=False, groups=1, attn_mod=None, nh1=None,
               nh2=None, dw=False, g2=1, sym=False,
               norm_type=<NormType.Batch: 1>, block_pool=<function
               AvgPool>, pool_first=True, stoch_depth=0, padding=None,
               bias=None, bn_1st=True, transpose=False, init='auto',
               xtra=None, bias_std=0.01,
               dilation:Union[int,Tuple[int,int]]=1,
               padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

source

xse_resnet34

 xse_resnet34 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
               block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
               act_cls=<class 'torch.nn.modules.activation.ReLU'>, ndim=2,
               ks=3, stride=2, stem_layer=<class
               'fastai.layers.ConvLayer'>, stem_pool=<function MaxPool>,
               head_pool=<function AdaptiveAvgPool>, custom_head=None,
               pretrained=False, groups=1, attn_mod=None, nh1=None,
               nh2=None, dw=False, g2=1, sym=False,
               norm_type=<NormType.Batch: 1>, block_pool=<function
               AvgPool>, pool_first=True, stoch_depth=0, padding=None,
               bias=None, bn_1st=True, transpose=False, init='auto',
               xtra=None, bias_std=0.01,
               dilation:Union[int,Tuple[int,int]]=1,
               padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

source

xse_resnet18

 xse_resnet18 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
               block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
               act_cls=<class 'torch.nn.modules.activation.ReLU'>, ndim=2,
               ks=3, stride=2, stem_layer=<class
               'fastai.layers.ConvLayer'>, stem_pool=<function MaxPool>,
               head_pool=<function AdaptiveAvgPool>, custom_head=None,
               pretrained=False, groups=1, attn_mod=None, nh1=None,
               nh2=None, dw=False, g2=1, sym=False,
               norm_type=<NormType.Batch: 1>, block_pool=<function
               AvgPool>, pool_first=True, stoch_depth=0, padding=None,
               bias=None, bn_1st=True, transpose=False, init='auto',
               xtra=None, bias_std=0.01,
               dilation:Union[int,Tuple[int,int]]=1,
               padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None
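
The xse_* constructors build XResNets whose residual blocks include a Squeeze-and-Excitation attention module. As a rough illustration of what that attention does, here is a minimal SE channel-attention sketch in plain PyTorch — the module name and the reduction factor of 16 are illustrative assumptions, not fastxtend's exact implementation:

```python
import torch
import torch.nn as nn

class SEModule(nn.Module):
    "Squeeze-and-Excitation channel attention (illustrative sketch)."
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # squeeze: global spatial average
        self.fc = nn.Sequential(                      # excitation: bottleneck MLP
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                  # rescale each channel by its weight

x = torch.randn(2, 64, 8, 8)
out = SEModule(64)(x)
assert out.shape == x.shape
```

The attention output has the same shape as its input, so it can be dropped into a block without changing the rest of the network.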

XSE-ResNeXt Models

Predefined Squeeze-and-Excitation XResNeXt models


source

xse_resnext101

 xse_resnext101 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
                 block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
                 act_cls=<class 'torch.nn.modules.activation.ReLU'>,
                 ndim=2, ks=3, stride=2, stem_layer=<class
                 'fastai.layers.ConvLayer'>, stem_pool=<function MaxPool>,
                 head_pool=<function AdaptiveAvgPool>, custom_head=None,
                 pretrained=False, groups=1, attn_mod=None, nh1=None,
                 nh2=None, dw=False, g2=1, sym=False,
                 norm_type=<NormType.Batch: 1>, block_pool=<function
                 AvgPool>, pool_first=True, stoch_depth=0, padding=None,
                 bias=None, bn_1st=True, transpose=False, init='auto',
                 xtra=None, bias_std=0.01,
                 dilation:Union[int,Tuple[int,int]]=1,
                 padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

source

xse_resnext50

 xse_resnext50 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
                block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
                act_cls=<class 'torch.nn.modules.activation.ReLU'>,
                ndim=2, ks=3, stride=2, stem_layer=<class
                'fastai.layers.ConvLayer'>, stem_pool=<function MaxPool>,
                head_pool=<function AdaptiveAvgPool>, custom_head=None,
                pretrained=False, groups=1, attn_mod=None, nh1=None,
                nh2=None, dw=False, g2=1, sym=False,
                norm_type=<NormType.Batch: 1>, block_pool=<function
                AvgPool>, pool_first=True, stoch_depth=0, padding=None,
                bias=None, bn_1st=True, transpose=False, init='auto',
                xtra=None, bias_std=0.01,
                dilation:Union[int,Tuple[int,int]]=1,
                padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

source

xse_resnext34

 xse_resnext34 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
                block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
                act_cls=<class 'torch.nn.modules.activation.ReLU'>,
                ndim=2, ks=3, stride=2, stem_layer=<class
                'fastai.layers.ConvLayer'>, stem_pool=<function MaxPool>,
                head_pool=<function AdaptiveAvgPool>, custom_head=None,
                pretrained=False, groups=1, attn_mod=None, nh1=None,
                nh2=None, dw=False, g2=1, sym=False,
                norm_type=<NormType.Batch: 1>, block_pool=<function
                AvgPool>, pool_first=True, stoch_depth=0, padding=None,
                bias=None, bn_1st=True, transpose=False, init='auto',
                xtra=None, bias_std=0.01,
                dilation:Union[int,Tuple[int,int]]=1,
                padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

source

xse_resnext18

 xse_resnext18 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
                block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
                act_cls=<class 'torch.nn.modules.activation.ReLU'>,
                ndim=2, ks=3, stride=2, stem_layer=<class
                'fastai.layers.ConvLayer'>, stem_pool=<function MaxPool>,
                head_pool=<function AdaptiveAvgPool>, custom_head=None,
                pretrained=False, groups=1, attn_mod=None, nh1=None,
                nh2=None, dw=False, g2=1, sym=False,
                norm_type=<NormType.Batch: 1>, block_pool=<function
                AvgPool>, pool_first=True, stoch_depth=0, padding=None,
                bias=None, bn_1st=True, transpose=False, init='auto',
                xtra=None, bias_std=0.01,
                dilation:Union[int,Tuple[int,int]]=1,
                padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None
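
The ResNeXt variants default to `groups=32, base_width=4`, the ResNeXt-50 32×4d configuration. The point of grouped convolution is that each group only connects a slice of the input channels to a slice of the outputs, cutting parameters by the group count. A sketch of that arithmetic — the width formula follows the common ResNeXt convention, `int(nf * base_width / 64) * groups`, which is an assumption about fastxtend's internals rather than its verified code:

```python
import torch.nn as nn

nf, groups, base_width = 256, 32, 4
width = int(nf * base_width / 64) * groups   # 512 intermediate channels (32 groups of 16)

dense = nn.Conv2d(width, width, 3, padding=1, bias=False)
grouped = nn.Conv2d(width, width, 3, padding=1, groups=groups, bias=False)

# grouped conv holds 1/groups of the dense conv's weights
assert dense.weight.numel() == grouped.weight.numel() * groups
```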

XECA-ResNet Models

Predefined Efficient Channel Attention XResNet models


source

xeca_resnet101

 xeca_resnet101 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
                 block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
                 act_cls=<class 'torch.nn.modules.activation.ReLU'>,
                 ndim=2, ks=3, stride=2, stem_layer=<class
                 'fastai.layers.ConvLayer'>, stem_pool=<function MaxPool>,
                 head_pool=<function AdaptiveAvgPool>, custom_head=None,
                 pretrained=False, groups=1, attn_mod=None, nh1=None,
                 nh2=None, dw=False, g2=1, sym=False,
                 norm_type=<NormType.Batch: 1>, block_pool=<function
                 AvgPool>, pool_first=True, stoch_depth=0, padding=None,
                 bias=None, bn_1st=True, transpose=False, init='auto',
                 xtra=None, bias_std=0.01,
                 dilation:Union[int,Tuple[int,int]]=1,
                 padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

source

xeca_resnet50

 xeca_resnet50 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
                block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
                act_cls=<class 'torch.nn.modules.activation.ReLU'>,
                ndim=2, ks=3, stride=2, stem_layer=<class
                'fastai.layers.ConvLayer'>, stem_pool=<function MaxPool>,
                head_pool=<function AdaptiveAvgPool>, custom_head=None,
                pretrained=False, groups=1, attn_mod=None, nh1=None,
                nh2=None, dw=False, g2=1, sym=False,
                norm_type=<NormType.Batch: 1>, block_pool=<function
                AvgPool>, pool_first=True, stoch_depth=0, padding=None,
                bias=None, bn_1st=True, transpose=False, init='auto',
                xtra=None, bias_std=0.01,
                dilation:Union[int,Tuple[int,int]]=1,
                padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

source

xeca_resnet34

 xeca_resnet34 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
                block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
                act_cls=<class 'torch.nn.modules.activation.ReLU'>,
                ndim=2, ks=3, stride=2, stem_layer=<class
                'fastai.layers.ConvLayer'>, stem_pool=<function MaxPool>,
                head_pool=<function AdaptiveAvgPool>, custom_head=None,
                pretrained=False, groups=1, attn_mod=None, nh1=None,
                nh2=None, dw=False, g2=1, sym=False,
                norm_type=<NormType.Batch: 1>, block_pool=<function
                AvgPool>, pool_first=True, stoch_depth=0, padding=None,
                bias=None, bn_1st=True, transpose=False, init='auto',
                xtra=None, bias_std=0.01,
                dilation:Union[int,Tuple[int,int]]=1,
                padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

source

xeca_resnet18

 xeca_resnet18 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
                block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
                act_cls=<class 'torch.nn.modules.activation.ReLU'>,
                ndim=2, ks=3, stride=2, stem_layer=<class
                'fastai.layers.ConvLayer'>, stem_pool=<function MaxPool>,
                head_pool=<function AdaptiveAvgPool>, custom_head=None,
                pretrained=False, groups=1, attn_mod=None, nh1=None,
                nh2=None, dw=False, g2=1, sym=False,
                norm_type=<NormType.Batch: 1>, block_pool=<function
                AvgPool>, pool_first=True, stoch_depth=0, padding=None,
                bias=None, bn_1st=True, transpose=False, init='auto',
                xtra=None, bias_std=0.01,
                dilation:Union[int,Tuple[int,int]]=1,
                padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None
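
Efficient Channel Attention, used by the xeca_* models, replaces SE's bottleneck MLP with a single 1-D convolution across the pooled channel descriptor, making the attention nearly parameter-free. A minimal sketch — the fixed kernel size of 3 is illustrative; ECA normally derives it adaptively from the channel count:

```python
import torch
import torch.nn as nn

class ECAModule(nn.Module):
    "Efficient Channel Attention (illustrative sketch)."
    def __init__(self, ks=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, ks, padding=ks // 2, bias=False)

    def forward(self, x):
        b, c, _, _ = x.shape
        y = x.mean(dim=(2, 3)).view(b, 1, c)          # squeeze to one value per channel
        y = torch.sigmoid(self.conv(y)).view(b, c, 1, 1)
        return x * y                                  # local cross-channel interaction

x = torch.randn(2, 64, 8, 8)
out = ECAModule()(x)
assert out.shape == x.shape
```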

XECA-ResNeXt Models

Predefined Efficient Channel Attention XResNeXt models


source

xeca_resnext101

 xeca_resnext101 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
                  block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
                  act_cls=<class 'torch.nn.modules.activation.ReLU'>,
                  ndim=2, ks=3, stride=2, stem_layer=<class
                  'fastai.layers.ConvLayer'>, stem_pool=<function
                  MaxPool>, head_pool=<function AdaptiveAvgPool>,
                  custom_head=None, pretrained=False, groups=1,
                  attn_mod=None, nh1=None, nh2=None, dw=False, g2=1,
                  sym=False, norm_type=<NormType.Batch: 1>,
                  block_pool=<function AvgPool>, pool_first=True,
                  stoch_depth=0, padding=None, bias=None, bn_1st=True,
                  transpose=False, init='auto', xtra=None, bias_std=0.01,
                  dilation:Union[int,Tuple[int,int]]=1,
                  padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

source

xeca_resnext50

 xeca_resnext50 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
                 block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
                 act_cls=<class 'torch.nn.modules.activation.ReLU'>,
                 ndim=2, ks=3, stride=2, stem_layer=<class
                 'fastai.layers.ConvLayer'>, stem_pool=<function MaxPool>,
                 head_pool=<function AdaptiveAvgPool>, custom_head=None,
                 pretrained=False, groups=1, attn_mod=None, nh1=None,
                 nh2=None, dw=False, g2=1, sym=False,
                 norm_type=<NormType.Batch: 1>, block_pool=<function
                 AvgPool>, pool_first=True, stoch_depth=0, padding=None,
                 bias=None, bn_1st=True, transpose=False, init='auto',
                 xtra=None, bias_std=0.01,
                 dilation:Union[int,Tuple[int,int]]=1,
                 padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

source

xeca_resnext34

 xeca_resnext34 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
                 block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
                 act_cls=<class 'torch.nn.modules.activation.ReLU'>,
                 ndim=2, ks=3, stride=2, stem_layer=<class
                 'fastai.layers.ConvLayer'>, stem_pool=<function MaxPool>,
                 head_pool=<function AdaptiveAvgPool>, custom_head=None,
                 pretrained=False, groups=1, attn_mod=None, nh1=None,
                 nh2=None, dw=False, g2=1, sym=False,
                 norm_type=<NormType.Batch: 1>, block_pool=<function
                 AvgPool>, pool_first=True, stoch_depth=0, padding=None,
                 bias=None, bn_1st=True, transpose=False, init='auto',
                 xtra=None, bias_std=0.01,
                 dilation:Union[int,Tuple[int,int]]=1,
                 padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

source

xeca_resnext18

 xeca_resnext18 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
                 block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
                 act_cls=<class 'torch.nn.modules.activation.ReLU'>,
                 ndim=2, ks=3, stride=2, stem_layer=<class
                 'fastai.layers.ConvLayer'>, stem_pool=<function MaxPool>,
                 head_pool=<function AdaptiveAvgPool>, custom_head=None,
                 pretrained=False, groups=1, attn_mod=None, nh1=None,
                 nh2=None, dw=False, g2=1, sym=False,
                 norm_type=<NormType.Batch: 1>, block_pool=<function
                 AvgPool>, pool_first=True, stoch_depth=0, padding=None,
                 bias=None, bn_1st=True, transpose=False, init='auto',
                 xtra=None, bias_std=0.01,
                 dilation:Union[int,Tuple[int,int]]=1,
                 padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None
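
Across all of these variants, the attention module sits on the convolutional path of each residual block, rescaling its output before the identity shortcut is added back. A schematic of that ordering — a simplification of `ResBlock`, not its exact code, with `nn.Identity()` standing in for any of the SE/ECA/SA modules:

```python
import torch
import torch.nn as nn

def res_block_with_attn(convpath: nn.Module, attn: nn.Module, act=nn.ReLU()):
    "Residual block ordering: conv path -> attention -> add identity -> activation."
    def forward(x):
        return act(x + attn(convpath(x)))
    return forward

convpath = nn.Sequential(nn.Conv2d(32, 32, 3, padding=1), nn.BatchNorm2d(32))
attn = nn.Identity()                     # stand-in for an attention module
block = res_block_with_attn(convpath, attn)
out = block(torch.randn(2, 32, 8, 8))
assert out.shape == (2, 32, 8, 8)
```

Because attention preserves the conv path's shape, the shortcut addition works unchanged whether or not an `attn_mod` is supplied.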

XSA-ResNet Models

Predefined Shuffle Attention XResNet models


source

xsa_resnet101

 xsa_resnet101 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
                block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
                act_cls=<class 'torch.nn.modules.activation.ReLU'>,
                ndim=2, ks=3, stride=2, stem_layer=<class
                'fastai.layers.ConvLayer'>, stem_pool=<function MaxPool>,
                head_pool=<function AdaptiveAvgPool>, custom_head=None,
                pretrained=False, groups=1, attn_mod=None, nh1=None,
                nh2=None, dw=False, g2=1, sym=False,
                norm_type=<NormType.Batch: 1>, block_pool=<function
                AvgPool>, pool_first=True, stoch_depth=0, padding=None,
                bias=None, bn_1st=True, transpose=False, init='auto',
                xtra=None, bias_std=0.01,
                dilation:Union[int,Tuple[int,int]]=1,
                padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

source

xsa_resnet50

 xsa_resnet50 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
               block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
               act_cls=<class 'torch.nn.modules.activation.ReLU'>, ndim=2,
               ks=3, stride=2, stem_layer=<class
               'fastai.layers.ConvLayer'>, stem_pool=<function MaxPool>,
               head_pool=<function AdaptiveAvgPool>, custom_head=None,
               pretrained=False, groups=1, attn_mod=None, nh1=None,
               nh2=None, dw=False, g2=1, sym=False,
               norm_type=<NormType.Batch: 1>, block_pool=<function
               AvgPool>, pool_first=True, stoch_depth=0, padding=None,
               bias=None, bn_1st=True, transpose=False, init='auto',
               xtra=None, bias_std=0.01,
               dilation:Union[int,Tuple[int,int]]=1,
               padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

source

xsa_resnet34

 xsa_resnet34 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
               block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
               act_cls=<class 'torch.nn.modules.activation.ReLU'>, ndim=2,
               ks=3, stride=2, stem_layer=<class
               'fastai.layers.ConvLayer'>, stem_pool=<function MaxPool>,
               head_pool=<function AdaptiveAvgPool>, custom_head=None,
               pretrained=False, groups=1, attn_mod=None, nh1=None,
               nh2=None, dw=False, g2=1, sym=False,
               norm_type=<NormType.Batch: 1>, block_pool=<function
               AvgPool>, pool_first=True, stoch_depth=0, padding=None,
               bias=None, bn_1st=True, transpose=False, init='auto',
               xtra=None, bias_std=0.01,
               dilation:Union[int,Tuple[int,int]]=1,
               padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

source

xsa_resnet18

 xsa_resnet18 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
               block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
               act_cls=<class 'torch.nn.modules.activation.ReLU'>, ndim=2,
               ks=3, stride=2, stem_layer=<class
               'fastai.layers.ConvLayer'>, stem_pool=<function MaxPool>,
               head_pool=<function AdaptiveAvgPool>, custom_head=None,
               pretrained=False, groups=1, attn_mod=None, nh1=None,
               nh2=None, dw=False, g2=1, sym=False,
               norm_type=<NormType.Batch: 1>, block_pool=<function
               AvgPool>, pool_first=True, stoch_depth=0, padding=None,
               bias=None, bn_1st=True, transpose=False, init='auto',
               xtra=None, bias_std=0.01,
               dilation:Union[int,Tuple[int,int]]=1,
               padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None
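
Shuffle Attention, used by the xsa_* models, splits the channels into groups, applies channel and spatial attention within each group, then mixes information between groups with a channel shuffle. The shuffle step alone is just a reshape/transpose permutation, sketched here in isolation (the full attention module is more involved):

```python
import torch

def channel_shuffle(x, groups):
    "Interleave channels across groups (as in ShuffleNet / Shuffle Attention)."
    b, c, h, w = x.shape
    return (x.view(b, groups, c // groups, h, w)
             .transpose(1, 2)
             .reshape(b, c, h, w))

x = torch.arange(8.).view(1, 8, 1, 1)
shuffled = channel_shuffle(x, groups=2)
# channels [0..3 | 4..7] interleave to [0, 4, 1, 5, 2, 6, 3, 7]
assert shuffled.flatten().tolist() == [0, 4, 1, 5, 2, 6, 3, 7]
```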

XSA-ResNeXt Models

Predefined Shuffle Attention XResNeXt models


source

xsa_resnext101

 xsa_resnext101 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
                 block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
                 act_cls=<class 'torch.nn.modules.activation.ReLU'>,
                 ndim=2, ks=3, stride=2, stem_layer=<class
                 'fastai.layers.ConvLayer'>, stem_pool=<function MaxPool>,
                 head_pool=<function AdaptiveAvgPool>, custom_head=None,
                 pretrained=False, groups=1, attn_mod=None, nh1=None,
                 nh2=None, dw=False, g2=1, sym=False,
                 norm_type=<NormType.Batch: 1>, block_pool=<function
                 AvgPool>, pool_first=True, stoch_depth=0, padding=None,
                 bias=None, bn_1st=True, transpose=False, init='auto',
                 xtra=None, bias_std=0.01,
                 dilation:Union[int,Tuple[int,int]]=1,
                 padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

source

xsa_resnext50

 xsa_resnext50 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
                block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
                act_cls=<class 'torch.nn.modules.activation.ReLU'>,
                ndim=2, ks=3, stride=2, stem_layer=<class
                'fastai.layers.ConvLayer'>, stem_pool=<function MaxPool>,
                head_pool=<function AdaptiveAvgPool>, custom_head=None,
                pretrained=False, groups=1, attn_mod=None, nh1=None,
                nh2=None, dw=False, g2=1, sym=False,
                norm_type=<NormType.Batch: 1>, block_pool=<function
                AvgPool>, pool_first=True, stoch_depth=0, padding=None,
                bias=None, bn_1st=True, transpose=False, init='auto',
                xtra=None, bias_std=0.01,
                dilation:Union[int,Tuple[int,int]]=1,
                padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros TODO: refine this type
device NoneType None
dtype NoneType None

source

xsa_resnext34

 xsa_resnext34 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
                block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
                act_cls=<class 'torch.nn.modules.activation.ReLU'>,
                ndim=2, ks=3, stride=2, stem_layer=<class
                'fastai.layers.ConvLayer'>, stem_pool=<function MaxPool>,
                head_pool=<function AdaptiveAvgPool>, custom_head=None,
                pretrained=False, groups=1, attn_mod=None, nh1=None,
                nh2=None, dw=False, g2=1, sym=False,
                norm_type=<NormType.Batch: 1>, block_pool=<function
                AvgPool>, pool_first=True, stoch_depth=0, padding=None,
                bias=None, bn_1st=True, transpose=False, init='auto',
                xtra=None, bias_std=0.01,
                dilation:Union[int,Tuple[int,int]]=1,
                padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

source

xsa_resnext18

 xsa_resnext18 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
                block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
                act_cls=<class 'torch.nn.modules.activation.ReLU'>,
                ndim=2, ks=3, stride=2, stem_layer=<class
                'fastai.layers.ConvLayer'>, stem_pool=<function MaxPool>,
                head_pool=<function AdaptiveAvgPool>, custom_head=None,
                pretrained=False, groups=1, attn_mod=None, nh1=None,
                nh2=None, dw=False, g2=1, sym=False,
                norm_type=<NormType.Batch: 1>, block_pool=<function
                AvgPool>, pool_first=True, stoch_depth=0, padding=None,
                bias=None, bn_1st=True, transpose=False, init='auto',
                xtra=None, bias_std=0.01,
                dilation:Union[int,Tuple[int,int]]=1,
                padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

XTA-ResNet Models

Predefined Triplet Attention XResNet models.


source

xta_resnet101

 xta_resnet101 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
                block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
                act_cls=<class 'torch.nn.modules.activation.ReLU'>,
                ndim=2, ks=3, stride=2, stem_layer=<class
                'fastai.layers.ConvLayer'>, stem_pool=<function MaxPool>,
                head_pool=<function AdaptiveAvgPool>, custom_head=None,
                pretrained=False, groups=1, attn_mod=None, nh1=None,
                nh2=None, dw=False, g2=1, sym=False,
                norm_type=<NormType.Batch: 1>, block_pool=<function
                AvgPool>, pool_first=True, stoch_depth=0, padding=None,
                bias=None, bn_1st=True, transpose=False, init='auto',
                xtra=None, bias_std=0.01,
                dilation:Union[int,Tuple[int,int]]=1,
                padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

source

xta_resnet50

 xta_resnet50 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
               block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
               act_cls=<class 'torch.nn.modules.activation.ReLU'>, ndim=2,
               ks=3, stride=2, stem_layer=<class
               'fastai.layers.ConvLayer'>, stem_pool=<function MaxPool>,
               head_pool=<function AdaptiveAvgPool>, custom_head=None,
               pretrained=False, groups=1, attn_mod=None, nh1=None,
               nh2=None, dw=False, g2=1, sym=False,
               norm_type=<NormType.Batch: 1>, block_pool=<function
               AvgPool>, pool_first=True, stoch_depth=0, padding=None,
               bias=None, bn_1st=True, transpose=False, init='auto',
               xtra=None, bias_std=0.01,
               dilation:Union[int,Tuple[int,int]]=1,
               padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

source

xta_resnet34

 xta_resnet34 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
               block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
               act_cls=<class 'torch.nn.modules.activation.ReLU'>, ndim=2,
               ks=3, stride=2, stem_layer=<class
               'fastai.layers.ConvLayer'>, stem_pool=<function MaxPool>,
               head_pool=<function AdaptiveAvgPool>, custom_head=None,
               pretrained=False, groups=1, attn_mod=None, nh1=None,
               nh2=None, dw=False, g2=1, sym=False,
               norm_type=<NormType.Batch: 1>, block_pool=<function
               AvgPool>, pool_first=True, stoch_depth=0, padding=None,
               bias=None, bn_1st=True, transpose=False, init='auto',
               xtra=None, bias_std=0.01,
               dilation:Union[int,Tuple[int,int]]=1,
               padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

source

xta_resnet18

 xta_resnet18 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
               block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
               act_cls=<class 'torch.nn.modules.activation.ReLU'>, ndim=2,
               ks=3, stride=2, stem_layer=<class
               'fastai.layers.ConvLayer'>, stem_pool=<function MaxPool>,
               head_pool=<function AdaptiveAvgPool>, custom_head=None,
               pretrained=False, groups=1, attn_mod=None, nh1=None,
               nh2=None, dw=False, g2=1, sym=False,
               norm_type=<NormType.Batch: 1>, block_pool=<function
               AvgPool>, pool_first=True, stoch_depth=0, padding=None,
               bias=None, bn_1st=True, transpose=False, init='auto',
               xtra=None, bias_std=0.01,
               dilation:Union[int,Tuple[int,int]]=1,
               padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

XTA-ResNeXt Models

Predefined Triplet Attention XResNeXt models.


source

xta_resnext101

 xta_resnext101 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
                 block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
                 act_cls=<class 'torch.nn.modules.activation.ReLU'>,
                 ndim=2, ks=3, stride=2, stem_layer=<class
                 'fastai.layers.ConvLayer'>, stem_pool=<function MaxPool>,
                 head_pool=<function AdaptiveAvgPool>, custom_head=None,
                 pretrained=False, groups=1, attn_mod=None, nh1=None,
                 nh2=None, dw=False, g2=1, sym=False,
                 norm_type=<NormType.Batch: 1>, block_pool=<function
                 AvgPool>, pool_first=True, stoch_depth=0, padding=None,
                 bias=None, bn_1st=True, transpose=False, init='auto',
                 xtra=None, bias_std=0.01,
                 dilation:Union[int,Tuple[int,int]]=1,
                 padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

source

xta_resnext50

 xta_resnext50 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
                block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
                act_cls=<class 'torch.nn.modules.activation.ReLU'>,
                ndim=2, ks=3, stride=2, stem_layer=<class
                'fastai.layers.ConvLayer'>, stem_pool=<function MaxPool>,
                head_pool=<function AdaptiveAvgPool>, custom_head=None,
                pretrained=False, groups=1, attn_mod=None, nh1=None,
                nh2=None, dw=False, g2=1, sym=False,
                norm_type=<NormType.Batch: 1>, block_pool=<function
                AvgPool>, pool_first=True, stoch_depth=0, padding=None,
                bias=None, bn_1st=True, transpose=False, init='auto',
                xtra=None, bias_std=0.01,
                dilation:Union[int,Tuple[int,int]]=1,
                padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

source

xta_resnext34

 xta_resnext34 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
                block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
                act_cls=<class 'torch.nn.modules.activation.ReLU'>,
                ndim=2, ks=3, stride=2, stem_layer=<class
                'fastai.layers.ConvLayer'>, stem_pool=<function MaxPool>,
                head_pool=<function AdaptiveAvgPool>, custom_head=None,
                pretrained=False, groups=1, attn_mod=None, nh1=None,
                nh2=None, dw=False, g2=1, sym=False,
                norm_type=<NormType.Batch: 1>, block_pool=<function
                AvgPool>, pool_first=True, stoch_depth=0, padding=None,
                bias=None, bn_1st=True, transpose=False, init='auto',
                xtra=None, bias_std=0.01,
                dilation:Union[int,Tuple[int,int]]=1,
                padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None

source

xta_resnext18

 xta_resnext18 (n_out=1000, p=0.0, c_in=3, stem_szs=(32, 32, 64),
                block_szs=[64, 128, 256, 512], widen=1.0, sa=False,
                act_cls=<class 'torch.nn.modules.activation.ReLU'>,
                ndim=2, ks=3, stride=2, stem_layer=<class
                'fastai.layers.ConvLayer'>, stem_pool=<function MaxPool>,
                head_pool=<function AdaptiveAvgPool>, custom_head=None,
                pretrained=False, groups=1, attn_mod=None, nh1=None,
                nh2=None, dw=False, g2=1, sym=False,
                norm_type=<NormType.Batch: 1>, block_pool=<function
                AvgPool>, pool_first=True, stoch_depth=0, padding=None,
                bias=None, bn_1st=True, transpose=False, init='auto',
                xtra=None, bias_std=0.01,
                dilation:Union[int,Tuple[int,int]]=1,
                padding_mode:str='zeros', device=None, dtype=None)
Type Default Details
n_out int 1000
p float 0.0
c_in int 3
stem_szs tuple (32, 32, 64)
block_szs list [64, 128, 256, 512]
widen float 1.0
sa bool False
act_cls type ReLU
ndim int 2
ks int 3
stride int 2
stem_layer type ConvLayer
stem_pool function MaxPool
head_pool function AdaptiveAvgPool
custom_head NoneType None
pretrained bool False
groups int 1
attn_mod NoneType None
nh1 NoneType None
nh2 NoneType None
dw bool False
g2 int 1
sym bool False
norm_type NormType NormType.Batch
block_pool function AvgPool
pool_first bool True
stoch_depth int 0
padding NoneType None
bias NoneType None
bn_1st bool True
transpose bool False
init str auto
xtra NoneType None
bias_std float 0.01
dilation typing.Union[int, typing.Tuple[int, int]] 1
padding_mode str zeros
device NoneType None
dtype NoneType None