ride.utils.discriminative_lr

Module Contents

Classes

PrePostInitMeta

A metaclass that calls optional __pre_init__ and __post_init__ methods

Module

Same as nn.Module, but no need for subclasses to call super().__init__

ParameterModule

Register a lone parameter p in a module.

Functions

children(m)

Get children of m.

num_children(m)

Get number of children modules in m.

children_and_parameters(m)

Return the children of m and its direct parameters not registered in modules.

even_mults(→ numpy.ndarray)

Build log-stepped array from start to stop in n steps.

lr_range(→ numpy.ndarray)

Build differential learning rates from lr.

unfreeze_layers(→ None)

Unfreeze or freeze all layers.

build_param_dicts(→ Union[int, list])

Either return the number of layers with requires_grad set to True, or a list of dictionaries with each layer and its associated LR.

discriminative_lr(→ Union[list, numpy.ndarray, ...)

Flatten our model and generate a list of dictionaries to be passed to the optimizer.

Attributes

logger

Developed by the Fastai team for the Fastai library

flatten_model

Modified version of lr_range from fastai

ride.utils.discriminative_lr.logger[source]

Developed by the Fastai team for the fastai library (https://www.fast.ai, https://github.com/fastai/fastai)

class ride.utils.discriminative_lr.PrePostInitMeta[source]

Bases: type

A metaclass that calls optional __pre_init__ and __post_init__ methods
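
A minimal usage sketch (with a hypothetical class name), assuming the metaclass invokes the hooks, when present, immediately before and after __init__:

    from ride.utils.discriminative_lr import PrePostInitMeta

    class Tracked(metaclass=PrePostInitMeta):
        def __pre_init__(self):
            self.events = ["pre"]          # runs before __init__
        def __init__(self):
            self.events.append("init")
        def __post_init__(self):
            self.events.append("post")     # runs after __init__

    print(Tracked().events)                # expected: ['pre', 'init', 'post']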

class ride.utils.discriminative_lr.Module[source]

Bases: torch.nn.Module

Same as nn.Module, but no need for subclasses to call super().__init__

__pre_init__()[source]
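
For illustration, a sketch of a subclass (hypothetical TinyNet) that relies on this behaviour:

    import torch
    from ride.utils.discriminative_lr import Module

    class TinyNet(Module):
        def __init__(self):                # no super().__init__() call needed
            self.lin = torch.nn.Linear(4, 2)
        def forward(self, x):
            return self.lin(x)

    out = TinyNet()(torch.randn(1, 4))     # submodule registration still works
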
class ride.utils.discriminative_lr.ParameterModule(p: torch.nn.Parameter)[source]

Bases: Module

Register a lone parameter p in a module.

forward(x)[source]
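
A small sketch of what this enables: the wrapped parameter becomes reachable through the standard module API.

    import torch
    from ride.utils.discriminative_lr import ParameterModule

    p = torch.nn.Parameter(torch.zeros(3))
    pm = ParameterModule(p)
    assert any(q is p for q in pm.parameters())   # p is registered on the module
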
ride.utils.discriminative_lr.children(m: torch.nn.Module)[source]

Get children of m.

ride.utils.discriminative_lr.num_children(m: torch.nn.Module)[source]

Get number of children modules in m.

ride.utils.discriminative_lr.children_and_parameters(m: torch.nn.Module)[source]

Return the children of m and its direct parameters not registered in modules.
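
For example (with a hypothetical Mixed module holding both a child module and a loose parameter):

    import torch
    from ride.utils.discriminative_lr import children, num_children, children_and_parameters

    class Mixed(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.lin = torch.nn.Linear(4, 4)                 # registered child module
            self.scale = torch.nn.Parameter(torch.ones(1))   # direct parameter, not in a submodule

    m = Mixed()
    children(m)                  # [Linear(...)]
    num_children(m)              # 1
    children_and_parameters(m)   # [Linear(...), ParameterModule wrapping `scale`]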

ride.utils.discriminative_lr.even_mults(start: float, stop: float, n: int) → numpy.ndarray[source]

Build log-stepped array from start to stop in n steps.
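
For instance, five values spaced evenly on a log scale between 1e-5 and 1e-1:

    from ride.utils.discriminative_lr import even_mults

    even_mults(1e-5, 1e-1, 5)   # array([1e-05, 1e-04, 1e-03, 1e-02, 1e-01]), approximately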

ride.utils.discriminative_lr.flatten_model[source]

Modified version of lr_range from fastai https://github.com/fastai/fastai/blob/master/fastai/basic_train.py#L185
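
A usage sketch, assuming flatten_model behaves like the fastai helper of the same name and reduces nested containers to an ordered list of leaf modules:

    import torch
    from ride.utils.discriminative_lr import flatten_model

    net = torch.nn.Sequential(
        torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.ReLU()),
        torch.nn.Linear(8, 2),
    )
    flatten_model(net)   # [Linear(...), ReLU(), Linear(...)]: the leaf modules, in order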

ride.utils.discriminative_lr.lr_range(net: torch.nn.Module, lr: slice, model_len: int) → numpy.ndarray[source]

Build differential learning rates from lr.
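
A sketch of the expected spread, assuming model_len is the number of layer groups to cover:

    import torch
    from ride.utils.discriminative_lr import lr_range

    net = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.ReLU(), torch.nn.Linear(8, 2))
    lr_range(net, slice(1e-5, 1e-3), model_len=3)   # roughly array([1e-05, 1e-04, 1e-03])
    lr_range(net, slice(1e-3), model_len=3)         # roughly array([1e-04, 1e-04, 1e-03])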

ride.utils.discriminative_lr.unfreeze_layers(model: torch.nn.Sequential, unfreeze: bool = True) → None[source]

Unfreeze or freeze all layers.
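
For example:

    import torch
    from ride.utils.discriminative_lr import unfreeze_layers

    layers = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.Linear(8, 2))
    unfreeze_layers(layers, unfreeze=False)                        # freeze: requires_grad = False
    assert not any(p.requires_grad for p in layers.parameters())
    unfreeze_layers(layers)                                        # unfreeze again (default True)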

ride.utils.discriminative_lr.build_param_dicts(layers: torch.nn.Sequential, lr: list = [0], return_len: bool = False) → Union[int, list][source]

Either return the number of layers with requires_grad set to True, or return a list of dictionaries containing each layer and its associated LR. Both weight and bias are checked for requires_grad being True.
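
A sketch of both modes; the dict layout is assumed to follow torch.optim's parameter-group convention ({"params": ..., "lr": ...}):

    import torch
    from ride.utils.discriminative_lr import build_param_dicts

    layers = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.Linear(8, 2))
    build_param_dicts(layers, lr=[1e-4, 1e-3], return_len=True)   # 2 (trainable layers)
    params = build_param_dicts(layers, lr=[1e-4, 1e-3])           # one dict per trainable layer
    optimizer = torch.optim.Adam(params)                          # each group keeps its own lr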

ride.utils.discriminative_lr.discriminative_lr(net: torch.nn.Module, lr: slice, unfreeze: bool = False) → Union[list, numpy.ndarray, torch.nn.Sequential][source]

Flatten our model and generate a list of dictionaries to be passed to the optimizer.

- If only one learning rate is passed as a slice, the last layer will have the corresponding learning rate and all other layers will have lr/10.
- If two learning rates are passed, such as slice(min_lr, max_lr), the last layer will have max_lr as its learning rate and the first layer will have min_lr. All middle layers will have learning rates logarithmically interpolated between min_lr and max_lr.
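
A usage sketch; the unpacking below assumes the three annotated return types correspond to the optimizer parameter dicts, the learning-rate array, and the flattened model:

    import torch
    from ride.utils.discriminative_lr import discriminative_lr

    net = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU(), torch.nn.Linear(16, 2))
    params, lrs, flat = discriminative_lr(net, slice(1e-5, 1e-3), unfreeze=True)
    optimizer = torch.optim.Adam(params)   # lowest lr on the first layer, highest on the last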
