lampe.nn

Neural networks, layers and modules.

Classes

MLP

Creates a multi-layer perceptron (MLP).

ResMLP

Creates a residual multi-layer perceptron (ResMLP).

Descriptions

class lampe.nn.MLP(in_features, out_features, hidden_features=(64, 64), activation=None, normalize=False, **kwargs)

Creates a multi-layer perceptron (MLP).

Also known as a fully connected feedforward network, an MLP is a sequence of non-linear parametric functions

\[h_{i + 1} = a_{i + 1}(h_i W_{i + 1}^T + b_{i + 1}),\]

over feature vectors \(h_i\), with the input and output feature vectors \(x = h_0\) and \(y = h_L\), respectively. The non-linear functions \(a_i\) are called activation functions. The trainable parameters of an MLP are its weights and biases \(\phi = \{W_i, b_i | i = 1, \dots, L\}\).
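As a concrete illustration, this recurrence amounts to alternating affine layers and activations. The following is a sketch in plain PyTorch, not lampe's implementation, using the same feature sizes as the example further below:

import torch
import torch.nn as nn

# Sketch of the recurrence h_{i+1} = a_{i+1}(h_i W_{i+1}^T + b_{i+1})
# with feature sizes 64 -> 32 -> 16 -> 1 and ELU activations.
net = nn.Sequential(
    nn.Linear(64, 32),  # h_1 = a_1(h_0 W_1^T + b_1)
    nn.ELU(),
    nn.Linear(32, 16),  # h_2 = a_2(h_1 W_2^T + b_2)
    nn.ELU(),
    nn.Linear(16, 1),   # y = h_3 = h_2 W_3^T + b_3 (no final activation)
)

x = torch.randn(8, 64)  # batch of 8 input feature vectors (x = h_0)
y = net(x)              # output feature vectors, shape (8, 1)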

Wikipedia

https://wikipedia.org/wiki/Feedforward_neural_network

Parameters
  • in_features (int) – The number of input features.

  • out_features (int) – The number of output features.

  • hidden_features (Sequence[int]) – The numbers of hidden features.

  • activation (Callable[[], Module]) – The activation function constructor. If None, torch.nn.ReLU is used instead.

  • normalize (bool) – Whether features are normalized between layers or not.

  • kwargs – Keyword arguments passed to torch.nn.Linear.
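Since the keyword arguments are forwarded to every torch.nn.Linear layer, options can be set for all layers at once. A brief sketch, using torch.nn.Linear's own bias argument rather than a lampe-specific option:

>>> from lampe.nn import MLP
>>> net = MLP(64, 1, bias=False)  # every linear layer is created without a bias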

Example

>>> import torch.nn as nn
>>> from lampe.nn import MLP
>>> net = MLP(64, 1, [32, 16], activation=nn.ELU)
>>> net
MLP(
  (0): Linear(in_features=64, out_features=32, bias=True)
  (1): ELU(alpha=1.0)
  (2): Linear(in_features=32, out_features=16, bias=True)
  (3): ELU(alpha=1.0)
  (4): Linear(in_features=16, out_features=1, bias=True)
)
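The resulting network is a regular torch.nn.Module and maps input feature vectors to output feature vectors; a brief usage sketch:

>>> import torch
>>> x = torch.randn(8, 64)  # batch of 8 input vectors
>>> net(x).shape
torch.Size([8, 1])
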
class lampe.nn.ResMLP(in_features, out_features, hidden_features=(64, 64), **kwargs)

Creates a residual multi-layer perceptron (ResMLP).

A ResMLP is a series of residual blocks where each block is a (shallow) MLP. Using residual blocks instead of regular non-linear functions prevents the gradients from vanishing, which allows for deeper networks.
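Concretely, a residual block computes y = x + f(x), where f is the block's MLP. A minimal sketch of such a wrapper, assuming the Residual module shown in the example below simply adds its block's output to its input (the actual lampe implementation may differ):

import torch
import torch.nn as nn

class Residual(nn.Module):
    # Hypothetical residual wrapper: the identity path x carries
    # gradients even when the block's gradients are small.
    def __init__(self, block: nn.Module):
        super().__init__()
        self.block = block

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.block(x)

block = Residual(nn.Sequential(nn.Linear(16, 16), nn.ELU(), nn.Linear(16, 16)))
x = torch.randn(8, 16)
y = block(x)  # same shape as x: (8, 16)

Because the sum requires f(x) to have the same shape as x, changes of dimension happen in the plain linear layers between blocks, as visible in the printout below.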

Parameters
  • in_features (int) – The number of input features.

  • out_features (int) – The number of output features.

  • hidden_features (Sequence[int]) – The numbers of hidden features.

  • kwargs – Keyword arguments passed to MLP.

Example

>>> from lampe.nn import ResMLP
>>> net = ResMLP(64, 1, [32, 16], activation=nn.ELU)
>>> net
ResMLP(
  (0): Linear(in_features=64, out_features=32, bias=True)
  (1): Residual(MLP(
    (0): Linear(in_features=32, out_features=32, bias=True)
    (1): ELU(alpha=1.0)
    (2): Linear(in_features=32, out_features=32, bias=True)
  ))
  (2): Linear(in_features=32, out_features=16, bias=True)
  (3): Residual(MLP(
    (0): Linear(in_features=16, out_features=16, bias=True)
    (1): ELU(alpha=1.0)
    (2): Linear(in_features=16, out_features=16, bias=True)
  ))
  (4): Linear(in_features=16, out_features=1, bias=True)
)
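Like MLP, the network maps input feature vectors to output feature vectors; a brief usage sketch:

>>> x = torch.randn(8, 64)
>>> net(x).shape
torch.Size([8, 1])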