lampe.utils#
General purpose helpers.
Functions#
gridapply – Evaluates a function \(f(x)\) over a multi-dimensional domain split into grid cells.
Classes#
GDStep – Creates a callable that performs gradient descent (GD) optimization steps for parameters \(\phi\) with respect to differentiable loss values.
Descriptions#
- class lampe.utils.GDStep(optimizer, clip=None)#
Creates a callable that performs gradient descent (GD) optimization steps for parameters \(\phi\) with respect to differentiable loss values.
The callable takes a scalar loss \(l\) as input, performs a step

\[\phi \gets \text{GD}(\phi, \nabla_{\!\phi} \, l)\]

and returns the loss, detached from the computational graph. To prevent invalid parameters, steps are skipped if not-a-number (NaN) or infinite values are found in the gradient. This feature requires CPU-GPU synchronization, which could be a bottleneck for some applications.
- Parameters
  - optimizer (Optimizer) – An optimizer instance (e.g. torch.optim.SGD).
  - clip (float) – The norm at which the gradients are clipped. If None, gradients are not clipped.
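Example

A minimal, illustrative training loop; the model, data, and hyperparameters below are placeholders, not part of lampe.

>>> import torch
>>> import torch.nn as nn
>>> from lampe.utils import GDStep
>>> model = nn.Linear(3, 1)
>>> optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
>>> step = GDStep(optimizer, clip=1.0)
>>> for _ in range(8):
...     x, y = torch.randn(16, 3), torch.randn(16, 1)
...     loss = (model(x) - y).square().mean()  # differentiable scalar loss
...     loss = step(loss)  # backward, clip, step (skipped on NaN/inf); returns detached loss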
- lampe.utils.gridapply(f, domain, bins=128, batch_size=4096)#
Evaluates a function \(f(x)\) over a multi-dimensional domain split into grid cells. Instead of evaluating the function cell by cell, batches of grid points are given to the function.
- Parameters
  - f (Callable[[Tensor], Tensor]) – A function \(f(x)\) mapping a batch of points to a batch of values.
  - domain (Tuple[Tensor, Tensor]) – A pair of lower and upper domain bounds.
  - bins (int) – The number of bins per dimension of the grid.
  - batch_size (int) – The size of the batches given to the function.
- Returns
  The domain grid and the corresponding values.
- Return type
  Tuple[Tensor, Tensor]
Example
>>> f = lambda x: -(x**2).sum(dim=-1) / 2
>>> lower, upper = torch.zeros(3), torch.ones(3)
>>> x, y = gridapply(f, (lower, upper), bins=8)
>>> x.shape
torch.Size([8, 8, 8, 3])
>>> y.shape
torch.Size([8, 8, 8])
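To make the batching concrete, here is a hedged sketch of how such a grid evaluation could work; gridapply_sketch and its internals are assumptions for illustration, not lampe's actual implementation. It reuses f, lower, and upper from the example above.

>>> def gridapply_sketch(f, domain, bins=8, batch_size=4096):
...     # Illustrative only; not lampe's code.
...     lower, upper = domain
...     dims = len(lower)
...     # One coordinate vector per dimension, at the centers of `bins` cells.
...     axes = [
...         torch.linspace(float(l), float(u), bins + 1)[:-1] + float(u - l) / (2 * bins)
...         for l, u in zip(lower, upper)
...     ]
...     # Cartesian product: a flat (bins**dims, dims) tensor of grid points.
...     grid = torch.cartesian_prod(*axes)
...     # Evaluate f on batches of points rather than one point at a time.
...     values = torch.cat([f(x) for x in grid.split(batch_size)])
...     # Fold the flat dimension back into the grid shape.
...     return grid.unflatten(0, [bins] * dims), values.unflatten(0, [bins] * dims)
>>> x, y = gridapply_sketch(f, (lower, upper), bins=8)
>>> x.shape
torch.Size([8, 8, 8, 3])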