sampling module

Sampling layers

The input of flows as we use them is nearly always generated data from some distribution provided with its log-inverse PDF. As a result, it can be convenient to make the first layer of a flow a sampling layer that draws points and computes the required PDF.
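For orientation, here is a minimal sketch of the pattern, assuming a factorized prior exposing the sample and log_prob methods described below; the stacking convention (log-inverse PDF appended as a trailing column) is an assumption based on the forward docstring further down:

    import torch

    # Sketch of a sampling layer: draw points from a prior and attach the
    # log-inverse PDF (the "jacobian") for downstream flow layers to update.
    prior = torch.distributions.Uniform(0.0, 1.0)

    def sample_with_jacobian(n_batch, d):
        x = prior.sample((n_batch, d))                # points in [0, 1)^d
        log_inv_pdf = -prior.log_prob(x).sum(dim=-1)  # log(1/q(x)), summed over dimensions
        return torch.cat([x, log_inv_pdf.unsqueeze(-1)], dim=-1)  # (n_batch, d + 1)

    batch = sample_with_jacobian(1000, d=2)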

class FactorizedFlowSampler(*, d, prior_1d)[source]

Bases: torch.nn.modules.module.Module

Sample d-dimensional data from a factorized 1D PDF over each dimension. The 1D PDF is expected to be a torch.distributions.Distribution object, but it can be any object that implements the `sample` and `log_prob` methods.

NB: we provide the 1D prior object explicitly because pytorch distributions don't respond appropriately to .to(device) calls. To sample on a given device, provide a prior initialized with parameters already on that device.
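A minimal sketch of what this implies in practice, using only the documented constructor arguments (the Normal prior is an illustrative choice):

    import torch
    from zunis.models.flows.sampling import FactorizedFlowSampler

    # Build the 1D prior with its parameters already on the target device;
    # calling .to(device) on the sampler afterwards would not move them.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    prior_1d = torch.distributions.Normal(
        loc=torch.tensor(0.0, device=device),
        scale=torch.tensor(1.0, device=device),
    )
    sampler = FactorizedFlowSampler(d=4, prior_1d=prior_1d)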

forward(n_batch)[source]

Sample n_batch points and stack them with their jacobians
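Continuing the sketch above; the (n_batch, d + 1) layout with the log-inverse PDF in the last column is an assumption based on the stacking described here:

    batch = sampler(100)                          # calls forward(n_batch=100)
    x, log_inv_pdf = batch[:, :-1], batch[:, -1]  # assumed layout: points | jacobian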

log_prob(x)[source]

Compute, point by point, the log-PDF of a batch of points
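Again continuing the sketch, with the output shape as an assumption:

    x = torch.rand(100, 4, device=device)  # any batch of d-dimensional points
    log_p = sampler.log_prob(x)            # assumed shape: (100,), one value per point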

training: bool[source]
class FactorizedGaussianSampler(*, d, mu=0.0, sig=1.0, device=None)[source]

Bases: zunis.models.flows.sampling.FactorizedFlowSampler

Factorized Gaussian prior. Note that pytorch distribution objects cannot easily be moved across devices, so specify the right device at initialization.

training: bool[source]
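A usage sketch with the documented constructor arguments; the output layout is the same assumption as above:

    import torch
    from zunis.models.flows.sampling import FactorizedGaussianSampler

    # Standard-normal prior over 2 dimensions, created directly on the target
    # device: the device argument takes the place of a later .to(device) call.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    gaussian = FactorizedGaussianSampler(d=2, mu=0.0, sig=1.0, device=device)
    batch = gaussian(512)  # assumed shape: (512, 3) = points plus log-inverse PDF
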
class UniformSampler(*, d, low=0.0, high=1.0, device=None)[source]

Bases: zunis.models.flows.sampling.FactorizedFlowSampler

Factorized uniform prior. Note that pytorch distribution objects cannot easily be moved across devices, so specify the right device at initialization.

training: bool[source]
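A usage sketch with the documented constructor arguments, again under the same layout assumption:

    import torch
    from zunis.models.flows.sampling import UniformSampler

    # Uniform prior over [0, 1]^3, a natural first layer when the flow targets
    # an integrand on the unit hypercube.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    uniform = UniformSampler(d=3, low=0.0, high=1.0, device=device)
    batch = uniform(256)  # assumed shape: (256, 4) = points plus log-inverse PDF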