fixed_sample_integrator module¶
Integrator that does not sample points during the training phase but instead trains on a fixed, externally provided dataset of points.
- class FixedSampleSurveyIntegrator(*args, **kwargs)[source]¶
Bases:
zunis.integration.base_integrator.BaseIntegrator
Integrator that trains its model during the survey phase using a pre-computed sample provided externally (a usage sketch follows the parameter list below).
- Parameters
f (callable) – ZuNIS-compatible function
trainer (BasicTrainer) – trainer object used to perform the survey
sample (tuple of torch.Tensor) – (x, px, fx): target-space point batch drawn from some PDF p, PDF value batch p(x), function value batch
n_iter (int) – number of iterations (used for both survey and refine unless specified)
n_iter_survey (int) – number of iterations for survey
n_iter_refine (int) – number of iterations for refine
n_points (int) – number of points for both survey and refine unless specified
n_points_survey (int) – number of points for survey
n_points_refine (int) – number of points for refine
use_survey (bool) – whether to use the integral estimates from the survey phase. This makes the error estimate formally incorrect, since the refine-phase samples depend on the survey training, but these correlations can be negligible in some cases.
verbosity (int) – level of verbosity for the integrator-level logger
trainer_verbosity (int) – level of verbosity for the trainer-level logger
kwargs –
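A minimal construction sketch, assuming the constructor accepts the parameters in the order listed above and that a BasicTrainer instance named `trainer` has already been configured (the import path mirrors the base class location shown above; the integrand `f` is illustrative):

```python
import torch

from zunis.integration.fixed_sample_integrator import FixedSampleSurveyIntegrator

# Illustrative ZuNIS-compatible integrand: maps an (n, d) batch of points in the
# unit hypercube to an (n,) batch of values.
def f(x):
    return (x ** 2).sum(dim=1)

d, n = 2, 10_000

# Pre-computed sample (x, px, fx) as described above.
x = torch.rand(n, d)      # points drawn uniformly on [0, 1]^d
px = torch.ones(n)        # uniform PDF value is 1 everywhere
fx = f(x)                 # integrand values at the sampled points

# `trainer` is assumed to be an already-configured BasicTrainer (not built here).
integrator = FixedSampleSurveyIntegrator(f, trainer, sample=(x, px, fx))
```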
- sample_survey(n_points=None, **kwargs)[source]¶
Sample points from the internally stored sample
- Parameters
n_points (int, None) – size of the batch to select from the sample
kwargs –
- Returns
(x,px,fx): sampled points, sampling distribution PDF values, function values
- Return type
tuple of torch.Tensor
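A short sketch of drawing a training batch, assuming `integrator` is a FixedSampleSurveyIntegrator that already holds a sample:

```python
# Select a batch of 1000 entries from the internally stored sample.
x, px, fx = integrator.sample_survey(n_points=1000)

# x:  (1000, d) points taken from the stored sample
# px: (1000,)   PDF values of the distribution the sample was drawn from
# fx: (1000,)   integrand values at those points
```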
- set_sample(sample)[source]¶
Assign a sample to be trained on
- Parameters
sample (tuple of torch.Tensor) – (x,px,fx): sampled points, sampling distribution PDF values, function values
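A sketch of swapping in a new sample on an existing integrator, assuming `integrator` is a FixedSampleSurveyIntegrator and `f` is the integrand passed at construction:

```python
import torch

# Fresh uniform sample on the unit hypercube (d = 2 here).
x = torch.rand(5_000, 2)
px = torch.ones(5_000)     # uniform sampling PDF
fx = f(x)                  # integrand values for the new points

# Subsequent survey phases will train on this sample.
integrator.set_sample((x, px, fx))
```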
- set_sample_csv(csv_path, device=None, delimiter=', ', dtype=float)[source]¶
Assign a sample to be trained on from a csv file. The file must contain equal-length rows with at least four columns, all numerical. All columns but the last two are interpreted as point coordinates, the next-to-last is the point PDF value and the last is the function value.
- Parameters
csv_path (str) – path to the csv file
device (torch.device) – device to which to send the sample
delimiter (str) – delimiter of the csv file
dtype (type) – numerical type used to parse the entries of the csv file
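A sketch of the expected file layout, written with the standard csv module (assuming `integrator` is an existing FixedSampleSurveyIntegrator; the delimiter is passed explicitly to match the file written here):

```python
import csv

import torch

# For d = 2, each row has four columns: x1, x2, PDF value, function value.
x = torch.rand(1_000, 2)
px = torch.ones(1_000)
fx = (x ** 2).sum(dim=1)

with open("sample.csv", "w", newline="") as handle:
    writer = csv.writer(handle)
    for point, p, value in zip(x.tolist(), px.tolist(), fx.tolist()):
        writer.writerow(point + [p, value])

integrator.set_sample_csv("sample.csv", delimiter=",")
```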
- set_sample_pickle(pickle_path, device=None)[source]¶
Assign a sample to be trained on from a pickle file. The pickle file must contain either a tuple (x,px,fx) of point batch, PDF value batch and function value batch, or a mapping with keys “x”, “px”, “fx”. In either case, these batches must be valid inputs for torch.tensor.
- Parameters
pickle_path (str) – path to the pickle file.
device (torch.device, None) – device to which to send the sample. If none is provided, the device of the flow parameters is used.
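A sketch of writing a compatible pickle file (assuming `integrator` is an existing FixedSampleSurveyIntegrator); either the tuple form or the mapping form shown here matches the description above:

```python
import pickle

import numpy as np

rng = np.random.default_rng(0)
x = rng.random((1_000, 2))          # point batch
px = np.ones(1_000)                 # PDF value batch (uniform sampling)
fx = (x ** 2).sum(axis=1)           # function value batch

# A mapping with keys "x", "px", "fx"; each entry is a valid input to torch.tensor.
with open("sample.pkl", "wb") as handle:
    pickle.dump({"x": x, "px": px, "fx": fx}, handle)

integrator.set_sample_pickle("sample.pkl")
```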