Allen Caldwell presented an extremely similar work at DIS2022.
PDFs were sampled from the Bayesian posterior; the only differences (with respect to the basic idea) are the assumption of an extremely restrictive parametrization and the use of only some DIS data.
In that case, the main point was actually the data themselves, which are potentially very useful, giving a direct handle on the PDFs at very large x.
The reason these data have not been implemented in NNPDF yet is that they follow Poissonian statistics (with quite small counts in a few bins), violating the Gaussian assumptions. This is almost a non-reason: Poissonian distributions are easy to use for pseudodata generation (sampling), and a Gaussian is a good enough approximation in the loss function.
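The two halves of that argument can be sketched in a few lines. This is a minimal illustration, not the NNPDF code: the bin counts are made up, and the split between exact Poisson sampling for replica generation and a Gaussian chi-square (with Poisson variance) in the loss is the assumption being tested.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical predicted counts per bin; the last bins have small
# counts, where the Gaussian approximation is worst.
mu = np.array([250.0, 80.0, 12.0, 3.5])

# Pseudodata generation: sample replicas directly from the Poisson
# distribution, which is cheap and exact.
pseudodata = rng.poisson(mu, size=(1000, mu.size))

# Loss function: Gaussian chi^2 with the variance set to the Poisson
# variance (= mu), i.e. the "good enough" approximation.
def chi2(data, theory):
    return np.sum((data - theory) ** 2 / theory, axis=-1)

losses = chi2(pseudodata, mu)
```

Since the replicas are sampled exactly, only the loss carries the Gaussian approximation, and its quality can be checked bin by bin against the small-count bins.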
However, we can implement them and compare one-to-one with the result of Caldwell's group in a similar environment, within two scenarios:

- the "apples to apples" comparison, based on the exact same dataset, and
- the "full" dataset sampling, where the dataset is extended to the MCPDF/NNPDF baseline plus these ZEUS data.