Pineappl integration with validphys #1529
Conversation
This is now "done" (let's see whether the fit bot is working) save for a few details that I will do directly here, since I already changed a lot. It is not ready for review until I get to do a full fit with all the datasets we have, instead of just a few selected ones, in case there are unforeseen problems (I'm sure there will be); @felixhekhorn is now running the ekos to prepare the full batch of datasets for the "baseline". Then there is the (important) detail that I'm probably wasting a lot of memory and time with the spaghetti code that I've put in. Other than that, I'm quite happy with the result. Some of the layers of the rightfully-hated dictionary-complexity of …
I've updated the secrets; chances are that I've broken things more than they were, but we'll discover it together.
Got hit by the indexing...
If the fit bot works I'm happy with the current state. Still not ready for review because I need to review the changes myself with a fresh pair of eyes.
Greetings from your nice fit 🤖 !
Check the report carefully, and please buy me a ☕, or better, a GPU 😉!
@felixhekhorn @alecandido is there a known problem with eko and python 3.9? The tests only fail when eko is included in the recipe (despite eko never being imported by any of the tests). Asking before comparing the two logs side by side to see what has been installed differently...
Mmm... we're not aware of problems, and indeed the unit tests and the isolated benchmarks work in 3.9. I'd say the most likely candidate would be …
The only big change that I can see is libgfortran going from 7 to 11... and when I do everything locally (with python 3.9), installing eko makes only the following change to the standard installation:
Which (locally) is not enough to break anything. Sigh. I really don't understand what's happening (also because if I run the test that seems to fail by itself in the GitHub CI, it works, but not when it is run with the rest?)
The numpy downgrade is expected since the current numba (0.55.1) only works with …
Actually, a backport for v0.55.2 is even scheduled: numba/numba#8067
I'm sorry for the people subscribed to this PR... For future reference: there is an action to open a terminal inside the CI. It does work, but as soon as I try to do something slightly complicated (and that includes getting the conda package with scp) it crashes :/ I thought it was my connection, but changing it didn't fix it. With respect to the actual problem, it only happens when eko is installed and only in the CI for python 3.9. I have not been able to reproduce it in any way on my computer... the only thing left is to try to get the same docker image that the GitHub CI is using...
Since this branch is perfectly ok and the silly error is still unknown (we now believe it is one of the packages that …), I'll investigate what the exact error is, since @andreab1997 will need … As @RoyStegeman asked, I've passed black through all the files I've touched in a single commit, and as @Zaharid asked, I've limited the damage to the parts of said files that I've actually modified (you can see in the changes that I did revert most if not all of the spurious changes). I hope I managed to make everyone happy at once this time. Thanks @felixhekhorn for the help with debugging this issue.
Actually, looking back at the logs:
However, I hope that NNPDF/eko#122 will solve it first.
It passed in 3.8 a few times.
Compatible, but I don't see where …
https://github.com/NNPDF/nnpdf/runs/6545874878?check_suite_focus=true Here it passed the entire test, but in general in quite a few runs you can see that 3.8 reached a few steps further than 3.9 (before 3.9 crashed the entire thing).
@scarlehoff maybe a few of these points (if not all) can be ticked
Closing in favour of #1578 |
EDIT:
This branch is outdated with respect to master, as it points to a pre-thcovmat-in-n3fit master; #1578 is a rebase of this branch onto the 08/07/2022 master: 3098d0a
As promised, the pineappl integration with vp is in preparation. In this first commit the pineappl tables are already compatible with vp and can be used, for instance, in `predictions` (like with the commondata reader, I put a jupyter notebook at the root of the repository). The fit part I will do tomorrow, which is a bit more complicated (if I want to keep both the old and new theories) since I will have to reconstruct the fktable from the pandas dataframe. I will also need to modify `FKTableData` to hold more interesting information, but nothing too worrisome. This will also close #1541
This PR has ended up being a tad bigger than one would like (and broader in scope). Introducing the `pineappl` tables meant it was easier to simplify some of the other stuff (such as the reader for the old fktables) rather than have both of them live together. As such, the review will be more painful than I hoped, so in order to help the reviewer (and myself in 5 years' time when I have to remember what I did) I've written some notes on the changes.

TODO before merging:
- `pineappl`, `eko` in conda-forge
- remove the `oldmode` flag from the loader

How to install the necessary packages
Since they are still not in conda, you will need to install `pineappl` in order to use this branch; luckily it is available in the python repos (just in case, make sure that the pineappl version that gets installed is `0.5.2`).
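The exact install commands from the original comment were lost in this copy; a minimal reconstruction, assuming `pip` in the active environment and the `0.5.2` pin stated above, would be:

```shell
# Install the pinned pineappl version from PyPI (reconstruction of the
# elided commands; 0.5.2 is the version required by this branch).
pip install "pineappl==0.5.2"

# Check that the installed version is indeed 0.5.2.
pip show pineappl
```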
Where to get pineappl grids
I've put in the root of the nnpdf server a folder called `pineappl_ingredients.tar`, which contains a `yamldb` (which goes into `NNPDF/data`) and a `pineappls` (which goes into `NNPDF/data/theory_200`).
Notes on the changes
Jupyter notebook: it's there just so we can play with it a bit, but I'll remove it once this is approved for merging.
All `n3fit` files: since now we have in vp a few objects that contain exactly the same information that was passed to some of the dictionaries in `n3fit`, I've removed said dictionaries. This is the only reason these files have changed, so you might as well ignore them.

`config.py`
: separated posdataset and integdataset.

`convolution.py`: in order to keep using `truediv` for `RATIO` (which is nice because it means that tensorflow, numpy, pandas or any other library knows what to do with the operation) it is necessary to make the pandas denominator into a numpy array. I think this is reasonable since in this case the denominator is usually a total cross section, so the indexes are not supposed to match the numerator.
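To illustrate the point about `truediv` and index alignment, here is a small self-contained sketch (the `num`/`den` names are illustrative, not the actual validphys objects): dividing two DataFrames aligns on the index, while dividing by a numpy array is purely positional.

```python
import pandas as pd

# Numerator and denominator with mismatched indexes, as happens when the
# denominator is a total cross section with its own set of data points.
num = pd.DataFrame({"result": [2.0, 4.0, 6.0]}, index=[0, 1, 2])
den = pd.DataFrame({"result": [2.0, 2.0, 2.0]}, index=[5, 6, 7])

# Dividing two DataFrames aligns on the index: nothing matches, so all NaN.
aligned = num / den
assert aligned.isna().all().all()

# Turning the denominator into a plain numpy array makes truediv positional.
ratio = num / den.to_numpy()
assert list(ratio["result"]) == [1.0, 2.0, 3.0]
```

This is why converting only the denominator is enough: the numerator keeps its index (so downstream code still sees the data-point labels), while the division itself ignores the denominator's.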
`core.py`:
- Added `load_commondata` to `DataSetInput` and `DataGroupSpec` so that one can get just the `CommonData` from `libNNPDF` instead of the whole `DataSet`.
- Modified `FKTableSpec` so that it can load both the new and old fktables.
- Added an `IntegrabilitySetSpec` which is equal to `PositivitySetSpec`.
`coredata.py`:
- I've added to `FKTableData` methods that generate the information needed to fit an fktable: `luminosity_mapping` and `get_np_fktable`.
- I've moved the application of the `cfactors` to `FKTableData`, to be equivalent to `with_cuts` (so that no funny games need to be played with the dataclass outside).
- Added also a `_protected` flag, since `cfactors` and `cuts` were done with the old fktables in mind and, as explained in `convolution.py`, they will find a different number of points. When the repetition flag was found in apfelcomb, it gets saved into the yaml database, and cuts or cfactors are applied accordingly.
- Note that `FKTableData` works the same no matter where the fktables came from (pineappl or old).
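The `with_cuts`-style pattern described above — returning a modified copy of the dataclass rather than mutating it from outside — can be sketched as follows (a toy `MiniFKTable`, not the real `FKTableData` API):

```python
from dataclasses import dataclass, replace

import pandas as pd

@dataclass(frozen=True)
class MiniFKTable:
    """Toy stand-in for FKTableData: a dataframe indexed by data point."""
    sigma: pd.DataFrame
    ndata: int

    def with_cuts(self, cuts):
        """Return a new table keeping only the data points in `cuts`."""
        newsigma = self.sigma.loc[list(cuts)]
        return replace(self, sigma=newsigma, ndata=len(cuts))

fk = MiniFKTable(sigma=pd.DataFrame({"x": [1.0, 2.0, 3.0, 4.0]}), ndata=4)
cut = fk.with_cuts([0, 2])
assert cut.ndata == 2
assert list(cut.sigma["x"]) == [1.0, 3.0]
assert fk.ndata == 4  # the original table is untouched
```

Keeping the dataclass frozen and going through `dataclasses.replace` is what avoids the "funny games from outside": callers can never leave a table in a half-modified state.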
`fkparser.py`: moved the application of `cfactors` away (see above).

`loader.py`:
- I've added `check_fkyaml` to load the new `yaml` files. Eventually the yaml information will come from the new commondata format, so at the moment I've just hardcoded the path of the yamldb inside the data folder.
- I've separated the positivity and integrability loading. This was not strictly necessary, but it facilitated the `n3fit_data.py` changes below and it was something that had annoyed me for a long time.
- For testing, I've added a flag to `check_dataset` such that if you use `oldmode` as a cfactor, the old fktables are used regardless. This is useful for debugging and will be removed alongside the jupyter notebook before merging (or as soon as the new theory is prepared).
- At the moment whether to use pineappl or not does not depend on the theory, since the pineappl tables are not part of any theory at the moment.
`n3fit_data.py`: this has been greatly simplified. Most notably, things like `mask_fk_tables` have been removed, and `fitting_data_dict` is no longer a dictionary with a list of dictionaries inside, but contains a list of `FittableDataSets` which come from the outside. The most interesting part is that this means issue "n3fit memory usage" #1541 is also solved. I had to do something else, which was creating a `TupleComp` class for the masks that depend on the name and the seed, but it is a small price to pay. For the rest, most of the changes in this module are just removing stuff.
`n3fit_data_utils.py`: here don't even look at the diff; this module has basically been rewritten from scratch, and some lines just happen to be similar. I've created the `FittableDataSet`, which contains all the information necessary to fit a given dataset other than the central value. Imho, once we have the new common data format and pure python `datasets`, this should be just an extension of those, but it's a bit tricky because in its current form this can be shared between replicas, and if we add the cv that won't be possible. But anyway, that's a problem for the future weeks.
`pineparser.py`: contains all the logic for loading the new fktables into `coredata.FKTableData` objects completely equivalent to what vp creates when reading the old fktables with pandas. This is probably going to be moved to `pineko`, since we believe building and reading the fktables should be one single thing; I'll do it when the "Move permanent part of fkutils here" pineko#12 PR is merged.
`results.py`: this is the reason for the `load_commondata` needed in `core.py`. Currently, getting results required not only the central value of the data but also loading the `libNNPDF` fktable, even if it was not used anywhere. I could live with that (just a waste of memory) when the old fktables exist, but this is no longer the case if you only have the pineappl version.