
Fix alpha value scheme C #53

Merged (2 commits), Oct 20, 2022
Conversation

andreab1997
Contributor

@andreab1997 andreab1997 commented Oct 20, 2022

After several discussions, we agreed that for scheme C of the scale variations we need to evaluate alpha_s at the process scale and not at the factorization scale (assuming no renormalization scale variations, of course).

This reverts a previous commit 6568293
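The distinction can be sketched with a toy running coupling (the one-loop form and all numbers below are illustrative assumptions, not eko's actual coupling):

```python
import math

def a_s(mu2, lambda2=0.04):
    # toy one-loop running coupling (illustrative only)
    return 1.0 / (9.0 * math.log(mu2 / lambda2))

Q2, xif = 100.0, 2.0       # process scale and factorization-scale variation
muf2 = xif * xif * Q2

a_process = a_s(Q2)        # scheme C after this fix: process scale
a_fact = a_s(muf2)         # previous (reverted) behaviour: factorization scale
assert a_process > a_fact  # the coupling decreases as the scale grows
```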

@andreab1997
Contributor Author

Question: do you know how PineAPPL is doing scheme C (in the right or the wrong way)? Because of course that would affect comparator.py.

@alecandido
Member

PineAPPL itself is not doing scheme C, since it does not do any scheme at all: grids provide the orders and the logs separately, so the values of alphas in particular are provided to PineAPPL from the outside.
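A toy illustration of this division of labour (the names and data structure are assumptions for the sketch, not the PineAPPL API): the "grid" stores each perturbative order separately, and the caller supplies alpha_s from the outside.

```python
def convolute(order_contribs, alphas, mur2):
    # order_contribs: {alpha_s power: weight}, mimicking a grid that keeps
    # the orders separate; the scheme choice lives entirely in which
    # alphas function the caller passes in
    return sum(alphas(mur2) ** p * w for p, w in order_contribs.items())

flat = lambda mu2: 0.1  # constant toy coupling supplied "from the outside"
result = convolute({0: 1.0, 1: 2.0}, flat, 100.0)  # 1.0 + 0.1 * 2.0
```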

```diff
@@ -394,7 +394,9 @@ def fk(self, name, grid_path, tcard, pdf):
     q2_grid = operators["Q2grid"].keys()
     # PineAPPL wants alpha_s = 4*pi*a_s
     # remember that we already accounted for xif in the opcard generation
-    alphas_values = [4.0 * np.pi * astrong.a_s(xir * xir * Q2) for Q2 in q2_grid]
+    alphas_values = [
+        4.0 * np.pi * astrong.a_s(xir * xir * Q2 / xif / xif) for Q2 in q2_grid
+    ]
```
Member

Change:

  • Q2 to muf2
  • q2_grid to muf2_grid

Contributor

It is already done now, but as I said yesterday, I would rather have kept Q2 and adjusted the q2_grid definition with the division by xif; this way it would be even more natural, I think.

Contributor Author

The problem is that, since this works for both scheme B and scheme C, and since q2_grid is really a muf2 grid only in the scheme C case, whatever we write will be inconsistent (or unclear) with either one scheme or the other.
As it is now (i.e. as @alecandido suggested), it is coherent with scheme C but not with scheme B (because in the scheme B case operators["Q2grid"].keys() is really Q2, not muf2). Instead, what @felixhekhorn suggests would be perfectly coherent with both, but less clear for scheme B, because we would have q2_grid = operators["Q2grid"].keys() / xif / xif with xif=1.0 (so making no difference). Personally I prefer @felixhekhorn's suggestion, but I believe it is just a matter of taste.
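The two conventions under discussion can be sketched side by side with a toy coupling (all numbers and the one-loop form are illustrative assumptions); both produce identical values, so the choice is indeed one of clarity only:

```python
import math

def a_s(mu2):
    # toy one-loop running coupling (illustrative only)
    return 1.0 / (9.0 * math.log(mu2 / 0.04))

xir, xif = 1.0, 2.0
muf2_grid = [xif * xif * q2 for q2 in (10.0, 100.0)]  # opcard grid carries xif^2

# Option 1 (as merged): undo the xif^2 factor at the a_s call
v1 = [4.0 * math.pi * a_s(xir * xir * m / xif / xif) for m in muf2_grid]

# Option 2 (@felixhekhorn's suggestion): divide when defining the grid
q2_grid = [m / xif / xif for m in muf2_grid]
v2 = [4.0 * math.pi * a_s(xir * xir * q2) for q2 in q2_grid]

assert v1 == v2  # the two conventions agree
```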

#54

Member

@alecandido alecandido Oct 20, 2022

Q2 does not exist; it is defined only for DIS (OK, and basic DY). It is always the central factorization scale, so let's start giving it the proper name and avoid confusion.

@andreab1997
Contributor Author

PineAPPL itself is not doing scheme C, since it does not do any scheme at all: grids are providing separately the orders and the logs, so values of alphas in particular are provided to PineAPPL from the outside.

Yes, of course, but we are not actually providing the values of alpha_s to PineAPPL; rather, we are providing the function that returns the alpha_s values. For example, in comparator.py we have

```python
before = np.array(
    pine.convolute_with_one(
        pdgid,
        pdfset.xfxQ2,
        pdfset.alphasQ2,
        order_mask=order_mask,
        xi=((xir, xif),),
    )
)
```

So my question is: what value of alpha_s is PineAPPL using? (i.e. what Q2 is it plugging into pdfset.alphasQ2()?)

@codecov-commenter

Codecov Report

Merging #53 (ba5814d) into main (fc965e5) will decrease coverage by 41.72%.
The diff coverage is 0.00%.

Additional details and impacted files

Impacted file tree graph

```diff
@@             Coverage Diff             @@
##             main      #53       +/-   ##
===========================================
- Coverage   88.51%   46.79%   -41.73%     
===========================================
  Files          16       16               
  Lines         592      592               
===========================================
- Hits          524      277      -247     
- Misses         68      315      +247     
```

| Flag | Coverage Δ |
| --- | --- |
| bench | ? |
| unittests | 46.79% <0.00%> (ø) |

Flags with carried forward coverage won't be shown. Click here to find out more.

| Impacted Files | Coverage Δ |
| --- | --- |
| src/pineko/theory.py | 19.29% <0.00%> (-65.50%) ⬇️ |
| src/pineko/comparator.py | 18.18% <0.00%> (-72.73%) ⬇️ |
| src/pineko/theory_card.py | 16.00% <0.00%> (-68.00%) ⬇️ |
| src/pineko/evolve.py | 24.07% <0.00%> (-64.82%) ⬇️ |
| src/pineko/parser.py | 27.27% <0.00%> (-63.64%) ⬇️ |
| src/pineko/configs.py | 54.00% <0.00%> (-40.00%) ⬇️ |
| src/pineko/cli/check.py | 55.76% <0.00%> (-38.47%) ⬇️ |
| src/pineko/cli/compare.py | 82.35% <0.00%> (-17.65%) ⬇️ |
| src/pineko/cli/opcard.py | 83.33% <0.00%> (-16.67%) ⬇️ |
| ... and 1 more | |

@felixhekhorn felixhekhorn added the bug Something isn't working label Oct 20, 2022
@cschwan
Contributor

cschwan commented Oct 20, 2022

@andreab1997 The strong coupling is always evaluated with the renormalization scale, and the renormalization scale is usually, but not always, the same as the factorization scale before the grids are evolved.

@felixhekhorn
Contributor

@alecandido Why are we failing for pylint? poetry install should install all groups, shouldn't it?

run: poetry install --no-interaction

and so also test

pineko/pyproject.toml

Lines 48 to 52 in fc965e5

```toml
[tool.poetry.group.test.dependencies]
pytest = "^7.1.3"
pytest-cov = "^4.0.0"
pytest-env = "^0.6.2"
pylint = "^2.11.1"
```

@andreab1997
Contributor Author

@alecandido Why are we failing for pylint? poetry install should install all groups, shouldn't it?

run: poetry install --no-interaction

and so also test

pineko/pyproject.toml

Lines 48 to 52 in fc965e5

[tool.poetry.group.test.dependencies]
pytest = "^7.1.3"
pytest-cov = "^4.0.0"
pytest-env = "^0.6.2"
pylint = "^2.11.1"

Don't worry about this. I have solved it in #52. We just need to merge that.

@andreab1997
Contributor Author

@andreab1997 The strong coupling is always evaluated with the renormalization scale, and the renormalization scale is usually, but not always, the same as the factorization scale before the grids are evolved.

OK so, just to be sure: if I do

```python
pine.convolute_with_one(
    pdgid, pdfset.xfxQ2, pdfset.alphasQ2, order_mask=order_mask, xi=((1.0, 2.0),)
)
```

would alpha_s be evaluated just at Q2, or at xif * xif * Q2 = 2.0 * 2.0 * Q2?

@alecandido
Member

alecandido commented Oct 20, 2022

@alecandido Why are we failing for pylint? poetry install should install all groups, shouldn't it?

Not the optional ones, of course (unless explicitly specified).

pineko/pyproject.toml

Lines 45 to 48 in fc965e5

```toml
[tool.poetry.group.test]
optional = true

[tool.poetry.group.test.dependencies]
```

@cschwan
Contributor

cschwan commented Oct 20, 2022

@andreab1997 Neither of those two options: it will be evaluated at xir * xir * mur2, where xir is 1.0 in your example and mur2 is set internally and may differ from Q2. As I said, this usually makes no difference because in pinefarm we always set mur = muf, but for some grids the two do differ; see for instance NNPDF/pineappl#25 (comment).
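A tiny sketch of the rule just stated, with the grid's internal mur2 modelled as a stored number (toy code, not the PineAPPL implementation):

```python
def coupling_scale(mur2_internal, xir):
    # only the renormalization variation xir rescales the coupling scale;
    # xif never enters this formula
    return xir * xir * mur2_internal

# with xi = ((1.0, 2.0),): xir = 1.0, xif = 2.0
scale = coupling_scale(100.0, 1.0)  # the xif = 2.0 part is irrelevant here
```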

@andreab1997
Contributor Author

@andreab1997 Neither of those two options, it will be evaluated at xir * xir * mur2, where xir is 1.0 in your example and mur2 is set internally and may differ w.r.t. Q2. As I said this usually isn't the case because in pinefarm we always set mur = muf, but for some grids this is the case, see for instance here NNPDF/pineappl#25 (comment).

OK, and of course when you say muf here you mean the central one (meaning that even if I set xif = 2.0, PineAPPL would still use the central scale), right? In that case, fine: after this is merged we will do the same.

Moreover, I believe that this [NNPDF/pineappl#25 (comment)] is actually accounted for in the Q2grid inside the operator card, right? So if for a certain grid you don't set mur = muf but do something different (for example, adding ptjet^2), then the Q2 grid in the operator card will simply be different, taking that into account.

@felixhekhorn
Contributor

@andreab1997 Neither of those two options, it will be evaluated at xir * xir * mur2, where xir is 1.0 in your example and mur2 is set internally and may differ w.r.t. Q2. As I said this usually isn't the case because in pinefarm we always set mur = muf, but for some grids this is the case, see for instance here NNPDF/pineappl#25 (comment).

This is an additional "bug": so far we were relying on the simplification muf = mur. If you wish to fix it here, you should change the definition of the now-called muf2_grid so that it is computed from the grid (using .axes()) instead of from the operator.

(@alecandido this is the case we were discussing yesterday)
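A minimal sketch of that suggestion, with a toy stand-in for the grid (the (mur2, weight) entry structure below is a made-up assumption; the real fix would read the scales off the PineAPPL grid itself rather than from the operator card):

```python
# toy grid entries as (mur2, weight) pairs -- hypothetical structure
entries = [(10.0, 0.5), (10.0, 0.25), (90.0, 0.1)]

# collect the distinct renormalization scales carried by the grid itself,
# instead of trusting the operator card's Q2grid (which assumes mur = muf)
mur2_grid = sorted({mur2 for mur2, _ in entries})
```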

@alecandido
Member

(@alecandido this is the case we were discussing yesterday)

Good to know!

And yes, essentially @andreab1997 there are two central scales: mur2 and muf2.
The factorization and renormalization scales are usually set to the process scale, which for DIS is unique (the EW boson virtuality), and the same holds for basic DY, but in general it is not so simple.

For example, if you are not fully inclusive, you might measure the pt or rapidity of particles; at that point you have more options for "the process scale", not only Q2.
So there are cases in which the central scale for renormalization is not the same as for factorization. In general, Q2 is only one of the many scales of the process, so there is nothing special about it (that's why we should drop that name everywhere).

The unambiguous nomenclature is $z$ for the momentum fraction (or $\xi$, since one is the variable of the PDF and the other of the coefficient function), and $\mu_F^2$ for the factorization scale, i.e. the PDF scale.
Usually people call them $x$ and $Q^2$, but these are explicitly ambiguous: they are DIS variables, i.e. Bjorken $x$ and the EW boson virtuality. Back in the day, PDFs and structure functions were the same thing, i.e. the LO picture, which is why that terminology was adopted, and people are still using it.

@andreab1997
Contributor Author

(@alecandido this is the case we were discussing yesterday)

Good to know!

And yes, essentially @andreab1997 there are two central scales: mur2 and muf2. The factorization and renormalization scales are usually set to the process scale, that for DIS is a unique one (the EW boson virtuality), and the same for basic DY, but in general is not so simple.

For example, if you are not fully inclusive, you might measure the pt or rapidity of particles, at that point you have more options for "the process scale", not only Q2. So, there are cases in which the central scale for ren is not the same of fact. In general, Q2 is only one out of many scales of the process, so there is nothing special with it (that's why we should drop that name everywhere).

The unambiguous nomenclature is z for momentum fraction (or ξ, since one is the variable of the PDF, and the other of the coefficient function), and μF2 for the factorization scale, i.e. the PDF scale. Usually, people call them x and Q2, but they are explicitly ambiguous: they are DIS variables, i.e. Bjorken x and EW boson virtuality. But back in the days, PDFs and structure function were the same thing, i.e. the LO picture, and this is why they adopted this terminology, and people are still using it.

Yes I was asking exactly because I am aware of this "issue" :)

@andreab1997
Contributor Author

Ok I am merging, thanks for this!

@andreab1997 andreab1997 merged commit a0b2378 into main Oct 20, 2022
@giacomomagni giacomomagni deleted the fix_sv_alpha branch April 19, 2023 08:59