
Raw and weighted yields in significances output are incorrectly multiplied by luminosity #82

Open
jmrolsson opened this issue May 23, 2017 · 1 comment
@jmrolsson

https://github.com/kratsg/optimization/blame/master/root_optimize/command_line.py#L177

This should be fixed so that a yield is only multiplied by luminosity when its count_type is a scaled (weighted) count; raw counts should be left untouched.
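
A minimal sketch of the suggested behaviour (the helper name and the 'raw' key are assumptions for illustration, not the actual root_optimize API): apply the luminosity factor only to weighted counts and pass raw counts through unchanged.

    # Hypothetical helper, not part of root_optimize: scale only weighted counts.
    def scale_counts(counts_type, counts, lumi):
        if counts_type == 'raw':
            return counts                 # raw event counts: no luminosity scaling
        return lumi * 1000.0 * counts     # mirrors the args.lumi*1000 factor in the linked line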

kratsg self-assigned this May 23, 2017
kratsg added the bug label May 23, 2017
@kratsg (Owner) commented Sep 10, 2019:

Refers to this line:

    sig_dict = dict(
        [('hash', cuthash)]
        + [('significance_{0:s}'.format(counts_type),
            utils.get_significance(args.lumi*1000*counts,
                                   args.lumi*1000*total_bkgd[cuthash][counts_type],
                                   args.insignificanceThreshold,
                                   args.bkgdUncertainty,
                                   args.bkgdStatUncertainty,
                                   total_bkgd[cuthash]['raw']))
           for counts_type, counts in counts_dict.iteritems()]
        + [('yield_{0:s}'.format(counts_type),
            {'sig': args.lumi*1000*counts,
             'bkg': args.lumi*1000*total_bkgd[cuthash][counts_type]})
           for counts_type, counts in counts_dict.iteritems()]
    )

This will be improved by removing the luminosity calculation entirely. Rather than passing an arbitrary scaling in through the CLI, the correct weight should be applied to the inputs themselves, by creating a branch (or an alias) for that arbitrary scaling as needed.
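
As a rough sketch of that direction (an assumption about the eventual fix, not the actual patch), the dictionary above would be built with no args.lumi factor at all, on the expectation that counts_dict and total_bkgd already carry the desired weights, applied upstream via a weight branch or an alias on the input tree:

    # Sketch only: counts_dict and total_bkgd are assumed to already include any
    # desired scaling (applied upstream as a weight branch or alias), so no
    # luminosity factor appears here. .items() assumes Python 3.
    sig_dict = dict(
        [('hash', cuthash)]
        + [('significance_{0:s}'.format(counts_type),
            utils.get_significance(counts,
                                   total_bkgd[cuthash][counts_type],
                                   args.insignificanceThreshold,
                                   args.bkgdUncertainty,
                                   args.bkgdStatUncertainty,
                                   total_bkgd[cuthash]['raw']))
           for counts_type, counts in counts_dict.items()]
        + [('yield_{0:s}'.format(counts_type),
            {'sig': counts, 'bkg': total_bkgd[cuthash][counts_type]})
           for counts_type, counts in counts_dict.items()]
    )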
