Calibration systematics studies in Twinkles? #415

Open · rbiswas4 opened this issue Nov 17, 2016 · 3 comments

@rbiswas4 (Member)

At the SN meeting (which everyone should have stayed back for :) ), a discussion started about how we could use Twinkles to study the impact of photometric calibration on supernova cosmology. This is one of the biggest problems in SN cosmology, and most SN cosmologists would call it the largest source of 'systematic errors' in current analyses, as those analyses themselves demonstrate.

The origin of the problem is the reference catalog used for calibration. In real life, this is a catalog of astrophysical point sources of specific classes (certain white dwarfs) whose spectrophotometry and classification are known extremely well (people like some very specific kinds of sources, and maybe @wmwv can give us more background on why one can't just measure the spectra to death rather than mixing measurements with theoretical understanding). Currently, I believe we are putting all of the stars in the simulation into the truth catalog, so we are not seeing this effect.

The discussion (with @wmwv, @djreiss) ended with a suggestion for our current Twinkles Run 3: we could emulate this effect by re-running the relevant DM pipeline steps with the reference catalog sub-selected down to a smaller number of stars, while keeping the magnitude distribution as expected for calibration stars (a sketch of the sub-selection is below). In future runs we could actually put in biases if we want to study this in greater detail.
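A minimal sketch of that sub-selection, assuming the truth catalog is a pandas DataFrame; the column name (`mag_r`), bin edges, and per-bin target counts are placeholders, not anything defined in Twinkles:

```python
import numpy as np

def subsample_reference(truth, targets_per_bin, mag_col="mag_r",
                        mag_bins=np.arange(14.0, 22.5, 0.5), seed=42):
    """Thin the `truth` DataFrame to a calibration-like reference set
    whose magnitude histogram matches `targets_per_bin` (one target
    count per interior magnitude bin)."""
    rng = np.random.default_rng(seed)
    bin_idx = np.digitize(truth[mag_col], mag_bins)
    keep = []
    for i, n_target in enumerate(targets_per_bin, start=1):
        in_bin = np.flatnonzero(bin_idx == i)
        n_draw = min(len(in_bin), int(n_target))
        keep.append(rng.choice(in_bin, size=n_draw, replace=False))
    return truth.iloc[np.concatenate(keep)].copy()
```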

@wmwv (Contributor) commented Nov 21, 2016

Such explorations can come after the pixel-level analyses, so they could be carried out without significant additional computational effort.

We should think about bookkeeping. Calibration information is associated with the calexp dataset in the Butler, and we don't really want full copies of the processed images for each of the different reference sets. We just need to do catalog-level operations.
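A catalog-level recalibration could look like the hedged sketch below: given instrumental fluxes and a per-visit zeropoint scale (the names `inst_flux` and `fluxmag0` here are illustrative, not the actual Butler dataset fields), a perturbed zeropoint re-derives magnitudes without ever touching the calexp pixels:

```python
import numpy as np

def recalibrate_mags(inst_flux, fluxmag0, delta_zp=0.0):
    """Magnitudes from instrumental flux for a given fluxmag0 scale,
    plus an optional zeropoint perturbation (in mag)."""
    return -2.5 * np.log10(inst_flux / fluxmag0) + delta_zp
```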

@drphilmarshall (Contributor) commented Nov 28, 2016 via email

@wmwv (Contributor) commented Nov 28, 2016

Calibration uncertainties tend to be different from most other uncertainties because they are essentially all about the covariance. That makes them difficult to simulate in a single run, because you will just get some fixed offset. All SN Ia cosmological analyses in the past 10 years have done detailed simulations of the calibration uncertainties and their resulting effect on measurements of w.

I suggest we take the target goals from the SRD and randomly sample within them. The question then becomes how to sample. I suggest the following (a sketch implementing all three modes follows the list).
[By "filter" below I mean the system transmission function associated with the given filter.]

  1. The reference catalog should have associated errors and be randomly resampled within those errors. We may already do this.
  2. Simulate a color-dependent calibration, such that the $\Delta g$ error is a function of $r-i$ color. We could do this by having an input reference calibration catalog where a series of different $\Delta g$ shifts is introduced as a function of $r-i$ color.
  3. Simulate relative filter-to-filter errors ("absolute color" errors): e.g., there is some unknown offset between g and r in the calibration to AB. There is a $6 \times 6$ matrix that describes the covariance between the calibrations of the 6 filters. This could be implemented as a post-catalog manipulation of just the 6 absolute calibration numbers of the filters.
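A minimal sketch of the three modes, assuming a pandas-like reference catalog; all column names, the color slope, and the covariance values are placeholders for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
FILTERS = ["u", "g", "r", "i", "z", "y"]

def resample_within_errors(ref):                       # mode 1
    """Scatter each reference magnitude by its quoted error."""
    for f in FILTERS:
        ref[f"mag_{f}"] += rng.normal(0.0, ref[f"mag_err_{f}"])
    return ref

def color_dependent_shift(ref, slope=0.005):           # mode 2
    """Apply a Delta g that is linear in r - i color (slope in mag/mag)."""
    ref["mag_g"] += slope * (ref["mag_r"] - ref["mag_i"])
    return ref

def filter_covariant_offsets(cov):                     # mode 3
    """Draw one correlated set of per-filter zeropoint offsets from a
    6x6 covariance matrix (mag^2); apply the same draw to the whole run."""
    return dict(zip(FILTERS, rng.multivariate_normal(np.zeros(6), cov)))
```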

Doing 1 or 2 would involve making sure it is easy to update the calexp_md fluxmag0 for an image based on its already-processed catalog. Or, even better, adding an additional simulation wrapper level that preserves the re-entrancy of the data on disk.
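One way such a wrapper could preserve re-entrancy, sketched under the assumption that the processed catalogs live as flat files with a `mag_calib` column (both the layout and the column name are hypothetical): write the perturbed catalogs to a fresh output directory and never modify the originals.

```python
from pathlib import Path
import pandas as pd

def rerun_with_perturbed_zp(src_path, out_dir, delta_zp):
    """Apply a zeropoint shift to an already-processed catalog and
    write the result to a new directory, leaving the original
    catalog on disk re-runnable."""
    cat = pd.read_csv(src_path)
    cat["mag_calib"] += delta_zp  # equivalent to scaling fluxmag0
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    cat.to_csv(out / Path(src_path).name, index=False)
```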

I think we should not simulate calibration errors as a function of airmass or atmospheric conditions.

It's unfortunately somewhat beyond the scope of the Twinkles team to generate an independent, reasonable estimate of the final calibration uncertainties for LSST. Calibration is hard. SDSS was ground-breaking in getting to ~2%. DES sounds like it's doing well, which makes me optimistic that LSST can do well too.
