
Running iterative coarse graining for finding an optimal representation


Workflow

Given a set of increasing bead sizes (one per iteration), a system, a scoring function, and a sampling scheme, the following scripts are called at each iteration:

  1. Set_next_representation.py (initialize bead map if first iteration or update bead map based on previous iteration)
  2. Sampling script (to perform limited sampling with the current representation)
  3. Select_good_scoring_models.py (to obtain good-scoring models after sampling)
  4. Estimate_sampling_precision.py (to compute bead-wise sampling precisions and identify "imprecise" beads; a minimal sketch of this computation follows the list)
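
The actual precision computation lives in Estimate_sampling_precision.py; the snippet below is only a minimal sketch of the idea, assuming the good-scoring models are available as superposed bead-coordinate arrays and that a bead counts as "imprecise" when its positional spread across models exceeds a user-chosen threshold. The function names, array layout, and threshold are illustrative, not the script's actual interface.

```python
import numpy as np

def beadwise_precision(models):
    """Per-bead sampling precision: RMS deviation of each bead's position
    from its mean position over a set of aligned good-scoring models.

    models: array of shape (n_models, n_beads, 3), already superposed.
    Returns an array of shape (n_beads,) in the same distance units.
    """
    mean_pos = models.mean(axis=0)                   # (n_beads, 3)
    sq_dev = ((models - mean_pos) ** 2).sum(axis=2)  # (n_models, n_beads)
    return np.sqrt(sq_dev.mean(axis=0))              # (n_beads,)

def imprecise_beads(models, threshold):
    """Indices of beads whose sampling precision is worse (larger) than the
    given threshold; these are candidates for coarsening in the next
    iteration's bead map."""
    return np.where(beadwise_precision(models) > threshold)[0]
```

For instance, with 100 good-scoring models of a 50-bead representation stored in a (100, 50, 3) array, `imprecise_beads(models, threshold=10.0)` would return the indices of beads whose spread exceeds 10 Å.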

In the final iteration, we perform only step 1 and then full, extensive sampling in step 2 instead of limited sampling. We then analyze the models in the usual way: the complete sampling exhaustiveness test, assessing the fit to data, and so on. A minimal sketch of a driver loop tying the steps together is shown below.
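
The sketch below strings the four steps into one loop over iterations, with full sampling replacing limited sampling in the last iteration. The script names come from the list above, but the command-line flags and the sampling script name (`sample.py`) are hypothetical placeholders, not the scripts' actual interfaces.

```python
import subprocess

def run(cmd):
    """Run one pipeline step, stopping the driver if it fails."""
    subprocess.run(cmd, check=True)

bead_sizes = [1, 5, 10, 20, 30]  # residues per bead, one entry per iteration

for it, bead_size in enumerate(bead_sizes):
    last = (it == len(bead_sizes) - 1)

    # 1. Initialize (first iteration) or update the bead map.
    run(["python", "Set_next_representation.py",
         "--iteration", str(it), "--bead_size", str(bead_size)])

    # 2. Limited sampling, or full sampling in the final iteration.
    #    "sample.py" stands in for whatever sampling script is used.
    run(["python", "sample.py", "--iteration", str(it),
         "--mode", "full" if last else "limited"])

    # 3. Collect good-scoring models from the sampling output.
    run(["python", "Select_good_scoring_models.py", "--iteration", str(it)])

    if last:
        break  # final models go on to the usual analysis and validation

    # 4. Bead-wise precisions; flag imprecise beads for the next bead map.
    run(["python", "Estimate_sampling_precision.py", "--iteration", str(it)])
```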

Examples

For the binary complexes, we used a single master script to run the four steps above, taking all parameters from config files. The master scripts, along with the config files, input data files, etc., are in the repo containing the benchmark. A hypothetical example of such a config file is sketched below.
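
This example only illustrates the kind of parameters a config-driven master script might read and how they could be parsed with Python's standard `configparser`; the section and key names here are invented for illustration, and the real keys are defined in the config files in the benchmark repository.

```python
from configparser import ConfigParser

# Hypothetical config: the section and key names are illustrative only.
EXAMPLE_CONFIG = """
[representation]
bead_sizes = 1,5,10,20,30       ; residues per bead, one entry per iteration

[sampling]
limited_frames = 5000           ; frames per limited-sampling run
full_frames = 2000000           ; frames for the final, full run

[analysis]
score_threshold = -100.0        ; cutoff for good-scoring models
precision_threshold = 10.0      ; angstroms; beads above this are imprecise
"""

config = ConfigParser(inline_comment_prefixes=(";",))
config.read_string(EXAMPLE_CONFIG)
bead_sizes = [int(b) for b in config["representation"]["bead_sizes"].split(",")]
print(bead_sizes)  # [1, 5, 10, 20, 30]
```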

For TFIIH, we ran the steps in the master script here.