Running iterative coarse graining for finding an optimal representation
shruthivis edited this page Nov 4, 2018
Given a set of increasing bead sizes (one per iteration), a system, a scoring function, and a sampling scheme, the following scripts are called at each iteration:
1. Set_next_representation.py (initializes the bead map on the first iteration, or updates it based on the previous iteration)
2. Sampling script (performs limited sampling with the current representation)
3. Select_good_scoring_models.py (obtains good-scoring models after sampling)
4. Estimate_sampling_precision.py (computes bead-wise sampling precisions and identifies "imprecise" beads)
In the final iteration, we simply perform step 1 and then full, extensive sampling in step 2 instead of limited sampling. We then analyze the models in the usual way: performing the complete sampling exhaustiveness test, testing the fit to data, and so on.
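The control flow of the four steps can be sketched as below; this is a minimal illustration of the loop, not the actual master script, and all function names are hypothetical stand-ins for the scripts listed above:

```python
# Sketch of the iterative coarse-graining loop. Each call is a
# placeholder for the corresponding script (Set_next_representation.py,
# the sampling script, Select_good_scoring_models.py, and
# Estimate_sampling_precision.py).

def run_pipeline(bead_sizes):
    """Drive one pass of the protocol over increasing bead sizes.

    bead_sizes: list of bead sizes, one per iteration (increasing).
    Returns the ordered list of steps executed, for illustration.
    """
    log = []
    for i, size in enumerate(bead_sizes):
        final = (i == len(bead_sizes) - 1)
        # Step 1: initialize (first iteration) or update the bead map.
        log.append(f"set_next_representation(bead_size={size})")
        # Step 2: limited sampling, or full sampling on the last iteration.
        log.append("full_sampling" if final else "limited_sampling")
        if not final:
            # Step 3: keep only the good-scoring models.
            log.append("select_good_scoring_models")
            # Step 4: bead-wise precisions flag "imprecise" beads,
            # which get coarsened in the next iteration.
            log.append("estimate_sampling_precision")
    return log
```

As in the text above, only steps 1 and 2 run in the final iteration (with full sampling); earlier iterations run all four steps with limited sampling.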
For the binary complexes, we used a single master script to run the four steps above, taking all parameters from config files. The master scripts are in the repository containing the benchmark, along with the config files, input data files, etc.
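As a rough illustration of the kind of parameters such a config file might hold, a hypothetical sketch follows; every section and key name below is invented for illustration and is not taken from the benchmark repository:

```ini
; Hypothetical config sketch (all key names are assumptions)
[representation]
bead_sizes = 1, 5, 10, 20, 30   ; one bead size per iteration

[sampling]
limited_frames = 5000           ; frames per intermediate iteration
full_frames = 50000             ; frames for the final iteration

[analysis]
score_threshold = -100.0        ; cutoff for good-scoring models
precision_cutoff = 10.0         ; beads above this are "imprecise" (A)
```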
For TFIIH, we ran the steps in the master script here.