ci: introduce CI, add flake8, black formatting (#21)
* Minor README modification

* Starting CI workflows

* Adding flake8

* Update README with mockup badges

* Adding mock version of ttbar workflow

* Ignoring flake8 to test conditional running of ttbar workflow CI

* Making ttbar workflow CI run on specific branch

* Update README

* Increasing PEP8 compliance - part 1

* Modifying ttbar workflow - installing as package

* Uncommenting linting, possibly leading to an error at run time

* Removing python matrix - adding black linting

* Solving indentation

* Ignoring flake8 error to check black linting in GitHub Actions

* Back to flake8 error - no black error

* Back to original README
XavierAtCERN authored May 13, 2022
1 parent 1752ce6 commit 2549042
Showing 32 changed files with 11,489 additions and 4,300 deletions.
42 changes: 42 additions & 0 deletions .github/workflows/python_linting.yml
@@ -0,0 +1,42 @@
name: Linting

on:
  push:
    branches: [ CI ]
  pull_request:
    branches: [ CI ]

jobs:
  build:

    runs-on: ubuntu-latest
    strategy:
      max-parallel: 4
      matrix:
        python-version: [3.7]

    steps:
    - uses: actions/checkout@v2

    - name: Set up Python ${{ matrix.python-version }}
      uses: actions/setup-python@v2
      with:
        python-version: ${{ matrix.python-version }}

    - name: Install Dependencies
      run: |
        python -m pip install --upgrade pip
        pip install flake8
    - name: Lint with flake8
      run: |
        # stop the build if there are Python syntax errors or undefined names
        flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
        # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
        flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics --ignore=E50,F401,F403
    - name: Lint with black
      uses: psf/black@stable
      with:
        options: "--check --verbose"
        src: "./"
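The first flake8 pass above fails the build only on syntax errors and undefined names (the E9/F63/F7/F82 code classes); everything else is reported but tolerated via `--exit-zero`. As a rough, stdlib-only illustration of the kind of defect that hard gate catches (this uses `ast`, not flake8 itself):

```python
import ast


def has_syntax_error(source: str) -> bool:
    """Return True if the source fails to parse.

    Roughly what flake8's E9 class flags as a hard error.
    """
    try:
        ast.parse(source)
        return False
    except SyntaxError:
        return True


# A broken def is a hard failure; valid code passes.
print(has_syntax_error("def f(:\n    pass"))
print(has_syntax_error("def f():\n    pass"))
```

Undefined-name detection (F82) needs scope analysis on top of parsing, which is why the workflow delegates the real check to flake8.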
52 changes: 52 additions & 0 deletions .github/workflows/ttbar_workflow.yml
@@ -0,0 +1,52 @@
name: TTbar

on:
  push:
    branches: [ CI ]
  pull_request:
    branches: [ CI ]
  workflow_run:
    workflows: ["Linting"]
    types:
      - completed

jobs:
  build:

    runs-on: ubuntu-latest
    strategy:
      max-parallel: 4
      matrix:
        python-version: [3.7, 3.8, 3.9]

    steps:
    - uses: actions/checkout@v2

    - name: Set up Python ${{ matrix.python-version }}
      uses: actions/setup-python@v2
      with:
        python-version: ${{ matrix.python-version }}

    - name: Set up miniconda
      uses: conda-incubator/setup-miniconda@v2
      with:
        auto-update-conda: true
        activate-environment: btv_nano_commissioning
        python-version: ${{ matrix.python-version }}

    - name: Install Dependencies
      # might not be needed for local tests
      run: |
        pip install git+https://github.com/CoffeaTeam/coffea.git
        conda install -c conda-forge xrootd
        conda install -c conda-forge ca-certificates
        conda install -c conda-forge ca-policy-lcg
        conda install -c conda-forge dask-jobqueue
        conda install -c anaconda bokeh
        conda install -c conda-forge 'fsspec>=0.3.3'
        conda install dask
        pip install -e .
    - name: Run workflow
      run: |
        python runner.py --workflow ttcom --json metadata/test.json
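The final step drives `runner.py` with a workflow name and a fileset JSON. The actual options of `runner.py` are not part of this commit, so the following is only a hypothetical sketch of that command-line surface, reconstructed with `argparse`:

```python
import argparse

# Hypothetical reconstruction of the CLI the CI step calls;
# the real runner.py may define different options.
parser = argparse.ArgumentParser(description="Dispatch a commissioning workflow")
parser.add_argument("--workflow", required=True, help="workflow name, e.g. ttcom")
parser.add_argument(
    "--json", required=True, help="fileset JSON, e.g. produced by filefetcher/fetch.py"
)

# Parse the exact arguments the CI step passes
args = parser.parse_args(["--workflow", "ttcom", "--json", "metadata/test.json"])
print(args.workflow, args.json)
```

With required flags like these, a missing `--json` would make the CI step fail fast with a usage message instead of a later file-not-found error.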
8 changes: 6 additions & 2 deletions README.md
@@ -1,5 +1,9 @@
 
 # BTVNanoCommissioning
+[![Linting](https://github.com/XavierAtCERN/BTVNanoCommissioning/actions/workflows/python_linting.yml/badge.svg)](https://github.com/XavierAtCERN/BTVNanoCommissioning/actions/workflows/python_linting.yml)
+[![TTbar](https://github.com/XavierAtCERN/BTVNanoCommissioning/actions/workflows/ttbar_workflow.yml/badge.svg)](https://github.com/XavierAtCERN/BTVNanoCommissioning/actions/workflows/ttbar_workflow.yml)
+[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
+
 Repository for Commissioning studies in the BTV POG based on (custom) nanoAOD samples
 
 ## Requirements
@@ -26,9 +30,9 @@ NOTE: always make sure that conda, python, and pip point to local Miniconda installation
 You can either use the default environment `base` or create a new one:
 ```
 # create new environment with python 3.7, e.g. environment of name `coffea`
-conda create --name coffea python3.7
+conda create --name btv_nano_commissioning python=3.7
 # activate environment `coffea`
-conda activate coffea
+conda activate btv_nano_commissioning
 ```
 You could simply create the environment through the existing `env.yml` under your conda environment
 ```
50 changes: 35 additions & 15 deletions filefetcher/fetch.py
@@ -1,35 +1,55 @@
 import os
 import json
 import argparse
-parser = argparse.ArgumentParser(description='Run analysis on baconbits files using processor coffea files')
-parser.add_argument('-i', '--input', default=r'singlemuon', help='List of samples in DAS (default: %(default)s)')
-parser.add_argument('-s', '--site', default=r'global', help='Site (default: %(default)s)')
-parser.add_argument('-o', '--output', default=r'singlemuon', help='Site (default: %(default)s)')
+
+parser = argparse.ArgumentParser(
+    description="Run analysis on baconbits files using processor coffea files"
+)
+parser.add_argument(
+    "-i",
+    "--input",
+    default=r"singlemuon",
+    help="List of samples in DAS (default: %(default)s)",
+)
+parser.add_argument(
+    "-s", "--site", default=r"global", help="Site (default: %(default)s)"
+)
+parser.add_argument(
+    "-o", "--output", default=r"singlemuon", help="Site (default: %(default)s)"
+)
 args = parser.parse_args()
 fset = []
 
-with open(args.input) as fp:
-    lines = fp.readlines()
-    for line in lines:
+with open(args.input) as fp:
+    lines = fp.readlines()
+    for line in lines:
         fset.append(line)
 
 fdict = {}
 
-instance = 'prod/'+args.site
+instance = "prod/" + args.site
 
 
-xrd = 'root://xrootd-cms.infn.it//'
+xrd = "root://xrootd-cms.infn.it//"
 
 for dataset in fset:
     print(fset)
-    flist = os.popen(("/cvmfs/cms.cern.ch/common/dasgoclient -query='instance={} file dataset={}'").format(instance,fset[fset.index(dataset)].rstrip())).read().split('\n')
+    flist = (
+        os.popen(
+            (
+                "/cvmfs/cms.cern.ch/common/dasgoclient -query='instance={} file dataset={}'"
+            ).format(instance, fset[fset.index(dataset)].rstrip())
+        )
+        .read()
+        .split("\n")
+    )
     dictname = dataset.rstrip()
     if dictname not in fdict:
-        fdict[dictname] = [xrd+f for f in flist if len(f) > 1]
-    else: #needed to collect all data samples into one common key "Data" (using append() would introduce a new element for the key)
-        fdict[dictname].extend([xrd+f for f in flist if len(f) > 1])
+        fdict[dictname] = [xrd + f for f in flist if len(f) > 1]
+    else:  # needed to collect all data samples into one common key "Data" (using append() would introduce a new element for the key)
+        fdict[dictname].extend([xrd + f for f in flist if len(f) > 1])
 
-#pprint.pprint(fdict, depth=1)
+# pprint.pprint(fdict, depth=1)
 
-with open('../metadata/%s.json'%(args.output), 'w') as fp:
+with open("../metadata/%s.json" % (args.output), "w") as fp:
     json.dump(fdict, fp, indent=4)
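For reference, `fetch.py` writes a fileset JSON mapping each DAS dataset name to its list of xrootd file URLs, which is the shape the `--json` input of the CI workflow consumes. A minimal round-trip sketch of that structure (the dataset name and file paths here are purely illustrative, not real entries):

```python
import json

# Illustrative fileset in the shape fetch.py writes:
# dataset name -> list of xrootd file URLs (hypothetical entries)
fileset = {
    "/SingleMuon/ExampleEra-v1/NANOAOD": [
        "root://xrootd-cms.infn.it///store/data/example1.root",
        "root://xrootd-cms.infn.it///store/data/example2.root",
    ]
}

# Round-trip through JSON, as fetch.py's json.dump / a reader's json.load would
serialized = json.dumps(fileset, indent=4)
restored = json.loads(serialized)

for dataset, files in restored.items():
    print(dataset, len(files))
```

Keeping every URL under the `root://` redirector prefix is what lets the downstream coffea processors open the files remotely via xrootd.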
