diff --git a/README.md b/README.md
index 979c1a6..b891f61 100644
--- a/README.md
+++ b/README.md
@@ -48,7 +48,13 @@ After getting invited to this private repo you can do the following:
         default: '1'
    ```
 
-4. Fire off a `curl` command to Github. Replace `` with the one you created in step one above. And replace `` with the one you created in step two above:
+4. Trigger your workflow
+
+   1. **Option 1:** Head to the [GH Action tab](https://github.com/NASA-IMPACT/veda-pforge-job-runner/actions). Select the job you want to run from the left-hand navigation, under "Actions". The current job name is "dispatch job". Since the "dispatch job" workflow has a `workflow_dispatch` trigger, you can select "Run workflow" and use the form to input suitable options.
+
+      Screenshot 2024-01-29 at 12 29 04 PM
+
+   2. **Option 2:** Fire off a `curl` command to GitHub. Replace `` with the one you created in step one above, and replace `` with the one you created in step two above:
 
    ```bash
    curl -X POST \
@@ -69,9 +75,9 @@ After getting invited to this private repo you can do the following:
    -d '{"ref":"main", "inputs":{"repo":"https://github.com/pforgetest/gpcp-from-gcs-feedstock.git","ref":"0.10.3","prune":"1"}}'
    ```
 
-5. Head to this repository's [GH Action tab](https://github.com/NASA-IMPACT/veda-pforge-job-runner/actions)
+6. Head to this repository's [GH Action tab](https://github.com/NASA-IMPACT/veda-pforge-job-runner/actions)
 
-6. If multiple jobs are running you can get help finding your job using the "Actor" filter
+7. If multiple jobs are running you can get help finding your job using the "Actor" filter
 
 ![](docs/img/xfilter_job.png)
 
@@ -84,4 +90,4 @@ After getting invited to this private repo you can do the following:
 ![](docs/img/xmonitor_job.png)
 
-9. Continue to come back to the "monitor" subjob to see if it passes or fails. In the future there will be some mild heuristics in place that should tell you why it fails based on what it sniffs in the logs. 
-For now it sniffs for the correct job status within a time limit of two hours
\ No newline at end of file
+9. Continue to come back to the "monitor" subjob to see if it passes or fails. In the future there will be some mild heuristics in place that should tell you why it fails based on what it sniffs in the logs. For now it sniffs for the correct job status within a time limit of two hours
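
The hunk gap above truncates the middle of the `curl` command (its headers and URL). As a hedged sketch of what "Option 2" calls, based on GitHub's standard `workflow_dispatch` REST endpoint — note that `WORKFLOW_FILE` is an assumed file name; the real one must be looked up under `.github/workflows/` in this repo:

```shell
# Hypothetical reconstruction of the step-4 dispatch call, not the verbatim README command.
GITHUB_TOKEN="<personal-access-token>"   # the token created in step one
OWNER_REPO="NASA-IMPACT/veda-pforge-job-runner"
WORKFLOW_FILE="dispatch.yaml"            # assumed name for the "dispatch job" workflow

# GitHub's "create a workflow dispatch event" endpoint.
DISPATCH_URL="https://api.github.com/repos/${OWNER_REPO}/actions/workflows/${WORKFLOW_FILE}/dispatches"
PAYLOAD='{"ref":"main","inputs":{"repo":"https://github.com/pforgetest/gpcp-from-gcs-feedstock.git","ref":"0.10.3","prune":"1"}}'

# Print the request rather than sending it; drop the leading `echo` to fire it for real.
# A successful dispatch returns HTTP 204 with an empty body.
echo curl -X POST \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer ${GITHUB_TOKEN}" \
  "${DISPATCH_URL}" \
  -d "${PAYLOAD}"
```

The `inputs` keys (`repo`, `ref`, `prune`) must match the `workflow_dispatch.inputs` declared in the workflow file, such as the `prune` input with `default: '1'` shown in the first hunk.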