
[WIP] First pass on serverless infosec-risk-management-bugzilla #42

Open · wants to merge 2 commits into master
Conversation

claudijd
Contributor

No description provided.

@claudijd claudijd changed the title First pass on serverless infosec-risk-management-bugzilla [WIP] First pass on serverless infosec-risk-management-bugzilla Oct 25, 2018
@claudijd
Contributor Author

I'll spend a little more time on this tomorrow, and I think within a short time we'll be able to switch to this.

The biggest functional change is moving from an orderly pickle-based assignee rotation to random selection. I think it cleans up the logic a lot, removes the need for a static state file, and also has a greater chance of actually being random (I don't trust pickle).

This also classes out a lot more of the functionality so we can test it better, and IMO it's just a cleaner read.

rra_service = config.service('rra')

Assigner(bmo_url, va_service, dry_run).assign()
Assigner(bmo_url, rra_service, dry_run).assign()
Contributor Author

We can move these into their own invocations so that the crons are independent, meaning if one breaks the others are isolated. The chances of VA breaking while RRA doesn't are low IMO, but keeping the service and CASA pieces separate, given their history, seems to make sense.

Contributor
@gdestuynder gdestuynder Nov 13, 2018

Note that they all depend on BMO; the integration saves on the number of requests, though they could also pass state around or just re-query.

However, what breaks most often is not us but the services we query, and when it's BMO, nothing works regardless.

Maybe we could work around the whole thing by using https://pypi.org/project/requests-cache/ so that we can share cache state and ride out short downtime (it will still fail if things stop working for longer than the 2 minutes or whatever we set the hard cache to).
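For illustration, a minimal sketch of the short-TTL caching idea (requests-cache would do this transparently at the `requests` layer; the `fetch` callable here is a hypothetical stand-in for the real BMO request):

```python
import time

class ShortTTLCache:
    """Sketch of a hard-TTL cache: reuse a recent response, and fall back
    to the last-known-good one if the upstream call fails."""

    def __init__(self, ttl_seconds=120):
        self.ttl = ttl_seconds
        self._store = {}  # url -> (fetched_at, response)

    def get(self, url, fetch):
        now = time.time()
        hit = self._store.get(url)
        if hit is not None and now - hit[0] < self.ttl:
            return hit[1]  # still fresh: no request made
        try:
            response = fetch(url)
        except Exception:
            if hit is not None:
                return hit[1]  # upstream down: serve stale rather than fail
            raise
        self._store[url] = (now, response)
        return response
```

With a hard TTL of around 2 minutes this rides out brief BMO hiccups but still fails loudly for longer outages, which matches the tradeoff described above.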

Contributor Author

IMO, the benefit doesn't warrant the complexity.

Contributor

The complexity of using requests-cache, or something else? If we duplicate BMO calls, that is more complex than keeping the current functions we have.

Contributor Author

I think my point here is that if BMO is down, I'm not sure it's worth trying to code around it; perhaps it's OK if it just fails. I can't imagine what kind of offline state we'd benefit from.

logger.warning('Project {} has no securityPrivacy.security component, skipping!'.format(project_id))
continue

def next_assignee(self):
    return random.choice(self.assignees)
Contributor Author
@claudijd claudijd Oct 25, 2018

Yes, this is crude, but I have a strong sense that this will be more random than the pickle queue logic (which I don't trust), and it removes state requirements entirely.
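To make that validation concrete, here is a small sketch (names hypothetical) that checks how close `random.choice` gets to a uniform split over many draws:

```python
import random
from collections import Counter

def draw_distribution(assignees, trials=10000, seed=0):
    # Empirically check the fairness of random selection: over many trials,
    # each assignee's share should approach 1/len(assignees).
    rng = random.Random(seed)
    counts = Counter(rng.choice(assignees) for _ in range(trials))
    return {a: counts[a] / trials for a in assignees}
```

This only demonstrates long-run uniformity; it says nothing about short-run clumping (one person drawing several assignments in a row).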

Contributor
@gdestuynder gdestuynder Nov 13, 2018

I would disagree with this. Please back up the trust issue with data and I'll fix it by using another approach, rather than use random, which is unfair to people: you could get 3 issues in a row and then nothing for a while, messing up your schedule.

Contributor Author

I will write some test code to validate my assumptions. That said, I think the pickle strategy will need to be restructured at some level for Lambda regardless, due to its lack of persistent storage, unless we want to introduce databases/S3/INSERT_OTHER_TECH into the mix.

This is, of course, still a work in progress, and the method is abstracted, so it should be easy to iterate on functionally or change strategies.
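One stateless alternative strategy that stays fair without pickle or a database, sketched under the assumption of at most one cron-driven assignment per day (function and argument names are hypothetical):

```python
import datetime

def next_assignee(assignees, today=None):
    # Derive the rotation index from the calendar day: deterministic and
    # evenly rotated, with no state file needed between runs.
    today = today or datetime.date.today()
    return assignees[today.toordinal() % len(assignees)]
```

Consecutive days walk the list in order, so nobody draws several assignments in a row; the tradeoff is the one-assignment-per-day assumption.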

Contributor

Yup, pickle cannot work in Lambda; DynamoDB is probably the easiest store to call from there.
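A hedged sketch of that DynamoDB option: an atomic `ADD` counter holds the rotation index, so the Lambda itself stays stateless. The key schema (`id`/`turn`) is hypothetical, and the table handle is injected; in Lambda, `table` would come from `boto3.resource('dynamodb').Table(...)`:

```python
def next_assignee_from_counter(assignees, table):
    # DynamoDB's ADD update action is atomic, so two concurrent runs
    # cannot be handed the same turn number.
    result = table.update_item(
        Key={'id': 'assignee-rotation'},
        UpdateExpression='ADD turn :one',
        ExpressionAttributeValues={':one': 1},
        ReturnValues='UPDATED_NEW',
    )
    return assignees[int(result['Attributes']['turn']) % len(assignees)]
```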

decisionLabel, decision = decision_map.get(bug_resolution)

# Set it
payload = {'delegatorId': delegator_id, 'decision': decision, 'decisionLabel': decisionLabel }
payload = {'delegatorId': delegator_id,
Contributor

please use 120col not 80 ;-)

Contributor Author

I think this was either my editor or me wanting to make it look nice; I can have a second look soonish.

@@ -0,0 +1,45 @@
import bugzilla
Contributor

\o/

@claudijd
Contributor Author

claudijd commented Nov 28, 2018

Remaining items on this, so that when I return to it, it's clear what's needed:

  • Adapt the credential manager to support more than just the default key manager (i.e., an app-specific key manager)

  • Break down and write some tests for the CASA class

  • Do some trial dry-run live tests

  • Peer review and acceptance of the dry-run live tests

  • Work out configs for prod/dev

  • Merge and deploy

  • Kill the old job runner
