[WIP] First pass on serverless infosec-risk-management-bugzilla #42
base: master
Conversation
I'll spend a little more time on this tomorrow, and I think within a short time we'll be able to switch to this. The biggest functional change is moving from an orderly assignee pickle rotation to a random selection. I think it cleans up the logic a lot, removes the need for a static state file, and also has a greater chance of being random (I don't trust pickle). It also classes out a lot more of the functionality so we can test it better, and IMO it's just a cleaner read.
rra_service = config.service('rra')

Assigner(bmo_url, va_service, dry_run).assign()
Assigner(bmo_url, rra_service, dry_run).assign()
We can move these into their own invocations so that the crons are independent, meaning if one breaks they are isolated from each other. The chances of VA breaking and RRA not are low IMO, but making the service and CASA things separate, given history, seems to make sense. A rough sketch of what that could look like is below.
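Sketch only: the handler names are hypothetical, while Assigner, config, bmo_url, and dry_run are the names this PR's entrypoint already uses.

# One handler per service so the scheduled runs are isolated from each
# other: a VA failure no longer blocks the RRA run, and vice versa.
def handle_va(event, context):
    va_service = config.service('va')
    Assigner(bmo_url, va_service, dry_run).assign()


def handle_rra(event, context):
    rra_service = config.service('rra')
    Assigner(bmo_url, rra_service, dry_run).assign()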
Note that they all depend on BMO; the integration saves on the number of requests, though they could also pass state around or just re-query.
However, what breaks most often is not us but the services we query, and when it's BMO, nothing works regardless.
Maybe we could work around the whole thing by using https://pypi.org/project/requests-cache/ so that we can share cache state and avoid failing on short downtimes (it will still fail if things stay broken for longer than 2 minutes, or whatever we set the hard cache expiry to). Something like the sketch below.
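A minimal sketch of that idea, assuming the PR's existing bmo_url value; the 120-second expiry is just the example window mentioned above and the endpoint is illustrative. Note that requests-cache's default sqlite backend is per-container in Lambda, so actually sharing cache state across invocations would need one of its shared backends.

import requests
import requests_cache

# Transparently cache BMO responses for a short window so duplicate queries
# within a run, or a brief outage, don't each have to hit the service.
requests_cache.install_cache('bmo_cache', expire_after=120)

resp = requests.get(bmo_url + '/rest/bug', params={'limit': 10})
print(resp.from_cache)  # True when the response came from the cache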
IMO, the benefit doesn't warrant the complexity.
The complexity of using requests-cache, or of something else?
If we duplicate BMO calls, that is more complex than the current functions we have.
I think my point here is that if BMO is down, I'm not sure it's worth trying to code around it and that perhaps it's ok if it just fails. I can't imagine what type of offline state we're benefiting from.
logger.warning('Project {} has no securityPrivacy.security component, skipping!'.format(project_id))
continue

def next_assignee(self):
    return random.choice(self.assignees)
Yes, this is crude, but I have a strong sense that this will be more random than pickle queue logic (which I don't trust) and will remove state requirements entirely.
I would disagree with this.
Please back up your trust issue with data and I'll fix it with another technique. Random selection is unfair to people: you could get 3 issues in a row and then nothing for a while, which messes up your schedule.
I will write some test code to validate my assumptions (a rough simulation is sketched below). That said, I think the pickle strategy will need to be restructured at some level for Lambda regardless, due to the lack of persistent storage, unless we want to introduce databases/S3/INSERT_OTHER_TECH into the mix.
This is, of course, still a work in progress, and the method is abstracted, so it should be easy to iterate on or swap strategies.
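For instance, a quick simulation along these lines (made-up assignee names, one pick per day for a year) shows how uneven random.choice gets and how long the back-to-back streaks run:

import random
from collections import Counter

assignees = ['alice', 'bob', 'carol', 'dave']
picks = [random.choice(assignees) for _ in range(365)]

# How evenly the picks are spread across people.
print(Counter(picks))

# Longest run of the same person being picked on consecutive days.
longest = streak = 1
for prev, cur in zip(picks, picks[1:]):
    streak = streak + 1 if cur == prev else 1
    longest = max(longest, streak)
print('longest repeat streak:', longest)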
Yup, pickle cannot work in Lambda; DynamoDB is probably the easiest thing to call from there. One possible shape for that is sketched below.
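If we keep the rotation, a sketch of that idea (the table name, key, and attribute are hypothetical, nothing in this PR defines them) is a single counter in DynamoDB advanced atomically on each run:

import boto3

def next_assignee(assignees, table_name='assigner-state'):
    # Atomically increment a shared counter and map it onto the rotation,
    # so repeated or concurrent runs never hand out the same slot twice.
    table = boto3.resource('dynamodb').Table(table_name)
    resp = table.update_item(
        Key={'id': 'rotation'},
        UpdateExpression='ADD idx :one',
        ExpressionAttributeValues={':one': 1},
        ReturnValues='UPDATED_NEW',
    )
    idx = int(resp['Attributes']['idx'])
    return assignees[idx % len(assignees)]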
For sanity's sake: https://gist.github.com/claudijd/15ccfe5f72841ff4bc2143728ae2ffd6
decisionLabel, decision = decision_map.get(bug_resolution)

# Set it
payload = {'delegatorId': delegator_id, 'decision': decision, 'decisionLabel': decisionLabel}
payload = {'delegatorId': delegator_id,
Please use 120 columns, not 80 ;-)
I think this was either my editor or me wanting to make it look nice; I can take a second look soonish.
@@ -0,0 +1,45 @@
import bugzilla
\o/
Remaining items on this, so that when I return to it, it's clear what's needed...