2024-06-25 | MAIN --> PROD | DEV (e00d9db) --> STAGING #4018
Conversation
Commits included in this pull request:

* Add secondary db to tf to run db_to_db backup
* add secondary db to local stack
* Update command in makefile
* bump s3 version and add tag
* add dev maintained s3
* update service bindings
* Remove binding steps from deploy
* Add historic data load
* Remove s3 bucket sharing: we opted to remove this as the decision was made that staging would have its own dedicated backups bucket, and if files in the prod s3 bucket are to be shared, we can create a new bucket for the specific purpose of syncing then sharing
* Add dedicated backups bucket in each environment
* give org_name for backups bucket
* Preliminary bash script for backups
* chmod +x
* File Rename
* Function Modifications
* Add restore script
* Add backup workflow
* fix typo
* chmod +x
* Small modifications to ensure util works properly
* Have the version be an input
* Update db2db operation
* Testing workflow
* scheduled_backup workflow test: version bump to v0.1.2 of util
* s3_restore workflow test
* db_restore workflow test
* Backup workflow: run via workflow_dispatch
* Quote to prevent globbing
* Delete: file no longer used
* Rename and replace workflow call: potentially going to delete
* Update pre-deploy backup call
* Add restore workflow
* New scheduled backup workflow: now with a matrix, for all environments
* CODEOWNERS update
* Add docs
* Point source to correct repo: though the redirect will still happen, the repo was moved to the gsa-tts org
* Version bump and modify backup logic
* Change folder path
* Add daily backup option
* Update verbiage and workflow options
* change pathing for s3 dumps
* deploy_backup task test
* scheduled_backup task test
* Increase task instance size
* scheduled_backup task test v2
* daily_backup task test
* typo fixes
* s3_restore task test
* s3_restore task test v2
* db_restore task test
* Final cleanup
* remove (restore test)
* typo fix
* remove restore workflows per discussion with matt/tim
* workflow cleanup and removal of unused items
* Fix a small rebase issue
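On the infrastructure side, these commits add one dedicated backups bucket and one secondary `fac-snapshot-db` RDS instance per environment, which is exactly what the Terraform plans below create. As a minimal sketch of how that module wiring might look: the module names mirror the plan addresses (`module.staging-backups-bucket`, `module.snapshot-database`), but the source paths and variable names here are assumptions for illustration, not the repository's actual code.

```hcl
# Hypothetical wiring sketch; names, tags, and the 50 GB storage value
# come from the plan output below, everything else is assumed.

module "staging-backups-bucket" {
  source   = "./modules/s3"   # assumed path
  name     = "backups"
  org_name = var.org_name     # per the "give org_name for backups bucket" commit
  tags     = ["s3"]
}

module "snapshot-database" {
  source      = "./modules/database"          # assumed path
  name        = "fac-snapshot-db"
  json_params = jsonencode({ storage = 50 })  # 50 GB, as in the plan
  tags        = ["rds"]
}
```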
jadudm added the autogenerated (Automated pull request creation) and automerge (Used for automated deployments) labels on Jun 25, 2024.
Terraform plan for staging

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# module.staging-backups-bucket.cloudfoundry_service_instance.bucket will be created
+ resource "cloudfoundry_service_instance" "bucket" {
+ id = (known after apply)
+ name = "backups"
+ replace_on_params_change = false
+ replace_on_service_plan_change = false
+ service_plan = "021bb2a3-7e11-4fc2-b06b-d9f5938cd806"
+ space = "7bbe587a-e8ee-4e8c-b32f-86d0b0f1b807"
+ tags = [
+ "s3",
]
}
# module.staging.module.snapshot-database.cloudfoundry_service_instance.rds will be created
+ resource "cloudfoundry_service_instance" "rds" {
+ id = (known after apply)
+ json_params = jsonencode(
{
+ storage = 50
}
)
+ name = "fac-snapshot-db"
+ replace_on_params_change = false
+ replace_on_service_plan_change = false
+ service_plan = "815c6069-289a-4444-ba99-40f0fa03a8f5"
+ space = "7bbe587a-e8ee-4e8c-b32f-86d0b0f1b807"
+ tags = [
+ "rds",
]
}
Plan: 2 to add, 0 to change, 0 to destroy.
Warning: Argument is deprecated
with module.staging-backups-bucket.cloudfoundry_service_instance.bucket,
on /tmp/terraform-data-dir/modules/staging-backups-bucket/s3/main.tf line 14, in resource "cloudfoundry_service_instance" "bucket":
14: recursive_delete = var.recursive_delete
Since CF API v3, recursive delete is always done on the cloudcontroller side.
This will be removed in future releases
(and 6 more similar warnings elsewhere)

✅ Plan applied in Deploy to Staging Environment #229
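The deprecation warning above concerns the `recursive_delete` argument on `cloudfoundry_service_instance`. A minimal sketch of the eventual cleanup, assuming the module currently threads the flag through a variable: the argument can simply be dropped, because since CF API v3 recursive deletes always happen on the Cloud Controller side.

```hcl
resource "cloudfoundry_service_instance" "bucket" {
  name         = "backups"
  space        = var.cf_space_id       # assumed variable name
  service_plan = var.s3_service_plan   # assumed variable name
  tags         = ["s3"]

  # Deprecated since CF API v3 and slated for removal; safe to delete:
  # recursive_delete = var.recursive_delete
}
```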
Terraform plan for production

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
~ update in-place
Terraform will perform the following actions:
# module.production.cloudfoundry_app.postgrest will be updated in-place
!~ resource "cloudfoundry_app" "postgrest" {
!~ docker_image = "ghcr.io/gsa-tts/fac/postgrest@sha256:4b903ac223ea5f583bd870a328a3cf54d19267e5b5abfff896863b37f7cb68b6" -> "ghcr.io/gsa-tts/fac/postgrest@sha256:08852a35ccf68490cf974e2b1a47d19480457c24b2244fa9f302ed785bd89462"
id = "70ac44be-3507-4867-a75f-c2d1ab12ee89"
name = "postgrest"
# (17 unchanged attributes hidden)
# (1 unchanged block hidden)
}
# module.production.module.clamav.cloudfoundry_app.clamav_api will be updated in-place
!~ resource "cloudfoundry_app" "clamav_api" {
!~ docker_image = "ghcr.io/gsa-tts/fac/clamav@sha256:b0e61e765f6c9a861cb8a4fbcfbd1df3e45fcbfa7cd78cd67c16d2e540d5301d" -> "ghcr.io/gsa-tts/fac/clamav@sha256:ba95b2eab2464f762071de942b60190be73c901a17a143b234ac3a53dc947d68"
id = "5d0afa4f-527b-472a-8671-79a60335417f"
name = "fac-av-production"
# (17 unchanged attributes hidden)
# (1 unchanged block hidden)
}
# module.production.module.file_scanner_clamav.cloudfoundry_app.clamav_api will be updated in-place
!~ resource "cloudfoundry_app" "clamav_api" {
!~ docker_image = "ghcr.io/gsa-tts/fac/clamav@sha256:b0e61e765f6c9a861cb8a4fbcfbd1df3e45fcbfa7cd78cd67c16d2e540d5301d" -> "ghcr.io/gsa-tts/fac/clamav@sha256:ba95b2eab2464f762071de942b60190be73c901a17a143b234ac3a53dc947d68"
id = "a14bb29f-8276-4967-9754-cf9c4187ebe3"
name = "fac-av-production-fs"
# (17 unchanged attributes hidden)
# (1 unchanged block hidden)
}
# module.production.module.https-proxy.cloudfoundry_app.egress_app will be updated in-place
!~ resource "cloudfoundry_app" "egress_app" {
id = "5e81ca8b-99cf-41f8-ae42-76652d51a44c"
name = "https-proxy"
!~ source_code_hash = "e246274fca627d48afccde010de949371f24b6c9974c48aa91044acd36654fa8" -> "9fcf4a7f6abfc9a220de2b8bb97591ab490a271ac0933b984f606f645319e1a4"
# (21 unchanged attributes hidden)
# (1 unchanged block hidden)
}
# module.production.module.newrelic.newrelic_alert_policy.alert_policy will be created
+ resource "newrelic_alert_policy" "alert_policy" {
+ account_id = (known after apply)
+ id = (known after apply)
+ incident_preference = "PER_POLICY"
+ name = "production-alert-policy"
}
# module.production.module.newrelic.newrelic_notification_channel.email_channel will be created
+ resource "newrelic_notification_channel" "email_channel" {
+ account_id = 3919076
+ active = true
+ destination_id = (known after apply)
+ id = (known after apply)
+ name = "production_email_notification_channel"
+ product = "IINT"
+ status = (known after apply)
+ type = "EMAIL"
+ property {
+ key = "subject"
+ value = "{{issueTitle}}"
# (2 unchanged attributes hidden)
}
}
# module.production.module.newrelic.newrelic_notification_destination.email_destination will be created
+ resource "newrelic_notification_destination" "email_destination" {
+ account_id = 3919076
+ active = true
+ guid = (known after apply)
+ id = (known after apply)
+ last_sent = (known after apply)
+ name = "email_destination"
+ status = (known after apply)
+ type = "EMAIL"
+ property {
+ key = "email"
+ value = "[email protected], [email protected], [email protected]"
# (2 unchanged attributes hidden)
}
}
# module.production.module.newrelic.newrelic_nrql_alert_condition.error_transactions will be created
+ resource "newrelic_nrql_alert_condition" "error_transactions" {
+ account_id = 3919076
+ aggregation_delay = "120"
+ aggregation_method = "event_flow"
+ aggregation_window = 60
+ enabled = true
+ entity_guid = (known after apply)
+ id = (known after apply)
+ name = "Error Transactions (%)"
+ policy_id = (known after apply)
+ type = "static"
+ violation_time_limit = (known after apply)
+ violation_time_limit_seconds = 259200
+ critical {
+ operator = "above"
+ threshold = 5
+ threshold_duration = 300
+ threshold_occurrences = "all"
}
+ nrql {
+ query = "SELECT percentage(count(*), WHERE error is true) FROM Transaction"
}
+ warning {
+ operator = "above"
+ threshold = 3
+ threshold_duration = 300
+ threshold_occurrences = "all"
}
}
# module.production.module.newrelic.newrelic_nrql_alert_condition.infected_file_found will be created
+ resource "newrelic_nrql_alert_condition" "infected_file_found" {
+ account_id = (known after apply)
+ aggregation_delay = "120"
+ aggregation_method = "event_flow"
+ aggregation_window = 60
+ enabled = true
+ entity_guid = (known after apply)
+ fill_option = "static"
+ fill_value = 0
+ id = (known after apply)
+ name = "Infected File Found!"
+ policy_id = (known after apply)
+ type = "static"
+ violation_time_limit = (known after apply)
+ violation_time_limit_seconds = 259200
+ critical {
+ operator = "above_or_equals"
+ threshold = 1
+ threshold_duration = 300
+ threshold_occurrences = "at_least_once"
}
+ nrql {
+ query = "SELECT count(*) FROM Log WHERE tags.space_name ='production' and message LIKE '%ScanResult.INFECTED%'"
}
}
# module.production.module.newrelic.newrelic_one_dashboard.search_dashboard will be created
+ resource "newrelic_one_dashboard" "search_dashboard" {
+ account_id = (known after apply)
+ guid = (known after apply)
+ id = (known after apply)
+ name = "Search Dashboard (production)"
+ permalink = (known after apply)
+ permissions = "public_read_only"
+ page {
+ guid = (known after apply)
+ name = "Search"
+ widget_billboard {
+ column = 1
+ height = 3
+ id = (known after apply)
+ legend_enabled = true
+ row = 1
+ title = "Searches Per Hour"
+ width = 3
+ nrql_query {
+ account_id = (known after apply)
+ query = "SELECT count(*) as 'Total', rate(count(*), 1 minute) as 'Per Minute' FROM Transaction where request.uri like '%/dissemination/search%' and request.method = 'POST' and appName = 'gsa-fac-production' since 1 hours AGO COMPARE WITH 1 week ago"
}
}
+ widget_line {
+ column = 4
+ height = 3
+ id = (known after apply)
+ legend_enabled = true
+ row = 1
+ title = "Search Traffic"
+ width = 6
+ nrql_query {
+ account_id = (known after apply)
+ query = "SELECT count(*) FROM Transaction where request.uri like '%/dissemination/search%' and request.method = 'POST' and appName = 'gsa-fac-production' since 4 hours AGO COMPARE WITH 1 week ago TIMESERIES"
}
}
+ widget_line {
+ column = 1
+ height = 3
+ id = (known after apply)
+ legend_enabled = true
+ row = 2
+ title = "Search Response Time"
+ width = 6
+ nrql_query {
+ account_id = (known after apply)
+ query = "FROM Metric SELECT average(newrelic.timeslice.value) WHERE appName = 'gsa-fac-production' WITH METRIC_FORMAT 'Custom/search' TIMESERIES SINCE 1 day ago COMPARE WITH 1 week ago"
}
}
}
}
# module.production.module.newrelic.newrelic_workflow.alert_workflow will be created
+ resource "newrelic_workflow" "alert_workflow" {
+ account_id = (known after apply)
+ destinations_enabled = true
+ enabled = true
+ enrichments_enabled = true
+ guid = (known after apply)
+ id = (known after apply)
+ last_run = (known after apply)
+ muting_rules_handling = "DONT_NOTIFY_FULLY_MUTED_ISSUES"
+ name = "production_alert_workflow"
+ workflow_id = (known after apply)
+ destination {
+ channel_id = (known after apply)
+ name = (known after apply)
+ notification_triggers = (known after apply)
+ type = (known after apply)
}
+ issues_filter {
+ filter_id = (known after apply)
+ name = "filter"
+ type = "FILTER"
+ predicate {
+ attribute = "labels.policyIds"
+ operator = "EXACTLY_MATCHES"
+ values = (known after apply)
}
}
}
# module.production.module.snapshot-database.cloudfoundry_service_instance.rds will be created
+ resource "cloudfoundry_service_instance" "rds" {
+ id = (known after apply)
+ json_params = jsonencode(
{
+ storage = 50
}
)
+ name = "fac-snapshot-db"
+ replace_on_params_change = false
+ replace_on_service_plan_change = false
+ service_plan = "58b899e8-eb36-441f-b406-d2f5b1e49c00"
+ space = "5593dba8-7023-49a5-bdbe-e809fe23edf9"
+ tags = [
+ "rds",
]
}
Plan: 8 to add, 4 to change, 0 to destroy.
Warning: Argument is deprecated
with module.domain.cloudfoundry_service_instance.external_domain_instance,
on /tmp/terraform-data-dir/modules/domain/domain/main.tf line 45, in resource "cloudfoundry_service_instance" "external_domain_instance":
45: recursive_delete = var.recursive_delete
Since CF API v3, recursive delete is always done on the cloudcontroller side.
This will be removed in future releases
(and 6 more similar warnings elsewhere)

📝 Plan generated in Pull Request Checks #3225
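Most of the additions in the production plan are the New Relic alerting chain: an alert policy, NRQL conditions attached to it, an email destination, a channel pointing at that destination, and a workflow that routes issues from the policy to the channel. A minimal sketch of that wiring is below; attribute values are taken from the plan output above, while the resource-to-resource references are assumptions about how the module connects them (the plan only shows "known after apply" for those).

```hcl
# Hypothetical sketch of the alerting chain created by this plan.

resource "newrelic_alert_policy" "alert_policy" {
  name                = "production-alert-policy"
  incident_preference = "PER_POLICY"
}

resource "newrelic_nrql_alert_condition" "error_transactions" {
  policy_id = newrelic_alert_policy.alert_policy.id
  name      = "Error Transactions (%)"
  type      = "static"

  nrql {
    query = "SELECT percentage(count(*), WHERE error is true) FROM Transaction"
  }

  critical {
    operator              = "above"
    threshold             = 5
    threshold_duration    = 300
    threshold_occurrences = "all"
  }
}

resource "newrelic_notification_destination" "email_destination" {
  name = "email_destination"
  type = "EMAIL"

  property {
    key   = "email"
    value = "[email protected], [email protected], [email protected]"  # addresses redacted in the plan
  }
}

resource "newrelic_notification_channel" "email_channel" {
  name           = "production_email_notification_channel"
  type           = "EMAIL"
  destination_id = newrelic_notification_destination.email_destination.id
  product        = "IINT"

  property {
    key   = "subject"
    value = "{{issueTitle}}"
  }
}

resource "newrelic_workflow" "alert_workflow" {
  name                  = "production_alert_workflow"
  muting_rules_handling = "DONT_NOTIFY_FULLY_MUTED_ISSUES"

  issues_filter {
    name = "filter"
    type = "FILTER"

    predicate {
      attribute = "labels.policyIds"
      operator  = "EXACTLY_MATCHES"
      values    = [newrelic_alert_policy.alert_policy.id]
    }
  }

  destination {
    channel_id = newrelic_notification_channel.email_channel.id
  }
}
```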
Coverage check generated by 🐒 cobertura-action against e00d9db
This is an auto-generated pull request to merge main into prod for a staging release on 2024-06-25, with the last commit being merged as e00d9db.