
[2023-10-20] main -> prod #2565

Merged
merged 4 commits into prod from main on Oct 20, 2023

Conversation

jadudm
Contributor

@jadudm jadudm commented Oct 20, 2023

PR checklist: submitters

  • Link to an issue if possible. If there’s no issue, describe what your branch does. Even if there is an issue, a brief description in the PR is still useful.
  • List any special steps reviewers have to follow to test the PR. For example, adding a local environment variable, creating a local test file, etc.
  • For extra credit, submit a screen recording like this one.
  • Make sure you’ve merged main into your branch shortly before creating the PR. (You should also be merging main into your branch regularly during development.)
  • Make sure you’ve accounted for any migrations. When you’re about to create the PR, bring up the application locally and then run `git status | grep migrations`. If there are any results, you probably need to add them to the branch for the PR. Your PR should have only one new migration file for each of the component apps, except in rare circumstances; you may need to delete some and re-run `python manage.py makemigrations` to reduce the number to one. (Also, except in exceptional circumstances, your PR should not delete any migration files.)
  • Make sure that whatever feature you’re adding has tests that cover the feature. This includes test coverage to make sure that the previous workflow still works, if applicable.
  • Make sure the full-submission.cy.js Cypress test passes, if applicable.
  • Do manual testing locally. Our tests are not good enough yet to allow us to skip this step. If that’s not applicable for some reason, check this box.
  • Verify that no Git surgery was necessary, or, if it was necessary at any point, repeat the testing after it’s finished.
  • Once a PR is merged, keep an eye on it until it’s deployed to dev, and do enough testing on dev to verify that it deployed successfully, the feature works as expected, and the happy path for the broad feature area (such as submission) still works.

PR checklist: reviewers

  • Pull the branch to your local environment and run `make docker-clean; make docker-first-run && docker compose up`; then run `docker compose exec web /bin/bash -c "python manage.py test"`
  • Manually test out the changes locally, or check this box to verify that it wasn’t applicable in this case.
  • Check that the PR has appropriate tests. Look out for changes in HTML/JS/JSON Schema logic that may need to be captured in Python tests even though the logic isn’t in Python.
  • Verify that no Git surgery is necessary at any point (such as during a merge party), or, if it was, repeat the testing after it’s finished.

The larger the PR, the stricter we should be about these points.

asteel-gsa and others added 3 commits October 20, 2023 12:36
* Fixes logic in check_federal_program_total

The check was being applied against the federal_program_total line.

However, it was summing all expenditures and expecting to find the sum
of everything in `amount_expended` to appear in `federal_program_total`.

What the check wanted to do was to compare against
`total_federal_expenditures`. I suspect this is because I did not ticket
the issue clearly.

This makes the check do the right thing.
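
For illustration, a minimal sketch of the corrected comparison (the function and field names here are assumptions, not the repo's actual code):

```python
# Hypothetical illustration of the corrected check: the sum of every
# award's amount_expended should match the workbook-level
# total_federal_expenditures, not any single federal_program_total cell.
def check_total_federal_expenditures(awards, total_federal_expenditures):
    summed = sum(award["amount_expended"] for award in awards)
    if summed != total_federal_expenditures:
        return (
            f"Sum of amount_expended ({summed}) does not equal "
            f"total_federal_expenditures ({total_federal_expenditures})"
        )
    return None  # no error


if __name__ == "__main__":
    awards = [{"amount_expended": 100_000}, {"amount_expended": 250_000}]
    assert check_total_federal_expenditures(awards, 350_000) is None
    assert check_total_federal_expenditures(awards, 300_000) is not None
```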

* Fixes my fix.

I was confused.

I forgot that an outcome from earlier was that we needed to fix the
workbook generator.

Therefore, I confused myself on the check_federal_program_total code.

I reverted my adventure there (leaving it as-is from HSMD's work), and
instead fixed the workbook generator to now create/insert the `cfda_key`
column.

I regenerated 100010 to verify that this now works. And, the end-to-end
code now works as well (again), because it does the right thing.

test_commands is a casualty. It is testing a fixture command that is
aging, and I think that end-to-end and the workbook generator will
replace it.

* That was too easy.

I spent too much time with date formats.

* Linting.

* Before main sync

* Adds auditor cert, fixes E2E regression

The text on the table changed, so the regression had to change. I've
switched to a regex instead of the full contents of the table caption.

This also makes the E2E workbook test code actually move audits into the
"Accepted" table.

This preps the E2E code for use in actual regression testing.

* Linting.

* Apply suggestions from code review

Accepting cleanup suggestions.

Co-authored-by: Phil Dominguez <[email protected]>

* Removing a test.

---------

Co-authored-by: Phil Dominguez <[email protected]>
@jadudm jadudm requested a review from a team October 20, 2023 14:03
@jadudm jadudm temporarily deployed to staging October 20, 2023 14:03 — with GitHub Actions Inactive
@jadudm jadudm temporarily deployed to production October 20, 2023 14:03 — with GitHub Actions Inactive
@github-actions
Contributor

github-actions bot commented Oct 20, 2023

Terraform plan for production

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration
and found no differences, so no changes are needed.

Warning: Argument is deprecated

  with module.domain.cloudfoundry_service_instance.external_domain_instance,
  on /tmp/terraform-data-dir/modules/domain/domain/main.tf line 45, in resource "cloudfoundry_service_instance" "external_domain_instance":
  45:   recursive_delete = var.recursive_delete

Since CF API v3, recursive delete is always done on the cloudcontroller side.
This will be removed in future releases

(and 3 more similar warnings elsewhere)

✅ Plan applied in Deploy to Production Environment #24

@github-actions
Contributor

github-actions bot commented Oct 20, 2023

Terraform plan for staging

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration
and found no differences, so no changes are needed.

Warning: Argument is deprecated

  with module.staging.module.database.cloudfoundry_service_instance.rds,
  on /tmp/terraform-data-dir/modules/staging.database/database/main.tf line 14, in resource "cloudfoundry_service_instance" "rds":
  14:   recursive_delete = var.recursive_delete

Since CF API v3, recursive delete is always done on the cloudcontroller side.
This will be removed in future releases

(and 2 more similar warnings elsewhere)

✅ Plan applied in Deploy to Staging Environment #71

@github-actions
Contributor

github-actions bot commented Oct 20, 2023

File Coverage Missing
All files 86%
api/serializers.py 88% 177-178 183 188
api/test_views.py 96% 105
api/uei.py 87% 17-18 87 119-120 164 168-169
api/views.py 97% 196-197 204-205 226 362-363
audit/forms.py 47% 22-29 142-149
audit/intake_to_dissemination.py 92% 67-68 201-207 257
audit/models.py 86% 57 59 64 66 213 246 419 437-438 446 468 544-545 549 557 566 572
audit/test_commands.py 87%
audit/test_mixins.py 90% 112-113 117-119 184-185 189-191
audit/test_validators.py 95% 436 440 608-609 848 855 862 869
audit/test_workbooks_should_fail.py 85% 56 83-84 88
audit/test_workbooks_should_pass.py 90% 56 81
audit/utils.py 70% 13 21 33-35 38
audit/validators.py 94% 137 189 288-289 304-305 486-490 495-499 515-524
audit/views.py 42% 87-108 131-132 206-207 252-253 264-265 267-271 318-331 334-348 353-366 383-389 394-414 441-445 450-479 522-526 531-551 578-582 587-616 659-663 668-680 683-693 698-710 737-738 743-792 795-835 838-855
audit/cross_validation/additional_ueis.py 93% 33
audit/cross_validation/check_award_ref_declaration.py 90%
audit/cross_validation/check_award_reference_uniqueness.py 93%
audit/cross_validation/check_certifying_contacts.py 87%
audit/cross_validation/check_findings_count_consistency.py 91%
audit/cross_validation/check_ref_number_in_cap.py 90%
audit/cross_validation/check_ref_number_in_findings_text.py 90%
audit/cross_validation/errors.py 78% 30 69
audit/cross_validation/naming.py 68% 178-182
audit/cross_validation/submission_progress_check.py 92% 62 79
audit/cross_validation/tribal_data_sharing_consent.py 81% 33 36 40
audit/cross_validation/validate_general_information.py 93% 28-29
audit/fixtures/single_audit_checklist.py 55% 146-183 229-238
audit/intakelib/exceptions.py 71% 7-9 12
audit/intakelib/intermediate_representation.py 94% 23-24 87 125 158 182-185
audit/intakelib/mapping_audit_findings.py 97% 51
audit/intakelib/mapping_audit_findings_text.py 97% 51
audit/intakelib/mapping_federal_awards.py 93% 95
audit/intakelib/mapping_util.py 40% 28 32 36 70-99 104 124-135 142-158 162-167 171-185 190 195-215 220-237 258 263-264 273-279 289 304 309
audit/intakelib/checks/check_all_unique_award_numbers.py 79% 24
audit/intakelib/checks/check_aln_three_digit_extension_pattern.py 73% 27 36
audit/intakelib/checks/check_cardinality_of_passthrough_names_and_ids.py 91%
audit/intakelib/checks/check_cluster_name_always_present.py 82% 21
audit/intakelib/checks/check_cluster_total.py 87% 46
audit/intakelib/checks/check_federal_award_passed_always_present.py 82% 18
audit/intakelib/checks/check_findings_grid_validation.py 84% 57
audit/intakelib/checks/check_is_a_workbook.py 68% 16
audit/intakelib/checks/check_loan_guarantee.py 90% 51
audit/intakelib/checks/check_look_for_empty_rows.py 91% 18
audit/intakelib/checks/check_missing_award_numbers.py 72% 16 22-23
audit/intakelib/checks/check_no_major_program_no_type.py 72% 22 31 40
audit/intakelib/checks/check_no_repeat_findings.py 76% 17 26
audit/intakelib/checks/check_other_cluster_names.py 81% 24 34
audit/intakelib/checks/check_passthrough_name_when_no_direct.py 88% 9 47
audit/intakelib/checks/check_sequential_award_numbers.py 76% 14 22
audit/intakelib/checks/check_start_and_end_rows_of_all_columns_are_same.py 89% 14
audit/intakelib/checks/check_state_cluster_names.py 65% 23-24 34
audit/intakelib/checks/check_uei_exists.py 65% 17-18
audit/intakelib/checks/runners.py 90% 120 126
audit/intakelib/checks/util.py 84% 16 33 38
audit/management/commands/load_fixtures.py 46% 39-45
audit/viewlib/submission_progress_view.py 89% 111 171-172
audit/viewlib/tribal_data_consent.py 34% 23-41 44-79
audit/viewlib/upload_report_view.py 26% 32-35 44 91-117 120-170 178-209
cms/views.py 57% 11-16 29-30
config/urls.py 71% 87
dissemination/models.py 99% 460
dissemination/migrations/0002_general_fac_accepted_date.py 47% 10-12
djangooidc/backends.py 78% 32 57-63
djangooidc/exceptions.py 66% 19 21 23 28
djangooidc/oidc.py 16% 32-35 45-51 64-70 92-149 153-199 203-226 230-275 280-281 286
djangooidc/views.py 80% 22 43 114
djangooidc/tests/common.py 96%
report_submission/forms.py 92% 35
report_submission/views.py 76% 83 215-216 218 240-241 260-261 287-396 399-409
report_submission/templatetags/get_attr.py 76% 8 11-14 18
support/admin.py 88% 76 79 84 91-97 100-102
support/cog_over.py 90% 30-33 86 93 145
support/signals.py 66% 23-24 33-34
support/test_cog_over.py 98% 134-135 224
support/management/commands/seed_cog_baseline.py 98% 20-21
tools/update_program_data.py 89% 96
users/auth.py 95% 40-41
users/models.py 97% 51-52
users/fixtures/user_fixtures.py 91%

Minimum allowed coverage is 90%

Generated by 🐒 cobertura-action against 92dec12

@github-merge-queue github-merge-queue bot temporarily deployed to dev October 20, 2023 14:16 Inactive
@github-merge-queue github-merge-queue bot temporarily deployed to meta October 20, 2023 14:16 Inactive
@github-merge-queue github-merge-queue bot temporarily deployed to meta October 20, 2023 14:36 Inactive
@github-merge-queue github-merge-queue bot temporarily deployed to dev October 20, 2023 14:36 Inactive
@github-merge-queue github-merge-queue bot temporarily deployed to dev October 20, 2023 14:36 Inactive
* This... validated two workbooks...

Huh. That worked.

* Fixes an extraction bug

When a cell has a None value, it should become "" in the JSON.
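
A minimal sketch of the idea, with an illustrative helper name rather than the repo's actual function:

```python
# Hypothetical sketch: when extracting cell values from the workbook,
# a None cell value is emitted as "" so the JSON stays string-typed.
def cell_to_json_value(value):
    return "" if value is None else value


assert cell_to_json_value(None) == ""
assert cell_to_json_value("12-345") == "12-345"
```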

* Broke things apart.

This made no changes other than to take the previous work and break it
into semantically meaningful files/pieces.

Now, each workbook has a file.

The IR work has its own file.

* Small improvements, merging main.

* Passes regression tests.

* Passes all regressions, demonstrates new checks

This commit now passes all workbooks through the IR.

The README is started, but not complete.

It demonstrates the authoring of checks in keeping with the
cross-validations. The errors generated pass through to the frontend
correctly, and visualize like all other workbook upload errors.

A no-op check is provided. It always passes.

A check for a missing UEI is provided. It correctly stops empty UEIs. If
a UEI is present, but does not match the regex, it passes through the
IR check, and is caught by the schema validation.
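
A minimal sketch of that behavior (the message text and function shape are illustrative, not the actual `check_uei_exists` implementation):

```python
# Illustrative only: stop empty UEIs at the IR layer; a present but
# malformed UEI is deliberately allowed through so that the JSON Schema
# validation reports it instead.
def check_uei_exists(uei):
    if uei is None or str(uei).strip() == "":
        return "The UEI is missing. Please enter the auditee's UEI."
    return None  # present; format is validated later by the schema


assert check_uei_exists("") is not None
assert check_uei_exists("ZQGGHJH74DW7") is None   # well-formed, passes
assert check_uei_exists("not-a-real-uei") is None  # malformed, schema catches it
```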

* Fixes state/other cluster errors

This takes a recent helpdesk ticket and demonstrates how to improve the
error messages.

In this case, a user had

N/A N/A N/A

for cluster name, state cluster name, and other cluster name.

This causes horrible JSON Schema validation errors.

Now, we tell them exactly what they need to do in order to fix the
workbook.

Once the instructions are followed, the user's workbook is fixed, and it
passes JSON Schema validation.

* is_direct and passthrough names

This adds two checks:

1. The user failed to include the is_direct value(s)
2. The user failed to include a passthrough name when is_direct is `N`.
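
A rough sketch of the two rules, with assumed field names:

```python
# Illustrative sketch (field names are assumptions):
# 1) is_direct must be present for every award row;
# 2) when is_direct == "N", a passthrough name must be supplied.
def check_direct_and_passthrough(row):
    is_direct = (row.get("is_direct") or "").strip().upper()
    if not is_direct:
        return "is_direct is missing for this award row."
    if is_direct == "N" and not (row.get("passthrough_name") or "").strip():
        return "A passthrough name is required when is_direct is N."
    return None


assert check_direct_and_passthrough({"is_direct": "Y"}) is None
assert check_direct_and_passthrough({"is_direct": "N"}) is not None
assert check_direct_and_passthrough(
    {"is_direct": "N", "passthrough_name": "State Dept. of Health"}
) is None
```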

* Adds a loan balance check

The JSON Schema validation for this was confusing.

When N, we expect nothing in the balance column.

When Y, we expect a balance.

This adds a check that provides a clear error message.
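
A rough sketch of the rule (column names assumed):

```python
# Illustrative sketch: when the guarantee flag is "N", the loan balance
# cell must be empty; when "Y", a balance is required.
def check_loan_balance(is_guaranteed, balance):
    if is_guaranteed == "N" and balance not in (None, ""):
        return "Loan guarantee is N, so the balance column must be empty."
    if is_guaranteed == "Y" and balance in (None, ""):
        return "Loan guarantee is Y, so a balance is required."
    return None


assert check_loan_balance("N", "") is None
assert check_loan_balance("N", 5000) is not None
assert check_loan_balance("Y", 5000) is None
assert check_loan_balance("Y", "") is not None
```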

* This checks that the workbook is right

This adds up-front checks for:

1. Making sure we are getting a FAC workbook (it checks for a
   Coversheet)
2. Making sure it is the right workbook for the given validation
   section

Now runs notes-to-sefa-specific checks.
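
A minimal sketch of the two identity checks, assuming a much simpler intermediate representation than the real one:

```python
# Illustrative sketch: the IR here is just a list of sheet dicts with
# "name" and "section_name" keys; the repo's actual IR is richer.
def check_is_a_workbook(ir):
    if not any(sheet.get("name") == "Coversheet" for sheet in ir):
        return "This file does not look like a FAC workbook (no Coversheet)."
    return None


def check_right_workbook(ir, expected_section):
    # Assumes check_is_a_workbook has already confirmed a Coversheet exists.
    coversheet = next(s for s in ir if s.get("name") == "Coversheet")
    if coversheet.get("section_name") != expected_section:
        return (
            f"Expected a {expected_section} workbook, "
            f"but got {coversheet.get('section_name')}."
        )
    return None


ir = [{"name": "Coversheet", "section_name": "NotesToSefa"}]
assert check_is_a_workbook(ir) is None
assert check_right_workbook(ir, "FederalAwards") is not None
```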

* Ran black.

However, I might have run the wrong version.

* Fixing import, removing excel.

Not needed anymore, and confuses the linter.

* Handles the is_major and audit_report_type

This Y/N pair breaks in JSON Schemas.

Better error reporting on this pair of columns.

* Adds an award finding check

Disambiguates the Y/N for prior reference numbers.

* Checks for empty rows in the middle of the data.

How do people do these things?
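
For illustration, one way such a check can work (this is a sketch, not the repo's `check_look_for_empty_rows`):

```python
# Illustrative sketch: flag an empty row that appears between rows
# containing data (rows here are lists of cell values).
def find_interior_empty_rows(rows):
    non_empty = [
        i for i, row in enumerate(rows) if any(c not in (None, "") for c in row)
    ]
    if not non_empty:
        return []
    first, last = non_empty[0], non_empty[-1]
    return [i for i in range(first, last + 1) if i not in non_empty]


# Row 1 is empty but sits between data rows, so it is flagged;
# the trailing empty row is not.
assert find_interior_empty_rows([["a"], [""], ["b"], [""]]) == [1]
```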

* Checks award numbers

1. Makes sure they are all unique
2. Makes sure they're in the right sequence

Removed some logging from no_major_program_type
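
A sketch of both rules, assuming the AWARD-NNNN reference format that appears later in this PR:

```python
# Illustrative sketch: award references must be unique and must run
# AWARD-0001, AWARD-0002, ... with no gaps, reordering, or extra suffixes.
import re


def check_award_numbers(refs):
    errors = []
    if len(set(refs)) != len(refs):
        errors.append("Award references are not unique.")
    for i, ref in enumerate(refs, start=1):
        match = re.fullmatch(r"AWARD-(\d{4})", ref)
        if not match or int(match.group(1)) != i:
            errors.append(f"Expected AWARD-{i:04d} at position {i}, found {ref}.")
    return errors


assert check_award_numbers(["AWARD-0001", "AWARD-0002"]) == []
assert check_award_numbers(["AWARD-0001", "AWARD-0003"]) != []
assert check_award_numbers(["AWARD-0001-EXTRA"]) != []
```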

* Checking for blank fields

1. Cannot have blank findings counts
2. Cannot have blank cluster names

When these are blank, they sometimes get through to the schema, and then
really bad errors come back.

* Moving import to a better place w.r.t. the list.

All of these probably have to go to the top of the file for the linter.
It made sense to me to put them near the lists I was building for now.

* Added other half of passthrough names

Handles more workbook cases now.

* Replaced some print statements

Can't chase down an error with a weird... Excel issue where fonts are
involved.

* Ready to fix tests...

Changing to the IR broke a lot of tests.

* Fixing more missing columns

ZD 114 had many fields missing; so many that it would make it through
the improved checks and still fail the schema.

Now, 114 would be guided through the submission by these errors. Their
workbook can be started in the errored state, and the error messages
will guide them to a valid workbook.

* Adds transforms

The first transform is on Notes to SEFA.

We have an invisible column called seq_number. Users somehow manage to
delete things in a way that the cell ends up with a `#REF!` instead of a
number. The solution is to replace the column with values that are
generated computationally on the server side.

It is not clear whether we even use this value.
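
A minimal sketch of the transform, assuming a simple list-of-dicts row representation:

```python
# Illustrative sketch: ignore whatever is in the seq_number column
# (it may contain #REF! after user edits) and replace it with values
# generated on the server side.
def regenerate_seq_numbers(rows):
    for i, row in enumerate(rows, start=1):
        row["seq_number"] = i
    return rows


rows = [{"seq_number": "#REF!", "note": "a"}, {"seq_number": None, "note": "b"}]
assert [r["seq_number"] for r in regenerate_seq_numbers(rows)] == [1, 2]
```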

* Using section names, adding prechecks

Broke the general checks out, so they can run before transforms.

Some transforms may have to run before. (I found one in notes-to-sefa.)

So, we need to make sure we're in the right workbook. Then we can clean
up the data. Then we can begin checking it for semantics. Then we can
rewrite it into a JSON doc.

So.

Those changes are in this commit, and some tightening on the passthrough
names. All inspired by the same workbook...

* Adding the Audit Finding grid check

As a backstop to the schemas, adding in the allowed grid from the UG.
Makes sure the Y/N combos are allowed.
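
A rough sketch of the approach; the allowed combinations below are placeholders, not the actual grid from the Uniform Guidance:

```python
# Illustrative sketch: validate each row's Y/N combination against an
# allow-list derived from the UG grid. These combos are placeholders.
ALLOWED_COMBOS = {
    ("Y", "N", "N"),  # placeholder combination
    ("N", "Y", "N"),  # placeholder combination
}


def check_findings_grid(rows):
    errors = []
    for i, row in enumerate(rows, start=1):
        combo = (
            row["modified_opinion"],
            row["material_weakness"],
            row["other_findings"],
        )
        if combo not in ALLOWED_COMBOS:
            errors.append(f"Row {i}: the combination {combo} is not permitted.")
    return errors


assert check_findings_grid(
    [{"modified_opinion": "Y", "material_weakness": "N", "other_findings": "N"}]
) == []
```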

* Minor cleanups/removals

Comments and printlns that made it through.

* Bringing in workbook E2E code.

* Adds workbooks to validate with, tests

This now tests many workbooks.

It runs them through the new generic importer and the JSON validator.

If they validate, it is good. If not, it fails.

Next is to add explicit failing tests.

* Adding a failure test

This walks workbooks that are constructed to fail.

The test runs all of them, and counts the failures.

It should be the case that every workbook in the directory fails.
Therefore, we count the workbooks and count the failures.

If they come out different, clearly, we did not fail everywhere we
expected.
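
A sketch of the counting logic, with an assumed directory layout and an assumed `validate_workbook` callable:

```python
# Illustrative sketch of the failing-workbook test: every file in the
# "should fail" directory must raise a validation error, so the number
# of failures must equal the number of workbooks found.
from pathlib import Path


def count_failures(directory, validate_workbook):
    workbooks = sorted(Path(directory).glob("*.xlsx"))
    failures = 0
    for workbook in workbooks:
        try:
            validate_workbook(workbook)
        except Exception:
            failures += 1
    return len(workbooks), failures


# In a unit test one would assert the two counts are equal:
# total, failed = count_failures("audit/fixtures/should_fail", validate_workbook)
# assert total == failed
```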

* Confirmed fails correctly with... correct wb

Placing a correct workbook in the directory does the job.

* Adding a breadcrumb

The naming in the fail directory structure matters.

* Adding another breadcrumb.

* Removing print statements

* Adds full check paths everywhere

This adds the full check path everywhere.

It also adds a new failure check for CAP, to make sure the general
validations are running at the start.

* Linting.

* More linting.

* Linting.

* These are needed for tests in the tree.

The linter says they have to go.

* Removing more prints

There's one I can't find.

Also, I'm not going to be able to satisfy the linter. I give up for now.

* Some unit tests, removing an unused function

From some tests off to the side earlier.

* Linting.

My version of black does not match the team's.

Our docs do not make clear to me how I should be running the linter so
as to match the online environment.

* This runs end-to-end with cross-val

I didn't realize cross-val was baked into the `sac` object.

This now runs cross-validation on the workbooks when the E2E code is
run.

* Trying to work into unit tests

Can't quite, because it wants to write to the real DB (vs. a mocked DB).

For now, this will have to be a future thing.

* Updates from initial review.

Expanding variable names, adding comments.

TBD some more unit tests.

* Fixing test books.

* Fixing error introduced through simplification

Forgot that the *range* is needed for error message construction, not
just the *values* from the range.

* Fixed.

* Removing a period from an error message.

* Updated workbook versions and updated federalAwards template formula in column J to prevent endless non-zero values

* Linting

* Necessary change to prevent an award reference like AWARD-0001-EXTRA from passing

* Necessary change to prevent check_sequential_award_numbers from crashing

* More linting

* Linting ...

* Code update to make mypy happy

* Adding a new transform for the EINs

They all need to be strings.
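
A minimal sketch of the coercion, assuming values can arrive from the workbook as ints, floats, or strings:

```python
# Illustrative sketch: Excel often hands EINs back as integers (or
# floats), so the transform coerces every value in the EIN column to a
# string before the JSON is built.
def eins_to_strings(values):
    out = []
    for value in values:
        if value is None or value == "":
            out.append("")
        elif isinstance(value, float) and value.is_integer():
            out.append(str(int(value)))  # 521234567.0 -> "521234567"
        else:
            out.append(str(value))
    return out


assert eins_to_strings([521234567, 521234567.0, "52-1234567", None]) == [
    "521234567", "521234567", "52-1234567", ""
]
```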

* #2499 Added Wrap Text and Updated Workbook Version and Removed seq_number from NotesToSefa

* #2499 Updated code to set row height

* #2499 Updated row height in workbook templates

* #2499 Removed obsolete code

* #2499 seq_number was removed from the NotesToSefa section by mistake; this commit restores it

* #2499 Renamed AdditionalNotes sheet and added patch to ensure backwards compatibility with workbook templates 1.0.0 and 1.0.1

* #2499 Updated output

---------

Co-authored-by: Matt Jadud <[email protected]>
@github-merge-queue github-merge-queue bot temporarily deployed to production October 20, 2023 15:28 Inactive
@github-merge-queue github-merge-queue bot temporarily deployed to staging October 20, 2023 15:28 Inactive
@github-merge-queue github-merge-queue bot temporarily deployed to meta October 20, 2023 15:46 Inactive
@github-merge-queue github-merge-queue bot temporarily deployed to dev October 20, 2023 15:46 Inactive
@github-merge-queue github-merge-queue bot temporarily deployed to dev October 20, 2023 15:47 Inactive
@sambodeme sambodeme closed this Oct 20, 2023
@sambodeme sambodeme reopened this Oct 20, 2023
@sambodeme sambodeme temporarily deployed to production October 20, 2023 15:52 — with GitHub Actions Inactive
@sambodeme sambodeme temporarily deployed to staging October 20, 2023 15:52 — with GitHub Actions Inactive
@sambodeme sambodeme merged commit ad0d4ff into prod Oct 20, 2023
40 checks passed