Replies: 10 comments 19 replies
-
Thank you for starting this discussion @m7pr. I have previously raised this question, and I understand the rationale behind the current configuration, which aligns with the points you have already highlighted (i.e., avoiding development slowdown). It's possible that GHA was previously enabled on all PRs and a decision was made to configure it in this specific manner. However, I also understand the motivation behind this discussion.
I find this approach particularly intriguing and see it as a viable middle ground compared to enabling GHA for all pull requests (PRs). @cicdguy is this possible to do? If so, and the team is in agreement, I propose that we try this for one increment and then reconvene for lessons learned and/or further discussion.
I've done this before using a tool called act. It's definitely possible but requires a local Docker setup. I agree, probably not sustainable for the long run.
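For reference, a quick sketch of what running the workflows locally with act typically looks like (assuming a Docker daemon is running; the job id below is hypothetical):

```shell
# List the jobs defined under .github/workflows
act -l

# Simulate a pull_request event locally (pulls runner images via Docker)
act pull_request

# Run a single job by id, e.g. a hypothetical "check" job
act -j check
```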
-
This is doable if we extend the list of branches in the build specs, like these two lines in teal.slice/.github/workflows/check.yaml (lines 11 to 12 at c8e94af) and in teal.slice/.github/workflows/docs.yaml (lines 23 to 24 at c8e94af).
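As a sketch (the exact contents of those workflow files may differ), extending the trigger branches could look like this, with `filter_panel_refactor` added alongside `main`:

```yaml
on:
  push:
    branches:
      - main
      - filter_panel_refactor
  pull_request:
    branches:
      - main
      - filter_panel_refactor
```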
-
@donyunardi besides automated testing I'm also interested in the manual flow of testing. I'd like us to come up with a few bullet points on how to run and record manual tests for reviewers.
-
@m7pr I have a few suggestions for manual testing. Here are my thoughts. Test Plan (for the reviewer):
Test Environment Setup: (it would be very helpful to write some additional pieces on setting up the correct environment, especially for newcomers)
Test Scenarios:
Video Demonstrations: (I really liked the idea of a small demo presentation of the developer testing the feature or bug)
Expected Results:
Edge Case Coverage:
Shinytest2 Integration:
-
Added a PR to extend builds also for filter_panel_refactor: #318
-
There are two requirements before merging: checks and at least one approval. Well, three, the third being no change requests, though the latter can be waived. The point is that there is a responsibility placed on the reviewer. By custom, when we put up a PR we describe the intended behavior and the actions that we explicitly wish reviewers to test on their machines, but we also count on the reviewer to try to creatively break the app in other ways. I understand there have been attempts at introducing
-
Thanks everyone for the input! @pawelru @chlebowa what do you think about creating a task/GitHub issue where we outline manual test cases for the basic-teal app? So that we have at least one app, with a few statements like:
and many more. If we get an exhaustive list of 20-30 test cases, it is easier for testers and future users to ensure nothing is omitted. Also, this list can be reshaped later into automated tests with shinytest2! But to know what we want to test with shinytest2, it would be great to have a list of assumptions to be tested first. Lastly, this list should be reviewed during every PR to check that it is still up to date - I can imagine some assumptions get removed during breaking changes. It should be updated the same way regular backend tests are updated. Just wanted to get your thoughts on it.
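To illustrate, one such statement could later be reshaped into a shinytest2 test along these lines (a sketch only; the app directory and the input id are hypothetical placeholders, not actual names from the repo):

```r
library(shinytest2)

test_that("changing the dataset updates the filter panel", {
  # "apps/basic-teal" and the input id below are placeholders for this sketch
  app <- AppDriver$new(app_dir = "apps/basic-teal", name = "basic-teal")
  app$set_inputs(`teal-dataset` = "ADSL")
  app$expect_values()  # snapshot of inputs/outputs for regression checks
  app$stop()
})
```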
-
For the shinytest2 use cases I followed
@cicdguy do you remember if you ran shinytest2 in GHA builds before, and what was the issue that prevented us from setting it up?
-
@cicdguy since we've got you here - would it be possible to set up GitHub commands (like these on tidyverse/dplyr) so that if someone types e.g.
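A hedged sketch of what such a comment-triggered workflow could look like (the `/document` command and the step contents are assumptions for illustration, not what dplyr actually runs):

```yaml
on:
  issue_comment:
    types: [created]

jobs:
  document:
    # Only react to PR comments that start with the command
    if: ${{ github.event.issue.pull_request && startsWith(github.event.comment.body, '/document') }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: r-lib/actions/setup-r@v2
      - name: Regenerate docs
        run: Rscript -e 'roxygen2::roxygenise()'
```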
-
Hey all, to summarize the discussion
-
Motivation
For sure tests are painful and they slow down development; however, if we ignore them for too long (I am not saying we do) we could create problematic technical debt in the future. Tests force the author of a PR to execute the code at least once to make his/her PR reliable, but only if they are a part of the flow.
When do we test?
Currently we have a thorough setup of GitHub Actions that gets triggered on each commit to a PR that targets the `main` branch directly. Unfortunately, in some situations, like for the past couple of weeks, another branch becomes the new main branch (talking about the `filter_panel_refactor` branch) and automated GitHub Actions tests are not triggered on recent PRs. This can result in a PR being merged to `filter_panel_refactor` without any global GitHub Actions test, only to realize that tests are not passing on `filter_panel_refactor` (which has a PR to `main`) seconds after the merge.
I would like to work out a procedure in which we make sure the changes were tested at least the very same way as GitHub Actions would test them. Also, triggering tests on GitHub Actions enables team members to see immediately that some changes are required for a PR before it is merged. Lastly, it provides verification of the tests if the author of the PR did not provide information on which tests have been run.
I know running tests on feature branches might not always be ideal, as there are a lot of other dependencies that can also change, but in more stable moments, when other dependencies are stable, running tests on feature branches would be crucial for code stability.
I'd love to hear your opinion on the matter. Below are my recommendations on how to tackle the issue:
- extend automated GitHub Actions builds to `filter_panel_refactor` and any future branches that become the new main for some time
- enable GitHub Actions builds on `filter_panel_refactor` and any future branches that become the new main to be triggered manually by a click, before the author asks for the review

How do we test?
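The manual-trigger recommendation above could be achieved with a `workflow_dispatch` trigger, roughly like this (a sketch; the branch list is illustrative):

```yaml
on:
  workflow_dispatch:   # adds a "Run workflow" button in the Actions tab
  push:
    branches:
      - main
      - filter_panel_refactor
```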
Is GitHub Actions the only requirement for a PR to be merged? Should we also be doing manual testing? If yes, I recommend we extend the CONTRIBUTING guidelines in this repo with information on how to run a test app and what behavior should be verified before each PR. I would recommend creating quick videos and uploading them to a PR as a part of the testing flow. Would shinytest2 take some burden off our plate if we incorporate it in the future?
Happy to discuss and get your views on testing.