
[SPIKE] Testing Scaffolded UI components #421

Open
Tracked by #401
c12i opened this issue Nov 28, 2024 · 3 comments
Comments

@c12i
Collaborator

c12i commented Nov 28, 2024

AC

  • Determine the best testing strategy for scaffolded UI: do we test the components themselves, or the UI interactions?
  • Explore whether we might need to use libraries like Cypress
@c12i c12i changed the title Implement snapshot testing for generated UI components [SPIKE] Testing Scaffolded UI components Nov 28, 2024
@c12i c12i added this to Holochain Nov 28, 2024
@github-project-automation github-project-automation bot moved this to Backlog in Holochain Nov 28, 2024
@c12i c12i moved this from Backlog to Ready in Holochain Nov 28, 2024
@c12i c12i removed the status in Holochain Nov 28, 2024
@c12i c12i moved this to Ready in Holochain Nov 28, 2024
@c12i c12i moved this from Ready to Ready for refinement in Holochain Nov 28, 2024
@c12i
Collaborator Author

c12i commented Dec 11, 2024

Hey @ThetaSinner and @mattyg, some follow-up on this discussion.

Here's my rationale for suggesting snapshot tests in #401.

The scaffolding tool generates UI code primarily to accelerate prototyping, and from what I have observed, most hApp devs usually end up deleting or modifying this code when building production hApps. That's why there have been initiatives to reduce the opinionatedness of the tool with regard to the dependencies it initially comes with, and to go for a much leaner, more minimalistic approach that lets you easily remove unwanted UI / styles, or even go a step further and skip UI generation altogether with the --no-ui flag.

I think ensuring that the UI code gets generated correctly is a great idea, and it will save us a lot of time on manual testing. But if we specifically choose to include this test code as part of what the hApp dev ends up getting, it potentially becomes more code for them to delete. We had a lengthy discussion about this in the scaffolding MM channel, and I believe we came to a consensus that we should be a little less opinionated and avoid making too many UI decisions on behalf of the hApp dev.

The goal of the snapshot tests idea was to find a pragmatic balance: one that lets us validate the core thing, that scaffolding is generating the right code, and, coupled with the CLI tests proposed in #401, that the tool works correctly, all without imposing any overhead on hApp devs (actually two things now 😅). I believe the most important code the scaffolding tool generates, the zome functions, already gets well tested with vitest and tryorama, and the hApp dev gets presented with these tests, which is great already: it offers them the opportunity to see tryorama in action and is a great way for them to learn how to write tests of their own or extend the existing ones.

The snapshot tests would then be more of an internal validation and quality assurance measure for scaffolding tool maintenance, as these snapshots would not end up in the code generated for the hApp dev.
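To make that concrete, I'm picturing something like the following living in the scaffolding repo's own test suite (a sketch only; the fixture path is hypothetical, and it assumes a prior setup step has already scaffolded a fixture hApp into a temp directory):

  import { readFileSync } from 'node:fs'
  import { it, expect } from 'vitest'

  // Sketch: the snapshot lives in the scaffolding repo,
  // never in the project generated for the hApp dev.
  it('generates the expected UI entry point', () => {
    const appVue = readFileSync('tmp/fixture-app/ui/src/App.vue', 'utf8')
    expect(appVue).toMatchSnapshot()
  })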

There is still a case for testing that the UI components work, though; I am not entirely dismissing the idea. We could find a workaround that ensures the tests don't have to be generated along with the boilerplate UI code. But I think this would be quite unnecessary at the end of the day (at least for now). The generated UI code serves primarily as a prototyping starting point, and many developers customize or replace it for production use, as I mentioned before. Implementing extensive component testing would be disproportionate to its intended temporary nature and rapid-prototyping purpose.

I am curious to hear your thoughts on this.

@mattyg
Member

mattyg commented Dec 11, 2024

The snapshot tests would then be more of an internal validation and quality assurance measure for scaffolding tool maintenance, as these snapshots would not end up in the code generated for the hApp dev.

I think that snapshot tests just validate that the generated UI code has not changed; they do not validate that the generated UI code is functioning properly.

If the UI code does change as part of a PR (which will happen sometimes and is fine), then that PR will also include a change to the snapshot assertion. Then when the snapshot tests are run in CI, they will pass, even if the change to the UI code breaks its functionality.

I think the ideal goal here is to verify in CI that the generated UI code is functioning properly. The original issue was that a change accidentally caused the UI directory not to be generated at all, right? If we want a quick-and-dirty CI check to avoid this, then maybe we just add a step to CI that confirms the UI directory exists, as sketched below? This could be a short-term holdover to avoid that particular issue arising again.
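Something along these lines would do it (a minimal sketch; the fixture path is an assumption about where CI scaffolds the test app):

  import { existsSync, readdirSync } from 'node:fs'

  // Fail the CI job fast if the UI directory is missing or empty.
  const uiDir = 'tmp/fixture-app/ui' // wherever CI scaffolds the test app
  if (!existsSync(uiDir) || readdirSync(uiDir).length === 0) {
    console.error(`Expected a scaffolded UI at ${uiDir}, found none`)
    process.exit(1)
  }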

Or, if we don't want to prioritize generating e2e/UI tests, another simple holdover could be to create a fixed e2e/UI test suite that runs against the scaffolded app that gets created in CI?
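With Cypress, say (mentioned in the spike AC), that fixed suite could be as small as this (the baseUrl setup and the welcome text are assumptions about the scaffolded app, not things I've checked):

  // e2e/scaffolded-app.cy.js — a fixed spec kept in the scaffolding
  // repo and pointed at the app scaffolded during the CI run.
  describe('scaffolded app', () => {
    it('boots and renders the home screen', () => {
      cy.visit('/') // baseUrl set to the CI dev server in cypress.config
      cy.contains('Welcome') // assumed text on the scaffolded home page
    })
  })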

There is still a case for testing that the UI components work, though; I am not entirely dismissing the idea. We could find a workaround that ensures the tests don't have to be generated along with the boilerplate UI code. But I think this would be quite unnecessary at the end of the day (at least for now). The generated UI code serves primarily as a prototyping starting point, and many developers customize or replace it for production use, as I mentioned before. Implementing extensive component testing would be disproportionate to its intended temporary nature and rapid-prototyping purpose.

I would personally love having e2e/UI tests (and particularly the test harness setup) as part of the generated code. I do end up replacing parts of the UI, but setting up the test harness can be a pain, and having examples already in the repo to copy when writing your own tests is really nice. Though I can see that it may not be worth the cost-benefit right now.

@ThetaSinner
Member

ThetaSinner commented Dec 20, 2024

On being opinionated: I generally agree, and if Material UI is included then removing it would be one of the first things I'd do. Yes, I generally work with Vue and I do make some significant changes to the UI code, because I prefer the Composition API to the Options API and I add Pinia, but generally I'm very happy with what's there. I don't expect the scaffolded UI to exactly match what I want, but I think what's there is a sensible default to start from.

I agree with @mattyg that if there were tests provided that showed how to stub the @holochain/client and write some basic tests, then I'd be happy with that, and even if I adapted them a bit I would almost certainly keep them. Literally as basic as this:

  import { it, expect } from 'vitest'
  import { mount, flushPromises } from '@vue/test-utils'
  import App from './App.vue'

  // `client` comes from the app-specific setup/teardown mentioned below
  it('render', async () => {
    const wrapper = mount(App, {
      props: {
        client
      }
    })

    await flushPromises()

    expect(wrapper.html()).toContain('Welcome, hApp user')
  })

Which is a literal smoke test from a Vue app of mine, minus a couple of lines of code that are specific to my app. It has some setup and teardown steps that mock out my HTTP client with canned responses, and I'm good to go. That is the essence of all the tests in that project, and it was a pain to get started with. I'd probably stick with it if I were given a working testing setup. If one day we had a proper stub for the @holochain/client, those tests would become an even bigger value-add.
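For what it's worth, the stub itself doesn't need a real conductor. A hand-rolled stand-in like this would be enough for that kind of smoke test (assuming the generated components only ever call client.callZome, which is an assumption, not something I've checked):

  import { vi } from 'vitest'

  // Stand-in for an @holochain/client connection: every zome call
  // resolves to a canned response instead of hitting a conductor.
  // Assumes the component under test only calls client.callZome().
  const client = {
    callZome: vi.fn().mockResolvedValue([])
  }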

I understand the work involved is higher, even to get a basic version of this working across several UI frameworks, but I don't think it needs detailed tests against every component. Even if it were just a case of having some code that can do the edit to include the "create" and "collection" UI elements for a single entry type, and then writing one test against the main app with those rendered, I think that would be a pretty solid smoke test that the generated UI is functional.

I've said already that I have a really strong reaction to snapshot tests because I've seen how badly wrong they can go. What I'm concerned about is something like a change to how imports are ordered in TypeScript code landing at the same time as a regression in scaffolding that makes a small number of files not get generated. Then the author of the change and the reviewer have hundreds of snapshot changes to look through, and it'd be all too easy to miss one switching to "empty file". I've seen it happen, and I've seen an entire project get scrapped because of it. My opinion is that snapshots are acceptable if they are REALLY specific and can never produce a diff that's hard or impossible to review. At that scale they can even be very useful, because sometimes asserting every little detail in structured data isn't practical. To me, diffing a file tree against a snapshot falls on the "hard to review" side of the line. I'm not saying anything bad will necessarily happen to scaffolding because of these tests, but I do think regressions would slip through more easily with full snapshot tests than with UI tests.

That doesn't mean snapshots shouldn't be used at all. For example, before you run something like cd ui && npm t to check that the UI tests pass, you'd really need to check that the package.json exists and has the right content. Using a snapshot test against just something like the package.json, to make sure the test command isn't some empty command that will trivially pass when the UI tests are run, would be entirely valid. That makes good use of snapshot testing, because the diffs to a <100 line JSON file should be entirely reviewable. The actual checking that the UI code is functional can then be left to unit tests.
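Sketching that idea (the fixture path and script layout are assumptions):

  import { readFileSync } from 'node:fs'
  import { it, expect } from 'vitest'

  it('scaffolded ui/package.json wires up a real test command', () => {
    const pkg = JSON.parse(readFileSync('tmp/fixture-app/ui/package.json', 'utf8'))
    // Guard against a missing or no-op test script that passes vacuously.
    expect(pkg.scripts?.test).toBeTruthy()
    // A snapshot of a <100 line JSON file keeps the diff reviewable.
    expect(pkg).toMatchSnapshot()
  })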

I'm totally in support of CLI tests. I think it's a great idea to automate the CLI in the way users use it and then check the output somehow. It's really just the testing methodology we apply to checking that output that I want to discuss.

Anyhow, you asked for my thoughts, here they are :)
