The problem

We don't have a strategy for determining whether the performance of Immutable is acceptable. On the original repo, Lee did the manual work of deciding whether performance was acceptable by pushing back on PRs that he knew to be slow.
One option is the "don't regress" strategy: never accept a change that makes an existing benchmark slower. The advantage of this approach is that it's simple, but I don't think it's a good strategy on its own, because the current performance is a somewhat arbitrary baseline and the goals behind it were never clearly defined.
The dependability of the current benchmarking code is also uncertain. The benchmarks run in a VM context, and when the exact same code is benchmarked twice, it's not uncommon to see differences of more than 10% between runs.
We would also like to run the benchmarks automatically in CI (see #208), so that this work doesn't have to be manual and problematic code is caught before a PR gets merged.

(See #210, #212, and #213 for more context.)

Brainstorming
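As a rough, non-authoritative sketch of what a "don't regress beyond a tolerance" gate could look like once CI emits machine-readable results: the file names (`baseline.json`, `results.json`), the ops/sec JSON format, and the 20% tolerance (chosen to sit above the ~10% run-to-run noise described above) are all assumptions, not part of the existing benchmark setup.

```ts
// compare-bench.ts — hypothetical helper, not part of the repo.
// Compares a committed baseline against a fresh benchmark run (both plain
// JSON maps of benchmark name -> ops/sec) and exits non-zero if any
// benchmark appears to have slowed down by more than TOLERANCE.
import { readFileSync } from 'fs';

const TOLERANCE = 0.2; // assumed: allow up to 20% apparent slowdown before failing

type Results = Record<string, number>; // benchmark name -> ops/sec (higher is better)

const baseline: Results = JSON.parse(readFileSync('baseline.json', 'utf8'));
const current: Results = JSON.parse(readFileSync('results.json', 'utf8'));

let failed = false;
for (const [name, before] of Object.entries(baseline)) {
  const after = current[name];
  if (after === undefined) continue; // benchmark was removed or renamed
  const change = (after - before) / before;
  if (change < -TOLERANCE) {
    console.error(`${name}: ${(change * 100).toFixed(1)}% vs. baseline`);
    failed = true;
  }
}
process.exit(failed ? 1 : 0);
```

A gate like this only enforces "don't regress"; it doesn't answer what the absolute performance goals should be, and the tolerance has to stay above whatever noise the harness actually produces.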
Methuselah96 changed the title from "Determine performance goals and a corresponding strategy to enforce them" to "Determine performance goals and a corresponding strategy to maintain them" on Dec 19, 2020.
One way to do it is to take a free API response (like the Pokémon API or something similar), which gives us a realistic object; from that we can test at least the fromJS and toJS use cases.
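To make that concrete, here is a minimal sketch of such a benchmark, assuming Node 18+ (for the global fetch), the public PokéAPI endpoint, and an arbitrary iteration count; none of this is an agreed-on harness, and a real version would need warm-up runs and repeated samples given the run-to-run noise mentioned above.

```ts
// bench-fromjs.ts — illustrative sketch only, not an agreed-on benchmark.
// Fetches one real-world JSON payload and times fromJS/toJS round trips.
import { fromJS } from 'immutable';

async function main() {
  // Any sufficiently nested public JSON payload would work here.
  const res = await fetch('https://pokeapi.co/api/v2/pokemon/ditto');
  const payload = await res.json();

  const iterations = 1000; // assumed iteration count

  const t0 = performance.now();
  for (let i = 0; i < iterations; i++) {
    fromJS(payload);
  }
  const t1 = performance.now();

  const converted = fromJS(payload);
  const t2 = performance.now();
  for (let i = 0; i < iterations; i++) {
    converted.toJS();
  }
  const t3 = performance.now();

  console.log(`fromJS: ${((t1 - t0) / iterations).toFixed(3)} ms/op`);
  console.log(`toJS:   ${((t3 - t2) / iterations).toFixed(3)} ms/op`);
}

main().catch(console.error);
```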