requestResponseMessage handlers optimization #651
base: main
Conversation
Instead of creating a new function per call.
```js
}
const resolver = messageResolvers.get(data.id);
if (resolver) {
  resolver(data);
```
I think this would benefit from a try/finally to ensure that you always delete the processed ID.
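A minimal sketch of that suggestion (the names `messageResolvers` and `onSharedMessage` are assumed for illustration, not Comlink's actual internals): the shared handler looks up the resolver by id and deletes the entry in a `finally`, so the id is removed from the map even if the resolver throws.

```javascript
// Hypothetical shared message handler; `messageResolvers` maps request
// ids to pending resolve functions (assumed names, for illustration).
const messageResolvers = new Map();

function onSharedMessage(data) {
  const resolver = messageResolvers.get(data.id);
  if (!resolver) return;
  try {
    resolver(data);
  } finally {
    // Runs even if resolver(data) throws, so the processed id never leaks.
    messageResolvers.delete(data.id);
  }
}
```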
@Tsury 👏 What is different between this PR and my PR? The only difference I notice is that you use a global map instance, whereas I scoped the map instance to the wrapped, created Proxy. One other approach I explored was making only one map per endpoint instance: foxglove@be58259. I wonder if your usage pattern is such that you are making lots of …

It would be good to understand the nuances, and also whether it seems "safe" to have one global map. One gotcha that comes to mind with the global map (and possibly even the local map) is holding onto resolve functions after an endpoint is torn down (potentially via `.terminate()` calls). The current behavior with `addEventListener` tears down all of the handlers when the endpoint itself is gone, since that's managed by the underlying system, but with manual listener management more care needs to be taken, or at least we need to think through whether it is possible to fail in such a way.
👋 I'm actually not entirely sure. Maybe the scoped usage just moves the optimization issue from the function creation to the map creation? EDIT: Maybe it's the fact that you create a different function for every created proxy, while I reuse the same one? That's probably one of the upsides of the global map: it doesn't require additional context in the form of an anonymous function.
So this exact commit is a working, different approach? I can test it once I get back home to see how it performs. EDIT: I tested foxglove@be58259; it's on par with my code performance-wise, so we have two approaches that fix this performance issue. I guess the local option should be the safer one to use.
Not at all, just 3
I agree. We need to ensure that the managed approach does not introduce any issues due to dangling/missing listeners. I'll try to think of nuanced scenarios to test, and I'd appreciate any help with this - the more people tackling this, the better.
Thank you @defunctzombie and @Tsury for looking into this. Our project has Comlink bottlenecks from event listeners being added and removed all the time. @surma @benjamind is this something that you might be interested in optimizing?
I have not forgotten about this; we at Foxglove are looking at benchmarking this and working up an implementation that will meet the desired criteria.
I have compared the three different versions (this PR, #649 and foxglove@be58259) with the currently released comlink version and I can confirm that performance is overall better. foxglove@be58259d8 performs best, mainly due to the simplification of how the random request id is generated; without that, it is basically the same as this PR. For all three versions, the speedup is especially noticeable when increasing the number of parallel requests (-> more pending listeners). If there is mostly just one request at a time, the performance improvement is only marginal. The table below shows the median time in ms for repeatedly (10000 times) calling a void function exposed by the worker:
Note that for #649 it makes a huge difference if the function proxy is cached as shown below:

```js
const obj = Comlink.wrap(worker);
const someFunctionProxy = obj.someFunction;
const numParallelRequest = 5;
for (let i = 0; i < 10_000; i++) {
  const promises = new Array(numParallelRequest).fill().map(() => someFunctionProxy());
  await Promise.all(promises);
}
```

If that is not done, a new proxy is created every time by calling … To move forward, I would suggest to …
Update: I noticed that all three PRs have memory issues in that they keep a reference to temporarily created workers, e.g. when doing the following in an onClick event callback:

```ts
const worker = new Worker("worker.js");
const obj = comlink.wrap<WorkerInterface>(worker);
obj.foo();
worker.terminate();
```

I will submit a PR which fixes this.
I initially opened it under #650 but then closed it after seeing @defunctzombie's #649 appearing 1 hour earlier (what are the odds?!). I then tested #649 and I fear that it does not improve performance, at least not against my tests.
My test (which is based on a real scenario I'm handling) sends approx 125k messages in a very short period. When I try it with either vanilla or #649, it starts by handling some 1-2k messages and then grinds to a crawl, practically never finishing. My PR processes everything in ~25 seconds.
When lowering to ~72k messages, vanilla takes ~55 seconds to complete, #649 takes ~70 seconds, mine takes ~15 seconds.
-- ORIGINAL PR MSG --
As per #647, I followed @josephrocca's advice.
Now there's one central function that assigns handlers from a map.
I wasn't 100% sure where to add/remove the event listeners, open for suggestions.
My tests showed a ~20% performance increase in a mass-messages scenario, YMMV.
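The idea can be sketched roughly as follows (a simplified illustration with assumed names; the actual PR diff differs): one listener registered per endpoint plus a map of pending resolvers replaces the per-call `addEventListener`/`removeEventListener` pair, and a monotonically increasing counter stands in for the random request id that the benchmarks above found costly.

```javascript
// Simplified sketch of the map-based request/response pattern
// (illustrative only; not the actual Comlink implementation).
function createRequestResponse(ep) {
  const pending = new Map();
  let nextId = 0;

  // One shared listener for the endpoint's lifetime, instead of
  // adding and removing a fresh listener on every call.
  ep.addEventListener("message", (ev) => {
    const resolve = pending.get(ev.data.id);
    if (resolve) {
      pending.delete(ev.data.id);
      resolve(ev.data);
    }
  });

  return (msg) =>
    new Promise((resolve) => {
      const id = nextId++; // cheap counter instead of a random UUID
      pending.set(id, resolve);
      ep.postMessage({ id, ...msg });
    });
}
```

Any object with `addEventListener` and `postMessage` (a `Worker`, a `MessagePort`, or an in-memory fake in tests) works as the endpoint here.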