I'm loading BigWig data for the whole genome, i.e., calling getFeatures for each chromosome. This results in many fetch requests, as expected.
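For reference, this is roughly how I'm issuing the calls; the URL and chromosome list are placeholders, and each per-chromosome getFeatures call ends up going through its own BlockView:

```ts
import { BigWig } from '@gmod/bbi'

// Illustrative only: the URL and chromosome list below are placeholders.
async function loadWholeGenome(
  url: string,
  chromosomes: { name: string; size: number }[],
) {
  const bw = new BigWig({ url })
  // One getFeatures call per chromosome -> many concurrent range requests,
  // several of which target identical byte ranges (header, zoom index, etc.).
  return Promise.all(
    chromosomes.map(({ name, size }) => bw.getFeatures(name, 0, size)),
  )
}
```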
However, the number of requests seems excessive (76), and many of them hit exactly the same byte range. For example, in the linked case (https://genomespy.app/docs/grammar/data/lazy/#example_1), there are 25 requests hitting the same 49 byte range and another 25 requests hitting the same 8197 byte range. Because web browsers seem to be very bad at caching partial content, this results in quite a bit of latency.
There appears to be a caching mechanism in BlockView (bbi-js/src/block-view.ts, line 158 at d239d40), but a new BlockView (and a new cache) is created for each getFeatures call.
Instead of having a new cache for each BlockView, could there be a single shared cache in the BBI class, which could be used by all BlockViews? In my example case, the number of requests would drop from 76 to 28.
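A minimal sketch of what I have in mind, just to illustrate the idea; the names (SharedRangeCache, fetchRange) are mine, not the actual bbi-js internals:

```ts
type FetchRange = (offset: number, length: number) => Promise<Uint8Array>

// A single byte-range cache owned by the BBI instance and handed to every
// BlockView it creates, instead of each view building its own.
class SharedRangeCache {
  private cache = new Map<string, Promise<Uint8Array>>()

  constructor(private fetchRange: FetchRange) {}

  // Caching the promise (rather than the resolved bytes) also deduplicates
  // concurrent requests for the same range, which is the main win here:
  // 25 views asking for the same 49-byte header range trigger one request.
  get(offset: number, length: number): Promise<Uint8Array> {
    const key = `${offset}:${length}`
    let hit = this.cache.get(key)
    if (!hit) {
      hit = this.fetchRange(offset, length)
      this.cache.set(key, hit)
    }
    return hit
  }
}
```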
I could make a PR at some point if this change is feasible and would not cause any adverse effects.
I would definitely be open to improvements here. We use https://github.com/rbuels/http-range-fetcher, which smooths over some issues like this: it is a special fetch implementation that tries to combine multiple range requests and cache the results (it was especially useful for cram-js, IIRC). But I would be interested in making the default experience better too.
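If I remember the API correctly, usage looks roughly like the sketch below; treat the exact option names and return shape as assumptions rather than documented API, and the URL is just a placeholder:

```ts
import { HttpRangeFetcher } from 'http-range-fetcher'

async function demo() {
  const fetcher = new HttpRangeFetcher({})

  // Range requests issued close together for nearby or identical ranges can be
  // combined into fewer HTTP requests and served from the fetcher's chunk cache.
  const result = await fetcher.getRange(
    'https://example.com/data.bw', // placeholder URL
    0, // byte offset
    8197, // length
  )
  console.log(result.buffer.length)
}
```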