I am writing 1,000,000 chunks at a rate of 200,000 per hour (a mantaray with 600,000 forks). This use case produces a high disk write volume, which may have bad side effects such as degraded performance or reduced SSD lifetime. Based on the second screenshot below, the culprit seems to be the stamperstore writes. Network bandwidth is negligible compared to the disk write bandwidth.
Expected behavior
I don't know exactly what happens under the hood. My naive assumption is that for each stamping operation, the corresponding ~2MB .ldb file gets overwritten, but I may be completely off here.
For write-heavy operations like this, in-memory caching with periodic persists to disk would likely improve the situation. (stamperstore ≤ 500MB on my machine™)
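The suggested mitigation could look roughly like the generic sketch below (plain JavaScript for illustration, not Bee's actual Go code; `WriteBackCache`, `persistFn`, and both thresholds are hypothetical names and values): buffer writes in memory and persist them as one batch, either when the buffer fills up or on a timer, instead of hitting the disk on every stamp.

```javascript
// Generic write-back cache sketch (illustrative only, not Bee's implementation):
// stage key/value writes in memory, then persist them in a single batch when
// the buffer exceeds a threshold or a periodic timer fires. `persistFn` stands
// in for whatever actually writes to disk (e.g. a LevelDB write batch).
class WriteBackCache {
  constructor(persistFn, { maxEntries = 10000, flushMs = 5000 } = {}) {
    this.persistFn = persistFn;
    this.maxEntries = maxEntries;
    this.buffer = new Map();
    this.timer = setInterval(() => this.flush(), flushMs);
    if (this.timer.unref) this.timer.unref(); // don't keep the process alive
  }

  // Stage a write; flush when the buffer reaches the size threshold.
  put(key, value) {
    this.buffer.set(key, value);
    if (this.buffer.size >= this.maxEntries) this.flush();
  }

  // Persist all buffered entries as one batch and clear the buffer.
  flush() {
    if (this.buffer.size === 0) return;
    const batch = [...this.buffer.entries()];
    this.buffer.clear();
    this.persistFn(batch);
  }

  // Stop the timer and persist whatever is still buffered.
  close() {
    clearInterval(this.timer);
    this.flush();
  }
}
```

The trade-off is durability: anything still in the buffer when the process crashes is lost, so the flush interval/threshold would have to be chosen so that at most a tolerable amount of stamp state can be replayed or re-stamped.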
Actual behavior
Please see the screenshots above. Disk write frequently surpasses 100 MB/s.
Steps to reproduce
If needed, I can create a small JS project that creates and writes many chunks in an endless loop.
Possible solution
Please see Expected behavior above.
Can you describe what the upload sizes are, and which API endpoint you are using?
I was uploading Wikipedia, and at this point I was writing the mantaray forks: many small /bytes endpoint uploads. Probably minimal time was spent splitting/chunking, and most time was spent stamping, which turned out to have an I/O bottleneck.
Sorry, I know this isn't much, I will eventually get back to larger data uploads where I can investigate this better.
Context
Windows 10 AMD64, Bee 2.2.0