Reduce claimtrie temp allocs. #51
base: master
Conversation
Return []Change and a "closer" func to return them to the Pool. Update tests for LoadChanges' new return values.
…tring entails lots of temporary allocations.
…mPool when Node refcnt drops to zero.
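The commit messages above describe handing out pooled change slices together with a "closer" func that returns them to the pool. A minimal sketch of that pattern with `sync.Pool` is below; the `Change` type, field names, and `loadChanges` helper are hypothetical stand-ins, not the actual lbcd claimtrie code.

```go
package main

import (
	"fmt"
	"sync"
)

// Change is a hypothetical stand-in for the claimtrie change record;
// the real type lives in lbcd's claimtrie package.
type Change struct {
	Name   string
	Height int
}

// changesPool recycles change slices so each load does not allocate a
// fresh backing array, reducing temporary allocations and GC pressure.
var changesPool = sync.Pool{
	New: func() interface{} { return make([]Change, 0, 64) },
}

// loadChanges fills a pooled slice and returns it together with a
// "closer" func that hands the backing array back to the pool.
// Callers must not use the slice after invoking the closer.
func loadChanges(n int) ([]Change, func()) {
	changes := changesPool.Get().([]Change)[:0]
	for i := 0; i < n; i++ {
		changes = append(changes, Change{Name: fmt.Sprintf("claim-%d", i), Height: i})
	}
	return changes, func() { changesPool.Put(changes[:0]) }
}

func main() {
	changes, closer := loadChanges(3)
	defer closer()
	fmt.Println(len(changes)) // 3
}
```

The closer-func shape keeps pooling an implementation detail: callers only promise not to touch the slice after calling the closer, and the pool itself never leaks into the API.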
Pull Request Test Coverage Report for Build 2450737946
💛 - Coveralls
The tuning doesn't work out; both sync time and startup time got worse.

Sync time
Before (pr46):
After:

(Re)startup time
Before (pr46):
After:
Wow, 10 minutes for the claimtrie build. :-( Something's not right. It's so bad that it makes me suspect there was extra load on the machine at some point. Can you rerun it with profile collection? Just the restart case at first, to see whether 10 min is reproducible: ./lbcd --memprofile=./after.mem --cpuprofile=./after.cpu

When I run it on an 8-core arm64 Mac mini, the claimtrie build continues to take about 3 min (before) and 2.2 min (after).

Before:
After:
Hmm, a re-run does perform closer to what it was before, but still no significant changes. Attaching the profiled data. Does it show improvements on the sync time in your environment?
Yes, it's showing a claimtrie build time improvement (2 min after, 3 min before), but not overall sync time, AFAICT. It takes me many days to sync from a remote node.

Here is my run: I tried to get a comparable one with 317s wall-clock runtime. The CPU profiles are very different. Yours shows a lot more CPU time consumed in GC, makeNameHashNext func1 and func2. The descendants of sstable Reader.readBlock() are very different. Some stuff like Sha256 does not even show up in mine under the default pprof options.

My hardware specs (16GB RAM, 2TB SSD variant):

My guess is you've reached some hardware limitation (RAM throughput/latency?), so demonstrating improvement becomes impossible, just as I can't see any improvement in overall sync time because fetching blocks from a remote node is the limiting factor. I assume you are syncing blocks over LAN. What is your test hardware? SSD? CPUs? RAM type? What about isolation from other load? Is this a virtual machine?
Regarding the sha256 performance discrepancy: the performance you get depends on the availability of special CPU instructions. To isolate sha256's share of CPU time in a profile: go tool pprof --svg --show_from sha256 XXX.cpu
Focusing on GC scanobject: go tool pprof --svg --show_from scanobject XXX.cpu

I don't see anything obviously architecture-specific, but yours is registering a lot more time spent. The mem allocs report is ~28GB for yours and ~26GB for mine, so GC should be triggering about the same number of times.
Thanks for your contributions @moodyjon! Can you send me an email at [email protected]? You're welcome to stay anonymous if that's your preference.
Demoting to draft status, as this is a sensitive code area. There is an outstanding bug being investigated at this time:
Unbundled from PR #43
See that for usage/testing.
Based on work by @BrannonKing in mem_pressure_try_2 and reduce_memory_pressure.
The commits 8482890 and 787098e are newly added to claw back costs from node cache and node.Clone().