The pruning point proof uses block levels in order to allow a polylogarithmic history size (aka Mining in Logarithmic Space, MLS).
In order to support variable difficulty, we used `max_target` rather than `target` (see the following research issue for extended discussion). However, this means that all blocks mined under the current difficulty target belong to multiple levels (since `target << max_target`), which creates a large overhead in data management and computation.
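For concreteness, here is a minimal sketch of how a block's level follows from its hash under this scheme. The byte-array hash representation and the `block_level` helper are illustrative assumptions, not kaspad's actual code:

```rust
// Simplified model: a block's level is the largest `level` such that
// hash <= max_target / 2^level. Counting leading zero bits of the hash
// makes it clear why a block mined at target << max_target automatically
// belongs to many levels at once.

fn block_level(hash: [u8; 32], max_target_bits: u32) -> u32 {
    // Count leading zero bits of the 256-bit big-endian hash.
    let mut leading_zeros = 0u32;
    for byte in hash {
        if byte == 0 {
            leading_zeros += 8;
        } else {
            leading_zeros += byte.leading_zeros();
            break;
        }
    }
    // hash <= 2^max_target_bits / 2^level
    //   <=>  level <= leading_zeros - (256 - max_target_bits)
    leading_zeros.saturating_sub(256 - max_target_bits)
}

fn main() {
    let mut hash = [0u8; 32];
    hash[5] = 0x80; // bytes 0..5 are zero => exactly 40 leading zero bits
    println!("block level = {}", block_level(hash, 225)); // 40 - (256 - 225) = 9
}
```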
In the recent network HF we removed 31 redundant levels by reducing `max_target` from 2^256 to 2^225. Still, the number of redundant identical levels is now at ~8 = log₂(256) (since the network hashrate grew by a factor of ~256 since the HF), and it continues to grow logarithmically with the global hashrate.
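The arithmetic behind that estimate, as a runnable sanity check (the constants are taken from the text above; the target-halves-per-hashrate-doubling model is a simplification):

```rust
// Back-of-the-envelope: every block mined at the current target shares
// at least log2(hashrate growth since the HF) identical bottom levels.

fn main() {
    let max_target_bits: u32 = 225; // post-HF max_target = 2^225
    let hashrate_growth_factor: f64 = 256.0; // ~growth since the HF

    let redundant_levels = hashrate_growth_factor.log2();
    println!(
        "max_target = 2^{}, redundant levels ≈ {}",
        max_target_bits, redundant_levels
    ); // ≈ 8
}
```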
Possible solutions for discussion:
1. Divide into block levels by looking at the number of zeros on the other side of the hash (the side not affected by the difficulty target). It is not clear how this should be combined with `target`, nor how it affects the security of the pruning proof when comparing two competing syncers.
2. Optimize at the engineering level, for instance by (i) identifying the identical levels and avoiding duplication (see the sketch below), or (ii) finding ways to use services such as the reachability manager for all levels at once.
Edit: 1. is not a viable solution from the theory perspective; super blocks must represent absolute difficulty. As for 2., we indeed implemented the reachability optimization (and extended it in Rust); however, the header structure itself can be easily optimized and compressed. Additionally, the relation stores can be optimized as well.
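As a discussion aid for 2(i), here is a hedged sketch of what deduplicating identical levels inside a header could look like. All type and field names (`CompressedHeader`, `level_starts`, `parent_lists`) are hypothetical and do not reflect kaspad's actual stores:

```rust
// Idea: instead of storing a parent list per level up to the block's level,
// store each *distinct* parent list once and let the run of identical
// levels share it.

use std::sync::Arc;

type Hash = [u8; 32];

struct CompressedHeader {
    block_level: u32,
    // `level_starts[i]` is the first level that uses `parent_lists[i]`,
    // so a run of identical levels collapses to a single shared entry.
    level_starts: Vec<u32>,
    parent_lists: Vec<Arc<Vec<Hash>>>,
}

impl CompressedHeader {
    /// Returns the parents at `level`, resolving shared runs.
    fn parents_at_level(&self, level: u32) -> &[Hash] {
        // Find the last run starting at or before `level`.
        let idx = match self.level_starts.binary_search(&level) {
            Ok(i) => i,
            Err(i) => i - 1,
        };
        self.parent_lists[idx].as_slice()
    }
}

fn main() {
    let header = CompressedHeader {
        block_level: 10,
        level_starts: vec![0, 9], // levels 0..=8 share one parent list
        parent_lists: vec![Arc::new(vec![[0xab; 32]]), Arc::new(vec![[0xcd; 32]])],
    };
    assert_eq!(header.parents_at_level(5), header.parents_at_level(0));
    assert_eq!(header.parents_at_level(9)[0], [0xcd; 32]);
    println!(
        "levels 0..={} stored as {} distinct lists",
        header.block_level,
        header.parent_lists.len()
    );
}
```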