[Relay Mining] Relay Mining math helpers (#549)
Baseline for the math behind relay mining. Adding the following two files:

```bash
x/tokenomics/keeper/update_relay_mining_difficulty.go
x/tokenomics/keeper/update_relay_mining_difficulty_test.go
```
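
For context, here is a minimal, hypothetical sketch (not part of this commit) of how a caller, e.g. the module's session-settlement logic (an assumption, not shown in this diff), might invoke the new helper with a serviceId->numRelays map:

```go
package tokenomics_example

import (
	"context"

	tokenomicskeeper "github.com/pokt-network/poktroll/x/tokenomics/keeper"
)

// updateDifficultyForSession is a hypothetical caller showing how the relays
// counted per service could be fed into the new keeper method.
func updateDifficultyForSession(ctx context.Context, k tokenomicskeeper.Keeper) error {
	relaysPerServiceMap := map[string]uint64{
		"svc1": 1_000_000, // on-chain relays observed for service "svc1" (example value)
		"svc2": 50_000,    // on-chain relays observed for service "svc2" (example value)
	}
	// Updates (or initializes) the relay mining difficulty for each service.
	return k.UpdateRelayMiningDifficulty(ctx, relaysPerServiceMap)
}
```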

Also adding experimental `.prompts` files based on:
- https://docs.continue.dev/walkthroughs/prompt-files
- https://marketplace.visualstudio.com/items?itemName=Continue.continue


---

Co-authored-by: Redouane Lakrache <[email protected]>
Olshansk and red-0ne authored May 28, 2024
1 parent 9d089e7 commit 6f2fe5b
Showing 6 changed files with 502 additions and 26 deletions.
15 changes: 15 additions & 0 deletions .prompts/test_suggestions.prompt
@@ -0,0 +1,15 @@
temperature: 0.5
maxTokens: 4096
---
<system>
You are a principal software engineer.
</system>

{{{ input }}}

You have been provided two files:
1. A source code file
2. A unit test file

Please provide a list of unit test names/descriptions that you believe
are still missing. Do not actually implement them.
17 changes: 17 additions & 0 deletions .prompts/unit_tests.prompt
@@ -0,0 +1,17 @@
temperature: 0.5
maxTokens: 4096
---
<system>
You are a principal software engineer.
</system>

{{{ input }}}

Write unit tests for the selected code, following each of these instructions:
- Follow the best and latest practices of Go
- Where appropriate use `github.com/stretchr/testify/require`
- If necessary, create a tests struct to iterate over: `tests := []struct {`
- Include at least 2 edge cases and 5 core cases
- The tests should be complete and sophisticated
- Give the tests just as chat output, don't edit any file
- Just give the code, no need for an explanation
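For illustration only (hypothetical names, not part of the commit), the table-driven, testify-based pattern this prompt asks for typically looks like:

```go
package example_test

import (
	"testing"

	"github.com/stretchr/testify/require"
)

// add is a stand-in for whatever function is under test.
func add(a, b int) int { return a + b }

func TestAdd(t *testing.T) {
	tests := []struct {
		desc     string
		a, b     int
		expected int
	}{
		// Core cases
		{desc: "both positive", a: 1, b: 2, expected: 3},
		{desc: "both negative", a: -1, b: -2, expected: -3},
		// Edge case
		{desc: "zero operands", a: 0, b: 0, expected: 0},
	}

	for _, test := range tests {
		t.Run(test.desc, func(t *testing.T) {
			require.Equal(t, test.expected, add(test.a, test.b))
		})
	}
}
```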
18 changes: 10 additions & 8 deletions e2e/tests/update_params.feature
@@ -1,14 +1,16 @@
Feature: Params Namespace
# TODO_DOCUMENT(@Olshansk): Document all of the on-chain governance parameters.

- Scenario: An unauthorized user cannot update a module params
-   Given the user has the pocketd binary installed
-   And all "tokenomics" module params are set to their default values
-   And an authz grant from the "gov" "module" account to the "pnf" "user" account for the "/poktroll.tokenomics.MsgUpdateParams" message exists
-   When the "unauthorized" account sends an authz exec message to update all "tokenomics" module params
-     | name | value | type |
-     | compute_units_to_tokens_multiplier | 666 | int64 |
-   Then all "tokenomics" module params should be set to their default values
+ Background:
+
+ Scenario: An unauthorized user cannot update a module params
+   Given the user has the pocketd binary installed
+   And all "tokenomics" module params are set to their default values
+   And an authz grant from the "gov" "module" account to the "pnf" "user" account for the "/poktroll.tokenomics.MsgUpdateParams" message exists
+   When the "unauthorized" account sends an authz exec message to update all "tokenomics" module params
+     | name | value | type |
+     | compute_units_to_tokens_multiplier | 666 | int64 |
+   Then all "tokenomics" module params should be set to their default values

# NB: If you are reading this and the tokenomics module has parameters
# that are not being updated in this test, please update the test.
153 changes: 153 additions & 0 deletions x/tokenomics/keeper/update_relay_mining_difficulty.go
@@ -0,0 +1,153 @@
package keeper

import (
"context"
"fmt"
"math"

sdk "github.com/cosmos/cosmos-sdk/types"

proofkeeper "github.com/pokt-network/poktroll/x/proof/keeper"
prooftypes "github.com/pokt-network/poktroll/x/proof/types"
"github.com/pokt-network/poktroll/x/tokenomics/types"
)

const (
// Exponential moving average (ema) smoothing factor, commonly known as alpha.
// Usually, alpha = 2 / (N+1), where N is the number of periods.
// Large alpha -> more weight on recent data; less smoothing and fast response.
// Small alpha -> more weight on past data; more smoothing and slow response.
emaSmoothingFactor = float64(0.1)

// The target number of relays we want the network to mine for a specific
// service across all applications & suppliers per session.
// This number determines the total number of leaves to be created across
// the off-chain SMTs, across all suppliers, for each service.
// It indirectly drives the off-chain resource requirements of the network
// in addition to playing a critical role in Relay Mining.
// TODO_UPNEXT(#542, @Olshansk): Make this a governance parameter.
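// NB: The float literal 10e4 equals 100,000.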
TargetNumRelays = uint64(10e4)
)

// UpdateRelayMiningDifficulty updates the on-chain relay mining difficulty
// based on the number of on-chain relays for each service, given a map of serviceId->numRelays.
func (k Keeper) UpdateRelayMiningDifficulty(
ctx context.Context,
relaysPerServiceMap map[string]uint64,
) error {
logger := k.Logger().With("method", "UpdateRelayMiningDifficulty")
sdkCtx := sdk.UnwrapSDKContext(ctx)

for serviceId, numRelays := range relaysPerServiceMap {
prevDifficulty, found := k.GetRelayMiningDifficulty(ctx, serviceId)
if !found {
logger.Warn(types.ErrTokenomicsMissingRelayMiningDifficulty.Wrapf("No previous relay mining difficulty found for service %s. Initializing with default difficulty %v", serviceId, prevDifficulty.TargetHash).Error())
// If a previous difficulty for the service is not found, we initialize
// it with a default.
prevDifficulty = types.RelayMiningDifficulty{
ServiceId: serviceId,
BlockHeight: sdkCtx.BlockHeight(),
NumRelaysEma: numRelays,
TargetHash: defaultDifficultyTargetHash(),
}
}

// TODO_CONSIDERATION: We could potentially compute the smoothing factor
// using a common formula, such as alpha = 2 / (N+1), where N is the number
// of periods.
// N := ctx.BlockHeight() - prevDifficulty.BlockHeight
// alpha := 2 / (1 + N)
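// For example, if 19 blocks elapsed since the previous update, this would
// yield alpha = 2 / (1 + 19) = 0.1, matching the hard-coded smoothing factor.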
alpha := emaSmoothingFactor

// Compute the updated EMA of the number of relays.
prevRelaysEma := prevDifficulty.NumRelaysEma
newRelaysEma := computeEma(alpha, prevRelaysEma, numRelays)
difficultyHash := ComputeNewDifficultyTargetHash(TargetNumRelays, newRelaysEma)
newDifficulty := types.RelayMiningDifficulty{
ServiceId: serviceId,
BlockHeight: sdkCtx.BlockHeight(),
NumRelaysEma: newRelaysEma,
TargetHash: difficultyHash,
}
k.SetRelayMiningDifficulty(ctx, newDifficulty)

// TODO_UPNEXT(#542, @Olshansk): Emit an event for the updated difficulty.
logger.Info(fmt.Sprintf("Updated relay mining difficulty for service %s at height %d from %v to %v", serviceId, sdkCtx.BlockHeight(), prevDifficulty.TargetHash, newDifficulty.TargetHash))

}
return nil
}

// ComputeNewDifficultyTargetHash computes the new difficulty target hash based
// on the target number of relays we want the network to mine and the new EMA of
// the number of relays.
// NB: Exported for testing purposes only.
func ComputeNewDifficultyTargetHash(targetNumRelays, newRelaysEma uint64) []byte {
// If the target number of relays we want the network to mine is greater than
// the actual on-chain relay EMA, we don't need to scale the difficulty to
// anything above the default.
if targetNumRelays > newRelaysEma {
return defaultDifficultyTargetHash()
}

log2 := func(x float64) float64 {
return math.Log(x) / math.Ln2
}

// We are dealing with a bitwise binary distribution, and are trying to map
// the proportion of off-chain relays (i.e. relayEMA) that become
// on-chain relays (i.e. target) to the probability of x leading zeros
// in the target hash.
//
// In other words, the probability of an off-chain relay moving into the tree
// should equal (approximately) the probability of having x leading zeroes
// in the target hash.
//
// The construction is as follows:
// (0.5)^num_leading_zeroes = (num_target_relay / num_total_relays)
// (0.5)^x = (T/R)
// x = -log2(T/R)
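//
// For example (illustrative numbers only): if T = 10^5 and R = 10^6, then
// x = -log2(10^5 / 10^6) = log2(10) ≈ 3.32, which the conversion below
// truncates to 3 leading zero bits.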
numLeadingZeroBits := int(-log2(float64(targetNumRelays) / float64(newRelaysEma)))
numBytes := proofkeeper.SmtSpec.PathHasherSize()
return LeadingZeroBitsToTargetDifficultyHash(numLeadingZeroBits, numBytes)
}

// defaultDifficultyTargetHash returns the default difficulty target hash with
// the default number of leading zero bits.
func defaultDifficultyTargetHash() []byte {
numBytes := proofkeeper.SmtSpec.PathHasherSize()
numDefaultLeadingZeroBits := int(prooftypes.DefaultMinRelayDifficultyBits)
return LeadingZeroBitsToTargetDifficultyHash(numDefaultLeadingZeroBits, numBytes)
}

// computeEma computes the EMA at time t, given the EMA at time t-1, the raw
// data revealed at time t, and the smoothing factor α.
// Src: https://en.wikipedia.org/wiki/Exponential_smoothing
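// Illustrative example: with alpha = 0.1, prevEma = 1000 and currValue = 2000,
// the new EMA is uint64(0.1*2000 + 0.9*1000) = 1100.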
func computeEma(alpha float64, prevEma, currValue uint64) uint64 {
return uint64(alpha*float64(currValue) + (1-alpha)*float64(prevEma))
}

// LeadingZeroBitsToTargetDifficultyHash generates a slice of bytes with the specified number of leading zero bits
// NB: Exported for testing purposes only.
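// Illustrative example: numLeadingZeroBits = 3 and numBytes = 32 yields a hash
// whose first byte is 0xff >> 3 = 0x1f, followed by 31 bytes of 0xff.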
func LeadingZeroBitsToTargetDifficultyHash(numLeadingZeroBits int, numBytes int) []byte {
targetDifficultyHash := make([]byte, numBytes)

// Set everything to 1s initially
for i := range targetDifficultyHash {
targetDifficultyHash[i] = 0xff
}

// Set full zero bytes
fullZeroBytes := numLeadingZeroBits / 8
for i := 0; i < fullZeroBytes; i++ {
targetDifficultyHash[i] = 0
}

// Set remaining bits in the next byte
remainingZeroBits := numLeadingZeroBits % 8
if remainingZeroBits > 0 {
targetDifficultyHash[fullZeroBytes] = byte(0xff >> remainingZeroBits)
}

return targetDifficultyHash
}
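
The companion test file `x/tokenomics/keeper/update_relay_mining_difficulty_test.go` is not rendered above. Purely as an illustration of the exported helper's behavior (a hypothetical check, not the committed tests), something like the following should hold for the implementation shown:

```go
package keeper_test

import (
	"bytes"
	"testing"

	"github.com/stretchr/testify/require"

	"github.com/pokt-network/poktroll/x/tokenomics/keeper"
)

func TestLeadingZeroBitsToTargetDifficultyHash_ThreeBits(t *testing.T) {
	// 3 leading zero bits over a 32-byte hash: the first byte is 0xff >> 3 = 0x1f
	// and the remaining 31 bytes stay 0xff.
	expected := append([]byte{0x1f}, bytes.Repeat([]byte{0xff}, 31)...)
	require.Equal(t, expected, keeper.LeadingZeroBitsToTargetDifficultyHash(3, 32))
}
```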