update docs for latest database changes
SebastienGllmt committed Sep 5, 2024
1 parent 1a5b9d8 commit 5f63845
Showing 8 changed files with 122 additions and 68 deletions.
6 changes: 3 additions & 3 deletions docs/home/0-intro/0-what-is-paima-engine.md
Original file line number Diff line number Diff line change
Expand Up @@ -8,9 +8,9 @@ slug: /
Paima is a Web3 Engine optimized for games, gamification and autonomous worlds that allows building web3 applications in just days

Notably, its key features are that it
1. Enables you to deploy your game to leverage multiple chains and modular stacks at once in a single unified user and developer experience
2. Allows building onchain games with web2 skills
3. Protects users even in the case of hacks allowing brands to build web3 applications without worrying
4. Speeds you up by making weekly releases a reality, unlike most web3 games which are release-and-pray

Paima supports multiple chains out of the box, making it a fully modular rollup framework.
12 changes: 6 additions & 6 deletions docs/home/100-state-machine/100-define-machine/10-read-data.md
Expand Up @@ -27,17 +27,17 @@ Paima works by updating your state machine whenever something happens onchain - the most c
Your parser can then be used in the _stf_ (state transition function) of your application

```typescript
import type { STFSubmittedData } from '@paima/sdk/chain-types';
import type Prando from '@paima/sdk/prando';
import type { Pool } from 'pg';
import type { PreExecutionBlockHeader } from '@paima/sdk/chain-types';

export default async function (
inputData: STFSubmittedData,
blockHeader: PreExecutionBlockHeader,
randomnessGenerator: Prando,
dbConn: Pool
): Promise<{ stateTransitions: SQLUpdate[]; events: Events }> {
console.log(inputData, 'parsing input data');
const user = inputData.userAddress.toLowerCase();

Expand All @@ -53,7 +53,7 @@ export default async function (
case 'createLobby':
// handle this input however you need (but needs to be deterministic)
default:
return { stateTransitions: [], events: [] };
}
}
```
66 changes: 49 additions & 17 deletions docs/home/100-state-machine/1000-structure.md
Expand Up @@ -19,35 +19,67 @@ There are a few tricky parts to computing transaction hashes in Paima-based roll
1. **Timer IDs**: Some transactions may be initiated by [timers](./325-creating-events/50-timers-ticks.md). We differentiate timers with [precompiles](./325-creating-events/300-precompiles/100-introduction.md), but the same timer can produce the same data (ex: "reset daily leaderboard") many times, and could even generate multiple identical events in the same block. To tackle this, we include both the block number where the timer is expected to trigger as well as a unique incrementing ID for each event triggered from the same timer (`indexForEvent`) so that they all get a different final tx hash.
1. **Separating elements**: Since Paima transaction hashes need to combine multiple inputs of different lengths, we use a separator `|` to separate the fields.

To calculate a transaction hash, there are two cases that need to be handled:

1. For [primitives](./300-react-to-events/10-primitive-catalogue/1-introduction.md) / [direct user transactions](./200-direct-write/20-write-data.md),
```js
'0x' +
keccak_256(
caip2Prefix |
origin_tx_hash |
indexForEvent(origin_tx_hash) // to handle the fact one tx hash on the origin chain can trigger multiple STF updates on the rollup
)
```

2. For [timers](./325-creating-events/50-timers-ticks.md),
```js
'0x' +
keccak_256(
userAddress |
keccak_256(input_data) |
scheduledBlockHeight |
timerIndexRelativeToBlock // to handle the fact the same timer can trigger multiple times in the same block
),
```
## Blocks
Paima blocks have the following data type available as both `PreExecutionBlockHeader` and `PostExecutionBlockHeader` as part of `@paima/chain-types`
```ts
{
version: 1;
mainChainBlockHash: string;
blockHeight: number;
prevBlockHash: string | null;
msTimestamp: number;
successTxsHash: string;
failedTxsHash: string;
}
```

Notably, all transactions that trigger STF calls affect the block hash (which is a keccak hash of a `|`-separated combination of the above-mentioned fields; see `hashBlock` in `@paima/chain-types` for more)
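To make the `|`-separated scheme concrete, here is a hedged sketch of building the block-hash preimage from the header fields above. The field order and the encoding of `null` are assumptions; the authoritative logic is `hashBlock` in `@paima/chain-types`.

```typescript
// Sketch of a block-hash preimage; mirrors the header type shown above.
interface PreExecutionBlockHeaderSketch {
  version: number;
  mainChainBlockHash: string;
  blockHeight: number;
  prevBlockHash: string | null;
  msTimestamp: number;
  successTxsHash: string;
  failedTxsHash: string;
}

function blockHashPreimage(h: PreExecutionBlockHeaderSketch): string {
  return [
    String(h.version),
    h.mainChainBlockHash,
    String(h.blockHeight),
    h.prevBlockHash ?? '', // assumption: null encoded as an empty field
    String(h.msTimestamp),
    h.successTxsHash, // note: failed txs also affect the block hash
    h.failedTxsHash,
  ].join('|');
}
```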

Paima blocks follow a few key design decisions

### Decision 1) No Merklization of inputs

Typically, blocks contain a Merkle root of the inputs within a block, as it allows you to prove inclusion of a transaction within a block in logarithmic space/time. However, Merklization also has a performance cost, so it should only be used when needed. You can find the performance rationale for this decision [here](https://github.com/PaimaStudios/paima-engine/pull/423)

The key points are that:
1. Merklization (especially in JS) is slow
1. Checking if an input is part of a block's Merkle tree is not a common action, and if really needed you can check if the input is contained in the relevant underlying chain instead of querying this information through Paima
1. Merklization doesn't give a large benefit when it comes to techniques like storage proofs (ZK)

### Decision 2) Failed transactions affect the block hash

Failed transactions here refer to any transaction that made it all the way to an STF call, and then failed during the STF computation itself. Storing these is useful for debugging, and they do not present a trivial DOS vector, as these transactions are triggered by actions on the underlying chain (which have gas costs) or things like timers (which the app developer controls).

### Decision 3) Primitives that do not trigger an STF do not modify the block hash for that block

Implicit state (state that does not come from explicit user inputs) typically does not modify the block hash, as an industry convention (ex: many chains have implicit state like "staking rewards" that accumulates over time without being reflected in the block hash)

You can learn more about how this works in relation to primitives [here](./300-react-to-events/10-primitive-catalogue/1-introduction.md#accessing-the-collected-data)

### Decision 4) There is no genesis hash

Typically, chains have a "genesis block". However, in Paima, it's not clear what the "genesis" hash would refer to in a generic way. You can find a discussion on this point [here](https://github.com/PaimaStudios/paima-engine/issues/424)
Expand Up @@ -4,9 +4,9 @@ sidebar_position: 2

# Primitive Catalogue

When writing an application, you often want to update your application based on common patterns (ex: token transfers). Instead of having to re-implement these patterns from scratch every time, Paima Engine can automatically do the heavy work for you via a feature called the _Primitive Catalogue_.

Primitives allow you to tap into these standards trustlessly from multiple locations (such as various L1/L2s), either for simple accounting purposes (ex: keeping track of token ownership by accounts) or for triggering more complex update logic specified by your application's state machine. The goal of the Primitive Catalogue is to be the Library of Alexandria of primitives necessary to build onchain games.

<div style={{textAlign: 'center'}}>
![](./primitive-catalogue.png)
Expand Down Expand Up @@ -45,22 +45,34 @@ If you try to run your game node with an invalid or non-existent Primitive Catal

## Accessing the collected data

Primitive data is written directly into the ledger state for your rollup, including the underlying database. You can learn more about how to fetch the aggregated information, either from your state machine or via SQL queries, by reading the documentation for the corresponding primitive.

### *Implicit ledger state*

Some primitives work by collecting the data and saving it into your game database directly, without necessarily triggering your STF. This is useful if you want to passively aggregate information for future use in your application (ex: keeping track of user token balances) without having to write no-op STF handlers for all of them.

In this case, the data can still be accessed through SQL queries directly against the corresponding database, and you can also access it through JavaScript with opinionated, primitive-specific utility functions.

The data collected and the functions used to access it are specific to each type of extension; you can find more information in their respective sections. In general, be aware that these functions read directly from the game state database (which is what the `readonlyDBConn` parameter is for), and you will need to specify the extension name (which is what the `cdeName` parameter in each function is for), which needs to correspond to the name you specified in the configuration file.
Note that, since these modify implicit ledger state, they will not modify the block hash of your L2 blocks (this is industry standard, in the same way that for other blockchains things like epoch transitions are not reflected in the block hash)

### *Explicit ledger state*

Some primitives work by creating [scheduled inputs](../../325-creating-events/50-timers-ticks.md) when certain events happen, which you can then react to in your [state transition function](../../../read-write-L2-state/read-data#stf-function).

The exact data passed to your STF depends on the extension, and you can read the documentation of each extension to learn more.

Given these primitives trigger a state transition, they are also each given a transaction hash, and the STF call triggered by a scheduled input originating from a Primitive can also access:
- `inputData.scheduledTxHash`: the original transaction hash that triggered this primitive
- `inputData.extensionName`: the primitive that triggered the STF (name specified in your config file)
- `caip2`: the [caip2](https://github.com/ChainAgnostic/CAIPs/blob/main/CAIPs/caip-2.md) id of the chain that triggered the event

The inputs are always scheduled either for the current blockheight (which enables them to be processed immediately, as scheduled inputs are processed before the state transition function is called), or, if they are triggered before the overall `START_BLOCKHEIGHT` of the game node (specified in the `.env` file), in the so-called _pre-sync_ phase, they are scheduled for `START_BLOCKHEIGHT + 1` (which is the first blockheight for which the state transition function is called). The scheduled inputs will always start with the prefix specified in the config as `scheduledPrefix`.
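The scheduling rule above can be sketched as a small helper. This is a simplified model of the behavior as described (the real engine applies this logic internally), with `START_BLOCKHEIGHT` passed in as a parameter:

```typescript
// Sketch: at which block height does a scheduled input actually run?
function effectiveScheduledHeight(
  triggeredAtHeight: number, // height on which the triggering event occurred
  startBlockheight: number // START_BLOCKHEIGHT from the .env file
): number {
  // Inputs triggered during the pre-sync phase (before START_BLOCKHEIGHT) are
  // deferred to the first block the STF runs for; all others are scheduled for
  // their own height, and run before the STF is called for that block.
  return triggeredAtHeight < startBlockheight
    ? startBlockheight + 1
    : triggeredAtHeight;
}
```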

<!-- TODO: this template is deprecated -->
<!-- To learn by example, please consult the NFT LvlUp game template &ndash; `./paima-engine-linux init template nft-lvlup` to learn more. -->

## Relation to funnels

Paima [funnels](../3-funnel-types/1-common-concepts/1-intro.md) are in charge of fetching data from various sources for your game, including data for the Primitive Catalogue which are stored as part of `ChainData`. Depending on where the data you want to access comes from, you may have to add an extra funnel to your game.
Expand Up @@ -15,12 +15,12 @@ interface ChainFunnel {
readData: (blockHeight: number) => Promise<ChainData[]>;
readPresyncData: (
args: ReadPresyncDataFrom
) => Promise<{ [caip2: string]: PresyncChainData[] | 'finished' }>;
getDbTx(): PoolClient;
}

type ReadPresyncDataFrom = {
caip2: string;
from: number;
to: number;
}[];
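For illustration, here is a hedged sketch of building a `ReadPresyncDataFrom` argument keyed by CAIP-2 ids. The specific chain ids and block ranges are placeholder assumptions:

```typescript
// Per-chain block ranges to presync, identified by CAIP-2 ids.
type ReadPresyncDataFrom = {
  caip2: string;
  from: number;
  to: number;
}[];

const presyncArgs: ReadPresyncDataFrom = [
  { caip2: 'eip155:1', from: 0, to: 5000 }, // Ethereum mainnet
  { caip2: 'eip155:42161', from: 0, to: 5000 }, // Arbitrum One
];
```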
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -29,9 +29,9 @@ Then, when registering timers or events to trigger under this precompile,
```ts
createScheduledData(
`tick|${input.n + 1}`,
{ blockHeight: someBlockHeight },
// highlight-next-line
{ precompile: precompiles[PrecompileNames.GameTick] }
)
```

60 changes: 35 additions & 25 deletions docs/home/100-state-machine/325-creating-events/50-timers-ticks.md
Expand Up @@ -13,12 +13,12 @@ There are three common usages of timers in Paima
## 1. Durations

There are two functions for scheduling events
- `createScheduledData(inputData: string, { blockHeight: number }, { precompile: string }): SQLUpdate`
- `deleteScheduledData(inputData: string, { blockHeight: number | null }): SQLUpdate`

These can be used to schedule an event that happens in 5 minutes (ex: a potion whose status wears off eventually).

The `precompile` argument in `createScheduledData` needs to be one of the values of the object defined through [paima precompiles](../325-creating-events/300-precompiles/100-introduction.md). The associated precompile address will be used as the `userAddress` when the event is triggered.
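As a hedged example of the "potion wears off in 5 minutes" case: since scheduling works off block heights, you first convert the duration to blocks. `SQLUpdate` and `createScheduledData` are stubbed below to keep the sketch self-contained (in a real app they come from the Paima SDK), and the 4-second block time is an assumption for your target chain:

```typescript
// Stubs standing in for the Paima SDK (assumed shapes, not the real API surface).
type SQLUpdate = [string, object];
declare function createScheduledData(
  inputData: string,
  target: { blockHeight: number },
  options: { precompile: string }
): SQLUpdate;

const BLOCKS_PER_MINUTE = 60 / 4; // assumption: ~4s block time

// The potion's effect should wear off ~5 minutes from now.
function potionWearOffHeight(currentBlockHeight: number): number {
  return currentBlockHeight + 5 * BLOCKS_PER_MINUTE;
}
```

You would then pass `potionWearOffHeight(blockHeader.blockHeight)` as the `blockHeight` target when calling `createScheduledData` from your STF.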

:::tip
Scheduled data works off of `blockHeight` and not timestamps. You can learn more about the technical challenges that lead to this design, as well as ways to mitigate this in the [emulated block docs](../300-react-to-events/3-funnel-types/400-stable-tick-rate-funnel.mdx).
Expand Down Expand Up @@ -68,23 +68,32 @@ Create `db/migrations/1.sql` and add an input to execute the first schedule.
For example, imagine we created a precompile called `reset-leaderboard`

```SQL wordWrap=true
WITH
  new_tick AS (
    INSERT INTO rollup_inputs (from_address, input_data)
    VALUES (
      your_precompile_address_hash,
      your_data_here
    )
    RETURNING id AS new_tick_id
  ),
  future_block AS (
    INSERT INTO rollup_input_future_block (id, future_block_height)
    VALUES (
      (SELECT new_tick_id FROM new_tick),
      -- get the latest block + 1
      coalesce((
        SELECT block_height
        FROM paima_blocks
        ORDER BY block_height DESC
        LIMIT 1
      ), 0) + 2
    )
  )
INSERT INTO rollup_input_origin (id, primitive_name, caip2, tx_hash)
SELECT new_tick_id, NULL, NULL, NULL
FROM new_tick

```

*NOTE*: You can replace the value for the `future_block_height` if you need to run this at a specific time
Expand Down Expand Up @@ -124,17 +133,17 @@ export interface ScheduleHourlyInput {
Capture the input in the STF and process it (Generally in `state-transition/src/stf/v1/index.ts`)

```ts
import type { STFSubmittedData } from '@paima/sdk/chain-types';
import type Prando from '@paima/sdk/prando';
import type { Pool } from 'pg';
import type { PreExecutionBlockHeader } from '@paima/sdk/chain-types';

export default async function (
inputData: STFSubmittedData,
blockHeader: PreExecutionBlockHeader,
randomnessGenerator: Prando,
dbConn: Pool
): Promise<{ stateTransitions: SQLUpdate[]; events: Events }> {

const input = parse(inputData.inputData);

Expand All @@ -155,11 +164,12 @@ export default async function (
// highlight-start
commands.push(createScheduledData(
`hour|${input.tick + 1}`,
{ blockHeight: blockHeader.blockHeight + hourBlocks },
{ precompile: your_precompile_here }
));
// highlight-end

return { stateTransitions: commands, events: [] };
}
}
...
Expand Up @@ -98,11 +98,11 @@ As user wallets might change over time, since they can delegate, migrate and cancel
```js
// main entry point for your game's state machine
export default async function (
inputData: STFSubmittedData,
blockHeader: PreExecutionBlockHeader,
randomnessGenerator: Prando,
dbConn: Pool
): Promise<{ stateTransitions: SQLUpdate[]; events: Events }> {
// highlight-start
/* use this user to identify the player instead of userAddress or realAddress */
const user = String(inputData.userId);
