Commit 38c82d1

chore: updated README from feedback

JoblersTune committed Aug 16, 2024
1 parent 720a702

packages/backend/src/payment-method/ilp/connector/core/README.md (10 additions and 6 deletions)
The first decision in collecting this data was whether to do so on the sender's or the receiver's side.

We also considered collecting metrics on both the sending and receiving sides, capturing metrics when prepare packets were received by the receiver and when fulfill or reject responses were received by the sender. However, this could lead to unreliable metrics, as telemetry is optional and might not be enabled on all nodes.

### Why We Chose to Collect Metrics Before and After the Sending Node's Middleware Routes

Given these considerations, we decided to place our packet count and amount metrics within the Rafiki Connector Core. The core plays a crucial role in processing ILP packets, handling both outgoing payments and quotes.

The specific metrics we collect here include:

- packet_count_prepare: Counts the prepare packets sent.
- packet_count_fulfill: Counts the fulfill packets received.
- packet_count_reject: Counts the reject packets received.
- packet_amount_fulfill: Records the amount sent in fulfill packets.

These metrics provide valuable insights, including our success and error rates when sending packets over the network.
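
For illustration, the sketch below shows how counters with these names could be incremented before and after the middleware routes using the OpenTelemetry JS API. The packet shapes and the `sendPacket` wrapper are hypothetical stand-ins, not Rafiki's actual interfaces.

```ts
import { metrics } from '@opentelemetry/api'

const meter = metrics.getMeter('ilp-connector')
const packetCountPrepare = meter.createCounter('packet_count_prepare')
const packetCountFulfill = meter.createCounter('packet_count_fulfill')
const packetCountReject = meter.createCounter('packet_count_reject')
const packetAmountFulfill = meter.createCounter('packet_amount_fulfill')

interface IlpPrepare {
  amount: bigint
}

interface IlpReply {
  fulfilled: boolean
}

// Hypothetical wrapper around the sending node's middleware routes.
async function sendPacket(
  prepare: IlpPrepare,
  routes: (prepare: IlpPrepare) => Promise<IlpReply>
): Promise<IlpReply> {
  // Before the middleware routes: count the outgoing prepare.
  packetCountPrepare.add(1)

  const reply = await routes(prepare)

  // After the middleware routes: count the response, and the amount
  // on fulfills. Number() is for illustration; real code would need
  // to handle bigint precision.
  if (reply.fulfilled) {
    packetCountFulfill.add(1)
    packetAmountFulfill.add(Number(prepare.amount))
  } else {
    packetCountReject.add(1)
  }
  return reply
}
```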

### Challenges with the Current Setup

While collecting metrics before and after the sending node's middleware routes is effective, this approach has some limitations:

Sender-Side Limitation: Currently, metrics are only collected on the sender side. This is adequate for now but does not capture data from connecting nodes, which we plan to address in future implementations when multi-hop support is added.

Consider a scenario where a sender (node A) wants to send money to a receiver (node C), and the transaction takes place via an intermediary (node B):

sender (node A) --> connector (node B) --> receiver (node C)

With the current metric collection location, we only collect packet count and amount information from the sender (at node A), and we miss the packets and amounts forwarded by the connecting node (node B). Ideally, we'd like to collect information from both the sender and the connector nodes once multi-hop routing has been implemented.
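
As a purely hypothetical sketch of what that could look like once multi-hop support exists, each node could tag its counter increments with its role in the route, so sender traffic (node A) and forwarded traffic (node B) remain distinguishable. The `node_role` attribute below is an illustrative assumption, not a decided design.

```ts
import { metrics } from '@opentelemetry/api'

const meter = metrics.getMeter('ilp-connector')
const packetCountPrepare = meter.createCounter('packet_count_prepare')

// Hypothetical role tag: 'sender' at node A, 'connector' at node B.
type NodeRole = 'sender' | 'connector'

function countPrepare(role: NodeRole): void {
  packetCountPrepare.add(1, { node_role: role })
}

countPrepare('sender')    // node A: collected today
countPrepare('connector') // node B: missing until multi-hop lands
```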

### Other Considered Locations for Metrics Collection

We explored several alternative locations for collecting telemetry metrics within the Rafiki Connector Core:

- Dedicated Middleware: Initially, we implemented a dedicated telemetry middleware. However, this approach resulted in data being duplicated on both the sender and receiver sides, leading to inaccurate metrics. To address this, we would need to filter the data to ensure metrics are only collected on the sender side. Additionally, the telemetry middleware would need to effectively handle errors thrown in the middleware chain. To achieve this, it might be necessary to place the telemetry middleware right before the error-handling middleware, allowing it to catch and reflect any errors that occur. This would involve collecting the prepare packet count, wrapping the `next()` function in a try-catch block to capture any new errors, and then collecting reply and amount metrics after `next()` has resolved (see the sketch after this list).
- Around the ILP packet middleware on the Receiving Side: We also considered adding telemetry around the middleware on receiving and connecting nodes. However, this would lead to a fragmented metric collection, with some metrics gathered on the receiver side and others on the sender side. This fragmentation could complicate the handling of concepts like transaction count, which would require dual collection on both sender and receiver sides. We'd also have to watch for the possibility of data duplication on the connectors because their middleware might trigger in each direction, as they receive prepares and again as they receive responses.
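
To make the dedicated-middleware option concrete, the Koa-style sketch below follows the flow described in the first bullet: count the prepare packet, wrap `next()` in a try-catch, and collect reply and amount metrics after `next()` resolves. The context shape, the `isSender` filter, and the counter setup are assumptions for illustration, not Rafiki's actual middleware API.

```ts
import { metrics } from '@opentelemetry/api'

const meter = metrics.getMeter('ilp-connector')
const packetCountPrepare = meter.createCounter('packet_count_prepare')
const packetCountFulfill = meter.createCounter('packet_count_fulfill')
const packetCountReject = meter.createCounter('packet_count_reject')
const packetAmountFulfill = meter.createCounter('packet_amount_fulfill')

// Assumed context shape; Rafiki's real middleware context differs.
interface TelemetryContext {
  isSender: boolean
  prepare: { amount: bigint }
  reply?: { fulfilled: boolean }
}

async function telemetryMiddleware(
  ctx: TelemetryContext,
  next: () => Promise<void>
): Promise<void> {
  // Filter so metrics are only collected on the sender side.
  if (!ctx.isSender) return next()

  packetCountPrepare.add(1)
  try {
    // Run the rest of the middleware chain.
    await next()
  } catch (err) {
    // Reflect errors thrown further down the chain, then rethrow.
    packetCountReject.add(1)
    throw err
  }

  // Collect reply and amount metrics after next() has resolved.
  if (ctx.reply?.fulfilled) {
    packetCountFulfill.add(1)
    packetAmountFulfill.add(Number(ctx.prepare.amount))
  } else {
    packetCountReject.add(1)
  }
}
```

As noted above, this middleware would need to sit just before the error-handling middleware so that errors thrown further down the chain still pass through it.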

### Moving Forward

