Feat: min slot bid svm #291
Changes from all commits
```diff
@@ -15,6 +15,7 @@ use {
         client_error,
         rpc_response::{
             Response,
+            RpcResponseContext,
             RpcResult,
         },
     },
@@ -166,12 +167,14 @@ impl Simulator {
         Ok(result.value)
     }

     /// Fetches multiple accounts from the RPC in chunks
     /// There is no guarantee that all the accounts will be fetched with the same slot
     async fn get_multiple_accounts_chunked(
         &self,
         keys: &[Pubkey],
     ) -> RpcResult<Vec<Option<Account>>> {
         let mut result = vec![];
-        let mut last_context = None;
+        let mut context_with_min_slot: Option<RpcResponseContext> = None;
         const MAX_RPC_ACCOUNT_LIMIT: usize = 100;
         // Ensure at least one call is made, even if keys is empty
         let key_chunks = if keys.is_empty() {
@@ -189,11 +192,15 @@ impl Simulator {
         for chunk_result in chunk_results {
             let chunk_result = chunk_result?;
             result.extend(chunk_result.value);
-            last_context = Some(chunk_result.context);
+            if context_with_min_slot.is_none()
+                || context_with_min_slot.as_ref().unwrap().slot > chunk_result.context.slot
+            {
+                context_with_min_slot = Some(chunk_result.context);
+            }
         }
         Ok(Response {
             value: result,
-            context: last_context.unwrap(), // Safe because we ensured at least one call was made
+            context: context_with_min_slot.unwrap(), // Safe because we ensured at least one call was made
         })
     }
```

> **Reviewer:** As a note, this could lead to accounts drawn from different slots; there may be some issues with that with regard to mistaken simulation results. It's good to use the min slot below, but there may still be issues resulting from the different results having different context slots, e.g. a dex router tx simulates successfully when accounts are all synced, but may fail simulation when they are drawn at different context slots.
>
> **Author:** Agreed, this can become problematic with AMM accounts. I added a comment.
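The min-slot bookkeeping above can be sketched in isolation. This is a minimal, dependency-free sketch, assuming a hypothetical `Context` struct as a stand-in for `RpcResponseContext` (the real type lives in the Solana client's `rpc_response` module); it folds per-chunk contexts down to the one with the smallest slot, i.e. the most conservative slot any returned account could correspond to.

```rust
// Hypothetical stand-in for RpcResponseContext; only the slot field matters here.
#[derive(Clone, Debug, PartialEq)]
struct Context {
    slot: u64,
}

// Keep the context with the minimum slot across all chunk responses,
// mirroring the PR's replacement of `last_context` with `context_with_min_slot`.
fn min_slot_context(chunks: &[Context]) -> Option<Context> {
    let mut context_with_min_slot: Option<Context> = None;
    for chunk in chunks {
        let is_smaller = context_with_min_slot
            .as_ref()
            .map_or(true, |c| c.slot > chunk.slot);
        if is_smaller {
            context_with_min_slot = Some(chunk.clone());
        }
    }
    context_with_min_slot
}

fn main() {
    let contexts = [
        Context { slot: 120 },
        Context { slot: 118 },
        Context { slot: 121 },
    ];
    // The reported context is the oldest slot seen across chunks.
    println!("{:?}", min_slot_context(&contexts)); // Some(Context { slot: 118 })
}
```

Reporting the minimum rather than the last context means callers can only over-estimate staleness, never under-estimate it, which is the safe direction for the retry check downstream.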
```diff
@@ -59,8 +59,10 @@ use {
             U256,
         },
     },
+    litesvm::types::FailedTransactionMetadata,
     solana_sdk::{
         address_lookup_table::state::AddressLookupTable,
+        clock::Slot,
         commitment_config::CommitmentConfig,
         compute_budget,
         instruction::CompiledInstruction,
@@ -578,25 +580,57 @@ impl Service<Svm> {
     }

     pub async fn simulate_bid(&self, bid: &entities::BidCreate<Svm>) -> Result<(), RestError> {
-        let response = self
-            .config
-            .chain_config
-            .simulator
-            .simulate_transaction(&bid.chain_data.transaction)
-            .await;
-        let result = response.map_err(|e| {
-            tracing::error!("Error while simulating bid: {:?}", e);
-            RestError::TemporarilyUnavailable
-        })?;
-        match result.value {
-            Err(err) => {
-                let msgs = err.meta.logs;
-                Err(RestError::SimulationError {
-                    result: Default::default(),
-                    reason: msgs.join("\n"),
-                })
-            }
-            Ok(_) => Ok(()),
-        }
+        const RETRY_LIMIT: usize = 5;
+        const RETRY_DELAY: Duration = Duration::from_millis(100);
+        let mut retry_count = 0;
+        let bid_slot = bid.chain_data.slot.unwrap_or_default();
+
+        let should_retry = |result_slot: Slot,
+                            retry_count: usize,
+                            err: &FailedTransactionMetadata|
+         -> bool {
+            if result_slot < bid_slot && retry_count < RETRY_LIMIT {
+                tracing::warn!(
+                    "Simulation failed with stale slot. Simulation slot: {}, Bid Slot: {}, Retry count: {}, Error: {:?}",
+                    result_slot,
+                    bid_slot,
+                    retry_count,
+                    err
+                );
+                true
+            } else {
+                false
+            }
+        };
+
+        loop {
+            let response = self
+                .config
+                .chain_config
+                .simulator
+                .simulate_transaction(&bid.chain_data.transaction)
+                .await;
+            let result = response.map_err(|e| {
+                tracing::error!("Error while simulating bid: {:?}", e);
+                RestError::TemporarilyUnavailable
+            })?;
+            return match result.value {
+                Err(err) => {
+                    if should_retry(result.context.slot, retry_count, &err) {
+                        tokio::time::sleep(RETRY_DELAY).await;
+                        retry_count += 1;
+                        continue;
+                    }
+                    let msgs = err.meta.logs;
+                    Err(RestError::SimulationError {
+                        result: Default::default(),
+                        reason: msgs.join("\n"),
+                    })
+                }
+                // Not important to check if bid slot is less than simulation slot if simulation is successful
+                // since we want to fix incorrect verifications due to stale slot
+                Ok(_) => Ok(()),
+            };
+        }
+    }
```

> **Reviewer:** I think calling this function recursively will make the code more readable (instead of having a loop).
>
> **Author:** I also considered that, but it was also a bit weird. Refactored a bit to make it more readable.

> **Reviewer:** I think the retry should happen within the fetching, before the simulator actually runs. Otherwise it's a weird interface where we're asking for a minimum slot to simulate against and then ignoring that as long as it passes simulation. We can also add some checks to make sure searchers aren't submitting slots that are more than some reasonable number off of where the RPC currently stands.
>
> **Author:** I consider this internal behaviour which is subject to change. The interface only adds the option for a searcher to guarantee successful simulation from the specified slot; how we verify this internally is not part of the interface. I classify that as a DoS attack and don't think it's a priority.
>
> **Reviewer:** In that case, maybe it's worth a comment baking that into the interface on the SDK side, or in the
>
> **Author:** There is a comment already there. Do you think we should explain more?

> **Reviewer:** This comment is a bit confusing; I'm not sure what you're trying to clarify.
>
> **Author:** Trying to clarify why we are not checking the bid slot against `result.context.slot` in all cases.
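The retry policy in the diff above can be sketched without the Solana or tokio machinery. This is a hedged, self-contained sketch under assumed stand-ins: plain `u64` slots instead of `Slot`, a `simulate_with_retries` driver fed by a fixed list of fake `(context_slot, passed)` outcomes instead of the real async simulator, and no sleep between attempts. The policy it demonstrates matches the diff: retry only when simulation failed and the observed context slot is still behind the requested bid slot, up to a fixed limit, and accept any successful simulation regardless of slot.

```rust
const RETRY_LIMIT: usize = 5;

// Retry only on failure against a context slot older than the bid slot.
fn should_retry(result_slot: u64, bid_slot: u64, retry_count: usize) -> bool {
    result_slot < bid_slot && retry_count < RETRY_LIMIT
}

// Fake driver: `outcomes` must be non-empty; attempt i reads outcomes[i]
// (clamped to the last entry). Returns the retry count on success.
fn simulate_with_retries(bid_slot: u64, outcomes: &[(u64, bool)]) -> Result<usize, String> {
    let mut retry_count = 0;
    loop {
        let (slot, ok) = outcomes[retry_count.min(outcomes.len() - 1)];
        if ok {
            // A successful simulation is accepted even if slot < bid_slot.
            return Ok(retry_count);
        }
        if should_retry(slot, bid_slot, retry_count) {
            retry_count += 1;
            continue;
        }
        return Err(format!("simulation failed at slot {slot}"));
    }
}

fn main() {
    // Fails once at stale slot 99, then succeeds when the RPC reaches 100.
    println!("{:?}", simulate_with_retries(100, &[(99, false), (100, true)])); // Ok(1)
}
```

Note the asymmetry the review thread discusses: the slot check only gates *failed* simulations, so a pass at a stale slot is still accepted; the retry exists to avoid rejecting bids due to stale state, not to pin simulation to an exact slot.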
> **Reviewer:** I think this is still weird, but I don't have a great solution for it at the moment. Maybe highlight this as a TODO to think about downstream simulation issues.

> **Reviewer:** If the max RPC account limit is 100, it should be fine though? There's a max of 64 accounts per transaction; where did you get the max limit of 100 from?
>
> **Author:** Based on the docs: https://solana.com/docs/rpc/http/getmultipleaccounts. We try to fetch all the accounts needed for multiple transactions, so we might need chunking.
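Since accounts for several transactions are fetched in one pass, the key list can exceed the 100-key cap of `getMultipleAccounts`, hence the chunking. A minimal sketch of that splitting, using `u32` as a hypothetical stand-in for `Pubkey` and including the "at least one call" special case from the diff:

```rust
// Per the getMultipleAccounts docs, a single RPC call accepts at most 100 keys.
const MAX_RPC_ACCOUNT_LIMIT: usize = 100;

fn chunk_keys(keys: &[u32]) -> Vec<Vec<u32>> {
    if keys.is_empty() {
        // Ensure at least one (empty) call is made, even if keys is empty,
        // so the response always carries a context slot to report.
        return vec![vec![]];
    }
    keys.chunks(MAX_RPC_ACCOUNT_LIMIT)
        .map(|c| c.to_vec())
        .collect()
}

fn main() {
    let keys: Vec<u32> = (0..250).collect();
    let sizes: Vec<usize> = chunk_keys(&keys).iter().map(|c| c.len()).collect();
    println!("{:?}", sizes); // [100, 100, 50]
}
```

The empty-input branch is what makes the later `context_with_min_slot.unwrap()` safe: every call path produces at least one response context.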