edits_feb5 #15

Merged · 11 commits · Feb 13, 2024
Changes from 6 commits
187 changes: 116 additions & 71 deletions auction-server/src/api/marketplace.rs
@@ -140,11 +140,11 @@ pub async fn submit_opportunity(
.as_secs() as UnixTimestamp,
chain_id: opportunity.chain_id.clone(),
permission_key: opportunity.permission_key.clone(),
contract: opportunity.contract,
calldata: opportunity.calldata,
value: opportunity.value,
repay_tokens,
receipt_tokens,
contract: opportunity.contract.clone(),
calldata: opportunity.calldata.clone(),
value: opportunity.value.clone(),
repay_tokens: repay_tokens.clone(),
receipt_tokens: receipt_tokens.clone(),
bidders: Default::default(),
};

@@ -156,13 +156,49 @@ pub async fn submit_opportunity(
.await
.map_err(|e| RestError::InvalidOpportunity(e.to_string()))?;

let mut opportunities_existing = Vec::new();

if store
.liquidation_store
.opportunities
.read()
.await
.contains_key(&opportunity.permission_key)
{
opportunities_existing =
store.liquidation_store.opportunities.read().await[&opportunity.permission_key].clone();
let opportunity_top = opportunities_existing[0].clone();
Collaborator:
Why are you only checking against the first one and not all of them?

Contributor Author:
Mostly because the number of checks could grow out of proportion. At roughly 2.5 price updates per second, if an opportunity sticks around for a while, that vector could grow large and explode the number of checks here.

And I think the main check we want is to ensure that the exact same opportunity isn't submitted twice in a row. If the protocol is using the most recent price update, it should not revert to a historical price update, so we should be fine as long as we check the top opportunity in the stack.

// check if exact same opportunity exists already
if opportunity_top.chain_id == opportunity.chain_id
&& opportunity_top.contract == opportunity.contract
&& opportunity_top.calldata == opportunity.calldata
&& opportunity_top.value == opportunity.value
&& opportunity_top.repay_tokens == repay_tokens
&& opportunity_top.receipt_tokens == receipt_tokens
{
return Err(RestError::BadParameters(
"Duplicate opportunity submission".to_string(),
));
}
}

opportunities_existing.push(verified_opportunity.clone());

store
.liquidation_store
.opportunities
.write()
.await
.insert(opportunity.permission_key.clone(), verified_opportunity);
.insert(opportunity.permission_key.clone(), opportunities_existing);

tracing::info!(
"number of permission keys: {}",
store.liquidation_store.opportunities.read().await.len()
);
tracing::info!(
"number of opportunities for key: {}",
store.liquidation_store.opportunities.read().await[&opportunity.permission_key].len()
);
Ok(id.to_string())
}

@@ -174,36 +210,39 @@ pub async fn submit_opportunity(
pub async fn fetch_opportunities(
State(store): State<Arc<Store>>,
) -> Result<axum::Json<Vec<LiquidationOpportunityWithId>>, RestError> {
let opportunities: Vec<LiquidationOpportunityWithId> = store
let opportunity: Vec<LiquidationOpportunityWithId> = store
.liquidation_store
.opportunities
.read()
.await
.values()
.cloned()
.map(|opportunity| LiquidationOpportunityWithId {
opportunity_id: opportunity.id,
.map(|opportunities| LiquidationOpportunityWithId {
// only expose the most recent opportunity
opportunity_id: opportunities[0].id,
opportunity: LiquidationOpportunity {
permission_key: opportunity.permission_key,
chain_id: opportunity.chain_id,
contract: opportunity.contract,
calldata: opportunity.calldata,
value: opportunity.value,
repay_tokens: opportunity
permission_key: opportunities[0].permission_key.clone(),
chain_id: opportunities[0].chain_id.clone(),
contract: opportunities[0].contract,
calldata: opportunities[0].calldata.clone(),
value: opportunities[0].value,
repay_tokens: opportunities[0]
.repay_tokens
.clone()
.into_iter()
.map(TokenQty::from)
.collect(),
receipt_tokens: opportunity
receipt_tokens: opportunities[0]
.receipt_tokens
.clone()
.into_iter()
.map(TokenQty::from)
.collect(),
},
})
.collect();

Ok(opportunities.into())
Ok(opportunity.into())
}

#[derive(Serialize, Deserialize, ToSchema, Clone)]
@@ -242,7 +281,7 @@ pub async fn bid_opportunity(
State(store): State<Arc<Store>>,
Json(opportunity_bid): Json<OpportunityBid>,
) -> Result<String, RestError> {
let liquidation = store
let opportunities_liquidation = store
.liquidation_store
.opportunities
.read()
@@ -251,65 +290,71 @@
.ok_or(RestError::OpportunityNotFound)?
.clone();

let position_id = opportunities_liquidation
.iter()
.position(|o| o.id == opportunity_bid.opportunity_id);

if liquidation.id != opportunity_bid.opportunity_id {
return Err(RestError::BadParameters(
"Invalid opportunity_id".to_string(),
));
}
match position_id {
Some(index) => {
let liquidation = opportunities_liquidation[index].clone();

// TODO: move this logic to searcher side
if liquidation.bidders.contains(&opportunity_bid.liquidator) {
return Err(RestError::BadParameters(
"Liquidator already bid on this opportunity".to_string(),
));
}
// TODO: move this logic to searcher side
if liquidation.bidders.contains(&opportunity_bid.liquidator) {
return Err(RestError::BadParameters(
"Liquidator already bid on this opportunity".to_string(),
));
}

let chain_store = store
.chains
.get(&liquidation.chain_id)
.ok_or(RestError::InvalidChainId)?;
let chain_store = store
.chains
.get(&liquidation.chain_id)
.ok_or(RestError::InvalidChainId)?;

let per_calldata = make_liquidator_calldata(
liquidation.clone(),
opportunity_bid.clone(),
chain_store.provider.clone(),
chain_store.config.adapter_contract,
)
.await
.map_err(|e| RestError::BadParameters(e.to_string()))?;
match handle_bid(
store.clone(),
crate::api::rest::Bid {
permission_key: liquidation.permission_key.clone(),
chain_id: liquidation.chain_id.clone(),
contract: chain_store.config.adapter_contract,
calldata: per_calldata,
amount: opportunity_bid.amount,
},
)
.await
{
Ok(_) => {
let mut write_guard = store.liquidation_store.opportunities.write().await;
let liquidation = write_guard.get_mut(&opportunity_bid.permission_key);
if let Some(liquidation) = liquidation {
liquidation.bidders.insert(opportunity_bid.liquidator);
}
Ok("OK".to_string())
}
Err(e) => match e {
RestError::SimulationError { result, reason } => {
let parsed = parse_revert_error(&result);
match parsed {
Some(decoded) => Err(RestError::BadParameters(decoded)),
None => {
tracing::info!("Could not parse revert reason: {}", reason);
Err(RestError::SimulationError { result, reason })
let per_calldata = make_liquidator_calldata(
liquidation.clone(),
opportunity_bid.clone(),
chain_store.provider.clone(),
chain_store.config.adapter_contract,
)
.await
.map_err(|e| RestError::BadParameters(e.to_string()))?;
match handle_bid(
store.clone(),
crate::api::rest::Bid {
permission_key: liquidation.permission_key.clone(),
chain_id: liquidation.chain_id.clone(),
contract: chain_store.config.adapter_contract,
calldata: per_calldata,
amount: opportunity_bid.amount,
},
)
.await
{
Ok(_) => {
let mut write_guard = store.liquidation_store.opportunities.write().await;
let liquidation = write_guard.get_mut(&opportunity_bid.permission_key);
if let Some(liquidation) = liquidation {
liquidation[index]
.bidders
.insert(opportunity_bid.liquidator);
}
Ok("OK".to_string())
}
Err(e) => match e {
RestError::SimulationError { result, reason } => {
let parsed = parse_revert_error(&result);
match parsed {
Some(decoded) => Err(RestError::BadParameters(decoded)),
None => {
tracing::info!("Could not parse revert reason: {}", reason);
Err(RestError::SimulationError { result, reason })
}
}
}
_ => Err(e),
},
}
_ => Err(e),
},
}
None => Err(RestError::OpportunityNotFound),
}
}
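The duplicate-submission check discussed in the review thread can be sketched in isolation. The types below are simplified stand-ins for the server's real structs (hypothetical, not the actual `VerifiedLiquidationOpportunity`). Note one detail: this sketch compares against the most recently pushed entry (`Vec::last`), which is the "top of the stack" the author describes, whereas the diff as written indexes element 0 while appending new entries to the end.

```rust
// Minimal sketch of the duplicate-submission check in submit_opportunity.
// Opportunity is a simplified stand-in for the server's real struct.
#[derive(Clone, PartialEq)]
struct Opportunity {
    chain_id: String,
    contract: String,
    calldata: Vec<u8>,
    value: u64,
}

/// Rejects a submission that is identical to the most recently stored
/// opportunity for the same permission key; otherwise appends it.
fn push_if_new(stack: &mut Vec<Opportunity>, incoming: Opportunity) -> Result<(), String> {
    if let Some(top) = stack.last() {
        if *top == incoming {
            return Err("Duplicate opportunity submission".to_string());
        }
    }
    stack.push(incoming);
    Ok(())
}

fn main() {
    let opp = Opportunity {
        chain_id: "ethereum".into(),
        contract: "0xTokenVault".into(), // hypothetical address
        calldata: vec![1, 2, 3],
        value: 100,
    };
    let mut stack = Vec::new();
    assert!(push_if_new(&mut stack, opp.clone()).is_ok());
    // An exact resubmission is rejected.
    assert!(push_if_new(&mut stack, opp.clone()).is_err());
    // A submission with any field changed (e.g. fresh calldata) is accepted.
    let mut updated = opp;
    updated.calldata = vec![4, 5, 6];
    assert!(push_if_new(&mut stack, updated).is_ok());
    assert_eq!(stack.len(), 2);
}
```

Comparing only against the newest entry keeps the check O(1) per submission, matching the author's efficiency argument, at the cost of not catching a resubmission of an older entry.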
4 changes: 3 additions & 1 deletion auction-server/src/liquidation_adapter.rs
@@ -333,7 +333,9 @@ pub async fn run_verification_loop(store: Arc<Store>) {
while !SHOULD_EXIT.load(Ordering::Acquire) {
let all_opportunities = store.liquidation_store.opportunities.read().await.clone();
for (permission_key, opportunity) in all_opportunities.iter() {
match verify_with_store(opportunity.clone(), &store).await {
// just need to check the most recent opportunity, if that fails the rest should also be removed
// TODO: this is true for subsequent opportunities that only have updated price updates, but may not be true generally; we should think about how best to do this (one option is to just check every single saved opportunity and remove from the store one by one)
match verify_with_store(opportunity[0].clone(), &store).await {
Collaborator:
I don't see why we can't verify each and every one of them here, and remove them with the same logic.

Contributor Author:
Mostly a concern around efficiency. For opportunities where the only change is a newer price update that still maintains undercollateralization, invalidity of the opportunity with price_update_t should imply invalidity of the opportunity with price_update_s for s <= t.

But in general, when subsequent opportunities have other changes, or when a protocol isn't just using Pyth price feeds as TokenVault is, there may be other dependencies and behavior (e.g. a protocol that wants to use Pyth price feeds with a stricter latency requirement than Pyth enforces, failing on a stale price). For now I'll make the change to verify each one, and we can revert later if efficiency becomes an issue.

Ok(_) => {}
Err(e) => {
store
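The behavior the author agrees to, verifying every saved opportunity under a key and removing failures one by one, can be sketched with simplified types: `u64` stands in for `VerifiedLiquidationOpportunity`, and the closure stands in for `verify_with_store` (both are stand-ins, not the server's real API).

```rust
use std::collections::HashMap;

// Sketch: verify every stored opportunity per permission key, drop the ones
// that fail verification, and remove keys left with no valid opportunities.
fn prune_store(store: &mut HashMap<String, Vec<u64>>, verify: impl Fn(&u64) -> bool) {
    for opportunities in store.values_mut() {
        // Vec::retain removes invalid entries one by one, preserving order.
        opportunities.retain(|opp| verify(opp));
    }
    // HashMap::retain drops keys whose opportunity stacks are now empty.
    store.retain(|_key, opportunities| !opportunities.is_empty());
}

fn main() {
    let mut store = HashMap::new();
    store.insert("keyA".to_string(), vec![1, 2, 3]);
    store.insert("keyB".to_string(), vec![5]);
    // Hypothetical verifier: only even opportunity ids are still valid.
    prune_store(&mut store, |opp| opp % 2 == 0);
    assert_eq!(store["keyA"], vec![2]);
    assert!(!store.contains_key("keyB")); // every keyB opportunity failed
}
```

Checking each entry costs O(n) per key per loop iteration instead of O(1), which is the efficiency trade-off the thread debates.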
2 changes: 1 addition & 1 deletion auction-server/src/state.rs
@@ -67,7 +67,7 @@ pub struct ChainStore {

#[derive(Default)]
pub struct LiquidationStore {
pub opportunities: RwLock<HashMap<PermissionKey, VerifiedLiquidationOpportunity>>,
pub opportunities: RwLock<HashMap<PermissionKey, Vec<VerifiedLiquidationOpportunity>>>,
}

pub struct Store {
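The state.rs change is the core of this PR: each permission key now maps to a stack of opportunities instead of a single one, so a resubmission no longer overwrites its predecessor. A minimal sketch of the before/after shapes, using simplified stand-in types (the real store wraps the map in an async tokio::sync::RwLock):

```rust
use std::collections::HashMap;

// Simplified stand-ins for the server's real types.
type PermissionKey = Vec<u8>;
#[derive(Clone)]
struct VerifiedLiquidationOpportunity {
    id: u64,
}

// Before: one opportunity per key, so each insert overwrote the last.
#[allow(dead_code)]
type OpportunitiesBefore = HashMap<PermissionKey, VerifiedLiquidationOpportunity>;
// After: a Vec keeps the history, with new submissions appended.
type OpportunitiesAfter = HashMap<PermissionKey, Vec<VerifiedLiquidationOpportunity>>;

fn main() {
    let key: PermissionKey = vec![0x01];
    let mut after: OpportunitiesAfter = HashMap::new();
    // entry().or_default() creates the empty stack on first submission.
    after.entry(key.clone()).or_default().push(VerifiedLiquidationOpportunity { id: 1 });
    after.entry(key.clone()).or_default().push(VerifiedLiquidationOpportunity { id: 2 });
    // Both versions survive under the same permission key.
    assert_eq!(after[&key].len(), 2);
}
```

This shape is what forces the downstream changes in the diff: submit_opportunity pushes instead of inserting, fetch_opportunities picks one element to expose, and bid_opportunity locates a bid's target by position in the Vec.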