Delegated Opt In Review #1
base: review
Conversation
Just a quick pass. I need to get further up to speed before any substantive comments.
README.md (Outdated)
### Opt Ins
`openOptIn(pay,axfer)void` and `addressOptIn(pay,axfer)void` are implementations of the [ARCX](https://github.com/algorandfoundation/ARCs/pull/229) interfaces. They both verify that the MBR payment is sent to the account opting in and that it covers the ASA minimum balance requirement, which is stored in global storage. They also verify that the value of `global LatestTimestamp` is less than the set end time for the account opting in.
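For orientation, here is a rough TEALScript-style sketch of the checks that paragraph describes. The method shape and the state names (`assetMBR`, `openOptInEndTimes`) are assumptions for illustration, not the actual contract:

```ts
// Sketch only: verify the MBR payment and the optional end time before the inner opt-in
openOptIn(mbrPayment: PayTxn, optIn: AssetTransferTxn): void {
  // The MBR payment must go to the account opting in and cover the ASA MBR kept in global state
  assert(mbrPayment.receiver === optIn.assetReceiver);
  assert(mbrPayment.amount >= this.assetMBR.get());

  // If an end time is set for this account, the opt-in must happen before it
  if (this.openOptInEndTimes.exists(optIn.assetReceiver)) {
    assert(globals.latestTimestamp < this.openOptInEndTimes.get(optIn.assetReceiver));
  }

  // ... inner opt-in transaction omitted
}
```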
Be aware, though I don't think it matters, that `global LatestTimestamp` is for the last block. So if the last block was at 9998, and the end time is 10000, but the next block gets added at 10002, then you can have an opt-in that occurs after the set end time.
Yeah that's why I specified the usage of `global LatestTimestamp`. The two options are either `Round` or `LatestTimestamp`. I feel like the timestamp approach allows for a better UX even though there may be some slight deviations in the true end time/block.
Co-authored-by: John Jannotti <[email protected]>
I don't really like the idea of using boxes for "storage" in the sense of "this is something to be grabbed and used by an off-chain program" as is done with logicsigs here. Just storing the hash of the logicsig should be sufficient in combination with off-chain storage.
Alternatively, I suppose you could store just the signature in the boxes, since the bytes are deterministic (right? I think so) and wallets could reconstruct them. I suppose that's placing a bit more burden on them.
@@ -0,0 +1,84 @@
## Abstract
This ARC is an implementation of [ARCX](https://github.com/algorandfoundation/ARCs/pull/229). The goal is to provide users a way to delegate asset opt-ins while having a high degree of control compared to a standalone logic signature without an application. The expectation is that a single instance of this ARC will be deployed on Algorand networks.
I can see why this is a nice, useful app to use in combination with the ARC-50, but I don't see why it should be considered "special". The point of ARC-50 was to give users control - let them have an app that they trust to check opt-ins for them. This is one of them, but if it's the only one, then the flexibility that ARC-50 promises is reduced to the flexibility that this particular implementation provides.
Yeah the idea isn't that this is the only ARC50 implementation, but it's a standardized one that every wallet/dApp in the ecosystem can integrate. Other people can make whatever implementations they want, but the more implementations there are the more fragmented the ecosystem can get.
README.md (Outdated)
### End Times
`setOpenOptInEndTime(uint64)void` and `setAddressOptInEndTime(uint64,address)void` are methods for setting the end time of a signature for open opt-ins and address opt-ins, respectively. An end time signifies when the delegated logic signature will no longer work. The time corresponds to the epoch time (seconds) returned by `global LatestTimestamp`. End times can be updated at any time to any value.
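As a rough illustration (not the actual code, and assuming the per-address box layout discussed later in this thread), the open variant could be as small as:

```ts
// Sketch only: the sender sets or updates the end time for its own address
setOpenOptInEndTime(timestamp: uint64): void {
  this.openOptInEndTimes.set(this.txn.sender, timestamp);
}
```

The address-specific variant would presumably do the same against a key that also incorporates the extra `address` argument.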
Does setting the endtime to 0 delete the box? Does the MBR get recovered somehow?
Actually, I'm assuming that the non-existence of an endtime means there is no endtime? So perhaps setting the endtime to MaxUint64 should delete the box and recover the MBR.
I suppose that really I'm asking for a way to delete all of the storage and have it mean "I want out of this whole business, give me my MBR back".
Right now there is no MBR recovery mechanism. I figured it added extra complexity for such a small amount of ALGO that it wasn't necessarily worth it. Open to the idea of MBR recovery though.
I'm doubling down on my dislike of storing logicsigs on chain without a deletion mechanism. If I say I want to allow someone to opt me in, I should just be able to remove that.
I suppose I'm really confused by the specific sender mechanism. What's the use case that makes it worth enshrining this idea? Why not just "allow all, deny all"? (and by that I mean let's also scrap endtime. just allow revocation.)
Ok, now I see. You are just storing the signature. That seems very reasonable.
But it also means that using one box for the signature and the endtime is easy. They are both fixed size.
Assuming we still want to maintain the current design goals, we actually couldn't make it one box. The signatures are stored per the respective public key, but the end times are stored per address. This means one can have multiple accounts rekeyed to a single auth account and have different rules set for each one.
For example, I could have a single hardware wallet that is the auth addr for a "DeFi Account" and an "NFT Account". The "DeFi Account" delegates opt-ins to DeFi apps and "NFT Account" delegates opt-ins to NFT marketplaces, but not the other way around. This can currently be done because I can set the end times to be unique for each account.
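To make the split concrete, the storage layout being described is roughly the following, in the `BoxMap` notation used elsewhere in this PR (the exact declarations are an assumption):

```ts
// Signatures are keyed by the auth address (the key that signed the lsig),
// while end times are keyed by each individual account address.
openOptInSignatures = new BoxMap<AuthAddr, byte64>();
openOptInEndTimes = new BoxMap<Address, uint64>(); // the "e-"-prefixed boxes in the generated TEAL
```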
I should mention the downside of this approach is that it makes fresh rekeyed accounts open to any opt-ins before setting the end times, but in general I think this is a reasonable tradeoff for having per-account control. Open to other opinions though.
> The signatures are stored per the respective public key, but the end times are stored per address. This means one can have multiple accounts rekeyed to a single auth account and have different rules set for each one.
>
> For example, I could have a single hardware wallet that is the auth addr for a "DeFi Account" and an "NFT Account". The "DeFi Account" delegates opt-ins to DeFi apps and "NFT Account" delegates opt-ins to NFT marketplaces, but not the other way around. This can currently be done because I can set the end times to be unique for each account.
I think it's probably a mistake to have this separation. Today, rekeying two accounts to the same key means that they are under the exact same authority. It seems like a bad idea to introduce a mechanism that changes that model. If this becomes popular, we then have two ways of thinking about rekeying, and users must understand both.
Yeah true that's a really good point. Removing this means we can just use the existence of the signature boxes to signify whether delegated opt ins are permitted or not. We can also remove the time component and if people really want an automated disable, that can be done at a higher level.
setSigVerificationAddress:
proto 1 0

// ./contracts/delegated_optin_app.algo.ts:53
// assert(this.txn.sender === this.app.creator)
txn Sender
txna Applications 0
app_params_get AppCreator
assert
==
assert

// ./contracts/delegated_optin_app.algo.ts:54
// assert(!this.sigVerificationAddress.exists())
txna Applications 0
byte "sigVerificationAddress"
app_global_get_ex
swap
pop
!
assert

// ./contracts/delegated_optin_app.algo.ts:55
// this.sigVerificationAddress.set(lsig)
byte "sigVerificationAddress"
frame_dig -1 // lsig: address
app_global_put
retsub
Could the verify address be hardcoded at creation time and avoid a lot of audit surface?
(I separately want to have a discussion about why we're verifying them at all, but I'll find a better place to start that conversation.)
Unfortunately not, because the verify program requires the programs of the opt-in lsigs, which require the app ID of this application. Although I suppose the deployment process could be to deploy an `int 1` approval program and then update the program after the fact with the hardcoded address.
Oh, because it doesn't insert the app id at runtime? You could make it do that. It'll be invoked in combo with this app, so it could grab the app id, insert it into a template, and verify.
What "runtime" are you referring to? The lsig needs to have the app ID hardcoded otherwise the user will let ANY app approve opt-ins, which is not desirable.
Right now, `verifier_lsig` is confirming that it's looking at a signature against `TMPL_OPEN_OPTIN_DATA`, which I guess is the lsig bytecode, right? You could replace some bytes in `TMPL_OPEN_OPTIN_DATA` with the app id before calling `ed25519verify_bare`.

Actually, I'm totally confused, it seems. What the heck are `TMPL_OPEN_OPTIN_DATA` and `TMPL_ADDRESS_OPTIN_DATA`? I thought it was bytecode, but I no longer think it is.

It's possible I have seriously misunderstood how this thing works.
No, you're right. It's the data that's signed when a user signs an lsig, which is the lsig tag ("Program") + lsig bytecode.

If the verifier program needs to know the app ID, then we're back to square one where the `verifier_lsig` program depends on the app ID, and thus the `verifier_lsig` needs to be created after the app.
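For readers following along, a small illustration of what "lsig tag + bytecode" means concretely. This is not the project's code, just a sketch using tweetnacl for the ed25519 signature:

```ts
import nacl from "tweetnacl";

// The bytes a wallet signs when delegating a logic signature: the ASCII tag "Program"
// followed by the compiled lsig bytecode. The detached 64-byte signature is what gets
// stored in the box.
function signDelegatedLsig(lsigBytecode: Uint8Array, secretKey: Uint8Array): Uint8Array {
  const tag = new TextEncoder().encode("Program");
  const toSign = new Uint8Array(tag.length + lsigBytecode.length);
  toSign.set(tag, 0);
  toSign.set(lsigBytecode, tag.length);
  return nacl.sign.detached(toSign, secretKey);
}
```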
// assert(verifier.sender === this.sigVerificationAddress.get())
frame_dig -3 // verifier: txn
gtxns Sender
byte "sigVerificationAddress"
app_global_get
==
assert
This seems to be as good a place as any to ask this: Why go through the trouble of verifying these lsigs? It's a lot of complexity, and as far as I can tell, doesn't change any basic assumptions. If the caller wants to make their sig unusable, so what? The worst they can do is cause a transaction to fail, which costs callers nothing.
I haven't looked at that code yet, but it presumably does a pretty opaque thing where it reconstructs bytecode by concatenation and does a big hash on it? That's a pretty big audit surface to justify.
The current implementation allows anyone to upload the signature for a given address provided the signature is valid. If the app doesn't verify the lsigs, then it would have to implement logic that only allows the signer to put their signature in box storage. This would be counter-productive to one of the goals of reducing the onboarding friction of 0-ALGO accounts with an ASA.
Oh, wait, maybe I should have looked before I spoke. Is it verifying the code, or just the signature on it?
In the 0-algo case, who is generating the signed lsig?
Now that I see you're only putting the signature itself on-chain, ignore most of this.
But walk me through the 0-algo case. I don't get that. If someone is creating accounts, and therefore has keys, they could fund the account and perform the `set` transaction. Why are they willing and able to fund this thing, but not the account?
In fact, I think they don't even have to fund the account. If they have keys, they could make the transaction to `set` the sig by calling this app, without even funding the account, just creating the app call transaction and putting it into a group with their own account which pays the fees.
In most scenarios (user interacting with an application), it's much easier for the app to programmatically set the sig box rather than having the user manually do it.
If the user has to manually sign the lsig in the first place, I don't see why also having them sign and send a txn makes much of a difference.
And about the 0 algo case, it seems like users (those who want to opt in other accounts) need to be aware of whether that account has 0 algos or not, since the MBR increase will be higher if they have to satisfy the base account MBR in addition to the asset MBR. This seems very annoying.
If the app can do that, it must have the user's keys, right? If so, it can set the signature by sending a transaction on the user's behalf. If it doesn't have the keys, how did it generate the signature it is supplying to the call?
It's hard for me to see how the signature can be generated (by the app), but a transaction can't be sent (by the app).
The signature is still being created by the user. The difference is that the user uploading their own signatures means more clicks for them. They have to:
- Create their account
- Sign the lsig
- Share their address and signature
- Wait for the app to respond with an atomic transaction that uses fee pooling from an account the app custodies
- Sign the transaction
This is an obviously worse experience for the user because they have to sign the transaction and it's more complex for the app because they have to form the atomic transaction rather than simply calling the app with the signature.
It's not obvious to me that's worse because I don't know which of these operations are supported in a natural way with any existing wallets. Do any wallets support signing lsigs? or arbitrary hunks of data, supplied by a dapp (<- I hope not!)? If not, doesn't that kill both approaches?
No they don't currently, but ARC47 will enable wallets to sign known lsigs. A wallet could also abstract away the signing of the open opt-in delegation lsig and just have it be a radio button during account creation.
// if0_condition
// ./contracts/delegated_optin_app.algo.ts:117
// this.openOptInEndTimes.exists(optIn.assetReceiver)
byte "e-"
frame_dig -2 // optIn: axfer
gtxns AssetReceiver
concat
box_len
swap
pop
bz if0_end
Rather than check for existence with `box_len`, and then use `box_get` but throw away the existence bit, why not just use `box_get` and use that bit?
The TEAL is generated by TEALScript, which has separate `get` and `exists` functions and no direct exposure of `box_get` without popping one of the values. Compiler optimization can eventually be made to account for this sort of pattern but in the meantime the TEAL would need to be manually edited to optimize this.
This isn't the only place where this extra complexity is coming up. If this is going to be audited, they are going to have to audit the TEAL, as they are not going to be able to trust all of TEALScript compilation without auditing it.

There are a lot of places where we're creating a lot more audit work by auditing compiler output. (Things like confirming that each method checks `OnCompletion` instead of doing it in one place, the extra `callsub`s and "trampolines" for each method, etc.)
I question how much extra "audit work" an extra `box_len` or `callsub` really adds. I'm not super familiar with the auditing process, but I know we got a reasonable quote and timeline for ARC12, which was TEALScript-generated and more complex.

Regarding OnCompletion, it's been on the TODO list to have only one entry point per OnCompletion in the generated TEAL from TEALScript, so this is something that can be updated.
setOpenOptInSignature(sig: byte64, signer: AuthAddr, verifier: Txn): void {
  assert(verifier.sender === this.sigVerificationAddress.get());

  this.openOptInSignatures.set(signer, sig);
Who pays for the box?
Can be anyone really. I don't see a reason why it'd need to be a specific account.
// If the endTimes box exists, verify the opt-in is before the end time
if (this.openOptInEndTimes.exists(optIn.assetReceiver)) {
  assert(this.openOptInEndTimes.get(optIn.assetReceiver) > globals.latestTimestamp);
}
What is the benefit of supporting end times vs just allowing the account to immediately stop supporting the operation? E.g. this check could instead be `assert(this.openOptInSignatures.exists(optIn.assetReceiver))`.

Asking because:
- Our timestamps are not very accurate anyway
- It seems like boxes containing expired signatures/timestamps must continue to exist forever. If you instead make the signature box existence indicative of whether the account wants to receive opt-ins or not, they'll clean themselves up automatically.
The existence of the signature is not used to determine functionality because I wanted control over functionality to be per-address, rather than per-auth-addr. Rationale explained above.

Since a separate box is needed anyways for this, I figured might as well use timestamps to reduce clicks in the event the user knows they only want to delegate for a specific amount of time.
What do you mean by per-address, rather than per-auth-addr? Are you talking about the `addressOptIn` path? If so this suggestion applies just as well to that case: `assert(this.addressOptInSignatures.exists({ signer: optIn.assetReceiver, sender: this.txn.sender } as SignerAndSender))`
Let's say I have three accounts: account A and account B which both are rekeyed to account C. If I sign the lsig with account C, then anyone can use that signature for account A, B or C. But... maybe I only want to delegate opt-ins for account C, and disable delegated opt-ins on account A and B.
Oh I see, you're saying that the key for `openOptInSignatures` represents auth addresses, NOT accounts. So in your example, with accounts A and B rekeyed to C, A and B would both use the signature from `openOptInSignatures[C]`. Meaning we can't use only the existence of boxes from `openOptInSignatures` to indicate opt-in status, since it's a many-to-one mapping, and each account should be able to control that individually.

I have two alternative suggestions then:

- Make each `openOptInSignatures` entry control exactly one account, not an auth address that can back multiple accounts. You'd need to augment the values here, since it's necessary to store the actual signing key for the signature, since it can now be different from the account's key (e.g. the value type could be a tuple of `(bytes32, bytes64)`).
  - Pros: boxes get cleaned up when they're no longer necessary.
  - Cons: slightly larger boxes to hold info which most of the time won't be necessary (since I assume most accounts aren't rekeyed).
- Keep the existing convention of `openOptInSignatures`, but instead of having `openOptInEndTimes = new BoxMap<Address, uint64>` control opt-in status, use something like `openOptInEnabled = new BoxMap<Address, null>` (I don't know how you'd represent a zero-length box value in TEALScript, but that's what I'm trying to say with `null`). `openOptInEnabled` would control whether an opt-in is allowed for an account: if and only if a box exists for that account, it's allowed.
  - Pros: `openOptInEnabled` boxes will get cleaned up when no longer needed.
  - Cons: `openOptInSignatures` boxes won't get cleaned up when no longer needed.

(Rough sketches of both layouts follow this list.)
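For concreteness, the two alternatives could look roughly like this in the same `BoxMap` notation; every declaration here is an illustrative assumption, not proposed code:

```ts
// Alternative 1: key signatures by the account itself and carry the signing key in the value
openOptInSignatures = new BoxMap<Address, [bytes32, bytes64]>(); // (signer public key, signature)

// Alternative 2: keep signatures keyed by auth address, and gate opt-ins with a
// per-account box whose existence alone is the "enabled" flag
openOptInSignatures = new BoxMap<AuthAddr, byte64>();
openOptInEnabled = new BoxMap<Address, boolean>(); // stand-in for a zero-length box value
```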
> In most scenarios (user interacting with an application), it's much easier for the app to programmatically set the sig box rather than having the user manually do it.
>
> If the user has to manually sign the lsig in the first place, I don't see why also having them sign and send a txn makes much of a difference.
>
> And about the 0 algo case, it seems like users (those who want to opt in other accounts) need to be aware of whether that account has 0 algos or not, since the MBR increase will be higher if they have to satisfy the base account MBR in addition to the asset MBR. This seems very annoying.
hey @joe-p, we (myself, @jasonpaulos and @jannotti) had a live chat about this. Would you be open to scoping down to the simplest use case: someone who has a funded account wants to allow anybody to opt them in to assets? This is likely also the most common use case. We could get it out there with the wallets and it would be immediately useful. We could then add the potentially more complex features like address-based permissions in the future, perhaps along with any updates based on feedback from this first main feature. In other words, the following changes:
The resulting design should be fairly simple and therefore easier to feel confident about, while maintaining most of the usefulness. In our discussions, most of the complexity comes from the additional requirements. Another way of thinking about this MVP: enabling wallets to add a toggle for "let anybody send me assets".
The main rationales for having all of these features in this implementation are cost and complexity. For every contract we deploy, the foundation needs to pay for an audit. Paying for multiple audits is obviously going to be more expensive and we might not have the budget for that. This ultimately isn't my call so we'd need to have this conversation with Alessandro (and/or John Woods). The features in this contract alone don't add any complexity by coexisting. Separating them does nothing but ADD complexity. Yes this contract is more complex, but it's no more complex than the sum of its parts. If we had separate contracts deployed at separate times:
I'm open to changing the implementation but don't want to sacrifice address-specific opt-ins. I also would want some more rationale other than "it's more simple". Having per-address control seems like a valuable feature to me. We were also okay with proceeding with ARC12, which was certainly more complex than this. I suppose it's just not clear to me what makes something "too complex".
@joe-p I am driving from the typical wisdom of agile development: the skateboard -> scooter -> motorcycle -> car method, which proffers that you should start with the simplest useful thing (the "MVP") and iterate from there. This allows you both to get value out there as quickly as possible, and learn from version 1 to inform version 2, ultimately exposing yourself to less risk and leading to a better long-term outcome.

In terms of time to value, it will be easier for the ecosystem to adopt the simple open delegated opt-in: it only requires work from wallets, no involvement from dapps. It's also easier to explain and understand for everybody, making it more likely to get buy-in.

In terms of risk: there is a lot of risk in this work, risk essentially on the level of a protocol change - this code will be used by everybody in the ecosystem. If there is a bug in the code or a problem in the design (something that we're not accounting for that could surface later), it will be a huge pain for all of us. And just anecdotally for risk, I was listening to JJ and Jason discuss the code and going "wow this is complicated/subtle" to each other and it made me nervous.

To be clear: I do think it will probably be useful to have the address-based opt-in delegation. I am only offering my Product perspective on what I think is the best approach to get there. I would hope that an external audit of the simpler contract would be cheaper, but I know these things are not always logically priced. If not, then yes it depends on the opinion of the powers that be at the Foundation. In terms of our audit from Inc engineers, we are (more) happy to audit two separate approaches. I'm not going to die on this hill, and it's not my product. This is just my recommendation.
I'm not totally against it, but I think we should really think about the pros/cons of simplifying the initial deployment.

Pros
Cons
I think con 3 is something in particular that we have to think about. Once wallets support templated logic signatures, I imagine apps will start pushing for an opt-in lsig. If this lsig does not use an application like this implementation, then users will have no way to undo the lsig without rekeying. There are also multi-chain implications that users might not be aware of. For example, signing an lsig for opting in on Algorand mainnet will also work on Voi.
To force myself to understand my own claims, I removed all of the things we've talked about cutting:
This eliminates the verification TEAL script (54 lines) and the main approval script goes from 522 lines to 145. 83 of that 145 is the

So the audit goes from two TEALScripts of 576 lines, to one script of 61 lines. I believe there would be just 45 lines of meat:
Getting back to actually auditing for safety, I think we should aim for both of the following to be true, even though we can imagine "splitting" some of the checking responsibility. If we aim for assurance in both directions, we will feel very safe.
As I read the app, it does not have these checks. It allows the axfer to be a clawback, for example. Suppose a user, C, opts into this facility and C is also the clawback address for some asset, A. I am a bad guy, E, who wants to use C's power to steal some A from random guy R. According to the app I can send a payment to E (me), followed by a clawback of A from R to E, followed by this app. The app only confirms that the AssetReceiver and the payment go to the same place. They do, ME!

I recognize that the lsig protects us. It checks that AssetAmount == 0, and Sender == AssetReceiver, but I'd like to see that check in the app (and AssetSender == ZeroAddress - no clawbacks). Meanwhile, only the app checks that there's a pay that covers MBR. In part, that's because it uses the inner opt-in trick. But I'd like to see that check also appear in the lsig, and I'm very willing to add the necessary
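Pulling the requested checks together, they would look something like this in TEALScript-style pseudocode (parameter names and the MBR source are assumptions; the point is the set of asserts, which would ideally appear in both the app and the lsig):

```ts
// Sketch only, not the actual contract: the transaction-shape checks requested above.
verifyDelegatedOptIn(mbrPayment: PayTxn, optIn: AssetTransferTxn): void {
  assert(optIn.assetSender === globals.zeroAddress);   // no clawbacks
  assert(optIn.assetAmount === 0);                     // pure opt-in, no value moved
  assert(optIn.sender === optIn.assetReceiver);        // the account is opting itself in
  assert(optIn.assetCloseTo === globals.zeroAddress);  // no close-out
  assert(mbrPayment.receiver === optIn.assetReceiver); // MBR payment goes to the opting-in account
  assert(mbrPayment.amount >= this.assetMBR.get());    // and covers the ASA MBR (source assumed)
}
```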
I'm still not convinced that this is worth removing... mainly for peer-to-peer transactions. In general, it increases the number of clicks and potential areas of failure for the initial asset onboarding process. It also means the user being onboarded has to have an active internet connection and access to their keys when first receiving an asset. Making an account on your ledger and then later sharing a QR code is no longer viable if you don't carry the ledger with you. To do this programmatically, the user would have to sign yet another lsig, which would need to include fv/lv and a lease, which means wallets or applications would need to generate this at the time of the transaction, which could eventually become stale.
I'm confused as to why this is necessary. Any meaningful transaction is going to include both the logic signature and the app, which is hard coded into the logic signature. Without the app you don't have a functioning logic signature and without the logic signature you are burning ALGO to call the app, which does nothing of value. It doesn't make any sense for anyone to use the app without the lsig.
Can you go through exactly the entire scenario you'd like to support? I see these weird combinations like "I have ledger, that costs real money, but care a lot about security, so I don't carry it, but I have no algos, and I want a QR code..." I can't follow it.

To be clear, I did eventually see why you were worried about an extra click previously. You felt a dapp would be sending the logicsig to the wallet for signing, and then the dapp would construct a txngroup to perform the setSig. So you didn't want that txngroup to need to be signed as well.

But here, I think this stripped down version is good for exactly one thing - your wallet wants to offer a checkbox that says "I accept airdrops". You cannot check this box until you have 0.1 + epsilon algos. Once you have that, everything is done internally to the wallet - no dapp is presenting a logicsig to sign - there's just the single builtin logicsig that enables these opt-ins. Your wallet signs the logicsig and makes the txngroup for you all at once. (It would probably have to interact with a ledger twice, if that's how you are doing things, but it would do them as "together" as possible. I don't think we should be designing around the problem "ledgers are too hard to use to sign a transaction".)

I have no idea what logicsig signing is going to be like with a ledger, however. Do they offer the ability to sign a blob? That problem exists for both designs.
As I said, it is not necessary. One could be willing to analyze all the moving parts at once. But I think it's pretty easy to make it easier to know we've got it right by making each part independently safe. It's maybe 3 or 4 extra asserts on both sides. It makes it much easier to state the expectation from each part. For example, looking at your original PR, I had to ask myself "Why is the delegated logicsig checking the sender, can't the app do that?" I don't want to ask any such questions. I want to know that I'm safe if we've got the analysis right on either piece.
Someone has a ledger and actively uses other chains. They hear there's an ALGO event nearby, so they create a wallet, leave the ledger at home, and go to the event with just their phone. Event in this case could be a crypto conference or something like a concert that uses NFT ticketing at the door. For some context, the way we currently onboard new users at events is to print physical paper cards with a QR code containing a mnemonic for an account that is already seeded with some ALGO and opted into whatever relevant assets.
This is true for both designs though. The wallet just needs to generate the sig once upon account creation or from a click in the settings menu and then it's simply a matter of how it's communicated to other parties. Wallets can push the signature to something like DynamoDB, and then anyone can read that signature and use it. There is really no reason the signature needs to be stored on-chain at all. That data is never used on-chain. Box storage is simply the cheapest, low-maintenance and decentralized way to store signatures long-term.
Not sure if logic signature signing is supported, but I know arbitrary message signing is definitely not supported.
So they are out of luck in both designs, right? They need to get the logicsig signed by this new account. But they are not experienced enough to have done that at home with their ledger. If they don't have their keys with them (no ledger) then they can't sign the logicsig OR send the txn to chain once they arrive at the event.
If we're willing to do that, then they can send the algos from that card to their pre-existing account. But I still don't understand how they can do anything without pre-signing that logicsig, which they surely won't be experienced enough to do ahead of time.
So for this to work you're imagining Pera will build in some special bootstrapping mode where this user will sign the logicsig at home with their ledger, then bring the signed bytes in on a thumb drive or printed QR or something? It all seems way easier to make a temporary wallet at the event using Pera, get some algos at the event, and if they want, send the algos and the assets to their secure wallet when they get home. Ledger seems like a red herring.
So this is all a bit hypothetical with respect to ledgers, isn't it?
I just updated the implementation based on what we discussed.
Key missing part is

Also, the application currently does no verification of the other txns in the group. With the addition of the aforementioned global field, the app itself won't need to do any verification, thus I feel like it makes more sense to just keep everything in the logic sig, but open to other opinions.
assert

// Verify assetCloseTo is not set
txn AssetCloseTo
While we're at it, I'd also verify AssetSender is not set.
Co-authored-by: Jason Paulos <[email protected]>