My review of Practical Schnorr Threshold Signatures Without the Algebraic Group Model
However, this is non-trivial because we need to consider the PedPoP DKG and the signing protocol together. The rationale behind this is that PedPoP (like Pedersen DKG) lacks the ability to be simulated. Hence, it becomes crucial to examine the combination of the DKG and the signing protocol as a unified execution in order to thoroughly analyze its properties.
Lacks the ability to be simulated against what? I'm guessing an ideal trusted party who generates a perfectly uniform key and distributes Shamir shares. But do we know that it can't be simulated against a functionality that allows biasing the keys?
(top of the page)
this share $x_i$ from the combined one $x$ is solely feasible if the reduction possesses the secret keys belonging to all remaining signers, including those controlled by the adversary
Worth mentioning here that this is trivial if the key is "honest majority", i.e. $n > 2(t-1)$, since then there are at least $t$ honest parties who will receive shares during the protocol and can therefore reconstruct the adversary's contribution.
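To spell out that reconstruction step, here is a toy sketch over a small prime field (the helper names and the modulus are mine, not the paper's): any $t$ honest shares of the adversary's degree-$(t-1)$ contribution polynomial determine it, and Lagrange interpolation at $0$ recovers its secret constant term.

```python
# Sketch: with n > 2(t-1) there are at least t honest parties, and their
# shares of the adversary's degree-(t-1) contribution polynomial suffice
# to reconstruct its secret constant term by Lagrange interpolation at 0.
P = 2**31 - 1  # toy prime modulus, standing in for the group order

def eval_poly(coeffs, x):
    """Evaluate a polynomial (coeffs[0] is the constant term) at x mod P."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def interpolate_at_zero(points):
    """Recover f(0) from t points (x_j, y_j) on a degree-(t-1) polynomial."""
    secret = 0
    for xj, yj in points:
        num, den = 1, 1
        for xm, _ in points:
            if xm != xj:
                num = num * -xm % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, -1, P)) % P
    return secret

t = 3
adversary_poly = [1234567, 42, 99]  # adversary's contribution, f(0) = 1234567
# the t honest parties receive these shares during the protocol:
honest_shares = [(i, eval_poly(adversary_poly, i)) for i in range(1, t + 1)]
assert interpolate_at_zero(honest_shares) == adversary_poly[0]
```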
Moreover, two honest signers $S_i$ and $S_j$ may in general output two different joint public keys $pk_i \neq pk_j$ (in which case the adversary may forge under either public key). We believe that the latter should never happen in any meaningful and secure key generation protocol. Yet, our way of modeling ensures that it is the responsibility of the scheme to exclude (or otherwise handle) these cases of inconsistency between honest signers.
I didn't understand why there is this difference between what is modelled and what is meaningful. It also seems like the paper's protocol guarantees consistency anyway.
3 : // Run KeyGen(i) with A controlling network connections
4 : // between signer S_i and all signers S_j, j ∈ {1, …, n} \ {i},
5 : // and in a separate thread concurrently to other oracle calls
What does it mean to have a separate thread here? In my intuition, the game is simulating the honest parties in the protocol, so this is just a back and forth between the adversary and the game. Hopefully there's no need for threads!
Moreover, signer $S_i$ verifies the shares received from the other signers by checking
Ok so we are in the setting where the adversary is able to abort the protocol and we are not trying to add accountability as to which parties are adversarial, right?
If so, I claim that we need not check each share individually. We can just aggregate the shares and aggregate the polynomial coefficients, then check that the aggregate share is valid against the aggregate polynomial and abort otherwise (while still verifying PoPs).
Furthermore we don't need to include the public polynomial shares in the common string check for Eq.
It might even be possible to have a single party do the public polynomial commitment aggregation and send it out to all other parties (while still preserving the first term and the PoPs for it). The parties would still receive disaggregated secret shares and just add them together themselves. This would reduce computation to O(n) for each party when verifying shares (assuming there is some kind of coordinating party that doesn't receive a share doing the aggregation). See the sketch below.
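Here is a rough sketch of the aggregated check I have in mind, using a toy multiplicative group and Feldman-style coefficient commitments (the group parameters and all variable names are mine, not the paper's): party $i$ sums its received shares and verifies them against the coefficient-wise product of everyone's commitments in a single check, instead of one check per contributor.

```python
# Sketch of verifying an *aggregate* share against *aggregated* commitments,
# in a toy multiplicative group Z_p^*.
import random

p = 2**127 - 1   # toy Mersenne prime; a real scheme uses an elliptic-curve group
q = p - 1        # exponents are reduced mod the group order
g = 3            # toy generator

t, n = 3, 5

# Each of the n contributors samples a degree-(t-1) polynomial and
# publishes commitments A[j][k] = g^{a_{j,k}} to its coefficients.
polys = [[random.randrange(q) for _ in range(t)] for _ in range(n)]
commits = [[pow(g, a, p) for a in poly] for poly in polys]

def share(poly, i):
    """Contributor's secret share for party i: poly evaluated at i."""
    return sum(a * pow(i, k, q) for k, a in enumerate(poly)) % q

# Party i receives n disaggregated shares and sums them itself...
i = 2
s_i = sum(share(poly, i) for poly in polys) % q

# ...but verifies only once, against the coefficient-wise product of all
# commitments (which a coordinator could compute and broadcast):
agg = [1] * t
for j in range(n):
    for k in range(t):
        agg[k] = agg[k] * commits[j][k] % p

rhs = 1
for k in range(t):
    rhs = rhs * pow(agg[k], pow(i, k, q), p) % p
assert pow(g, s_i, p) == rhs   # single aggregate check replaces n checks
```

An invalid share from any contributor makes the aggregate check fail, which is exactly what we want in the abort-without-accountability setting: we learn *that* something is wrong, not *who* caused it.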
What's with the $i = \min(HS)$? What's so special about the minimum honest signer?
- I think $A_{\kappa,0} = X^*$ should be explicitly set (the comment on line 7 only implies it).
- This long comment might be easier to understand if the concept of a Vandermonde matrix is introduced in the prelims or something. Something like the sketch below is what I have in mind.
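A minimal prelim along these lines (my phrasing, not the paper's): evaluating a degree-$(t-1)$ polynomial at $t$ distinct points is multiplication by a Vandermonde matrix, and its nonzero determinant is exactly why $t$ shares determine the polynomial.

```latex
% Evaluating f(X) = a_0 + a_1 X + \dots + a_{t-1} X^{t-1}
% at distinct points x_1, \dots, x_t:
\begin{pmatrix} f(x_1) \\ \vdots \\ f(x_t) \end{pmatrix}
=
\underbrace{\begin{pmatrix}
  1      & x_1    & \cdots & x_1^{t-1} \\
  \vdots & \vdots &        & \vdots    \\
  1      & x_t    & \cdots & x_t^{t-1}
\end{pmatrix}}_{V}
\begin{pmatrix} a_0 \\ \vdots \\ a_{t-1} \end{pmatrix},
\qquad
\det V = \prod_{1 \le j < k \le t} (x_k - x_j) \neq 0,
% so V is invertible and t evaluations (shares) uniquely
% determine the coefficients a_0, ..., a_{t-1}.
```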
Notation: I use additive rather than multiplicative notation to make the point clearer.
I cannot convince myself that the values for $\gamma$ and $\delta$ are correct here. We are trying to evaluate [equation]. First I define a function similar to [equation]. The paper's definition is [equation]; note the relationship [equation]. So in the paper, [equation] (note: the [equation]). I think this is wrong: the simulated polynomial is determined by [equation]. If we let [equation], then [equation].
There are some other things I didn't quite follow in the reduction, so maybe I've just misunderstood the approach somehow, but it seems to me like there is something off here.
The paper presents important simplifications to both the theory and practice of FROST-style threshold Schnorr signature schemes. The paper contributes a multi-signature key generation protocol that uses Schnorr proofs of possession without assuming an explicit proof-of-possession security model and without using the algebraic group model. Previous works always had to do one of these, since protocols that instantiated these PoPs with Schnorr signatures had to deal with the fact that you can't efficiently extract the secret key (that they claim to possess) from an adversary controlling many corrupted parties using the typical forking-lemma algorithm, which works when they control a single party. On the signing side of things, the paper simplifies the specification and construction of the 2-round Schnorr threshold signature, in particular by (i) showing that shares and polynomial commitments can be generated and sent at the same time and (ii) simplifying what goes into the nonce-binding hash, which makes it compatible with "ROAST".
Although the paper cleans up quite a few things, it explicitly leaves one question untouched: how to separate the security analysis of key generation from that of signing. The paper tries to explain why key generation and signing must be analysed together, but I couldn't fully grasp the explanation and it remains an important question in my mind. Consider this comment in a recent paper from Shoup [1]:
Indeed, our approach is much more like that of GS21, in that we try to avoid talking about distributed protocols at all, and rather, we examine the security of the ordinary, non-threshold Schnorr scheme in “enhanced” attack modes that correspond to attacks on various types of threshold signing protocols (which may use presignatures, for example) — the details of these threshold protocols do not matter that much, so long as they are designed in a reasonably modular way so as to satisfy certain natural security properties.
Here he claims that as long as the distributed key generation is designed in a "reasonably modular way" so as to satisfy "natural" security properties (which are not made explicit), then one can analyse the security of the signing protocol completely independently of key generation. I am quite confused by this, since on the one hand we have this paper, which thinks that there is no easy way to separate these two concerns, and on the other we have Shoup's paper, which implies that it's so easy it can be left as an exercise to the reader!
This inflexibility in the architecture of the security proof here may mean more work later when we need to apply tweaks to the key generation protocol. For example, it is not always the case that in a $t$-of-$n$ threshold scheme you want the number of polynomial commitments contributed during the key generation phase (let's call it $N$) to be precisely $n$. What if a trusted party distributes a key share ($N = 1$): is Olaf's signing protocol still secure? Of course it is, but since the signing protocol is only proved secure for a particular key generation protocol we can't say for sure (although for this case a proof should be easy to construct).
What about if $N = t$, that is, if $t$ parties (where we can assume one to be honest) generate a key and distribute shares to the $n$ parties? I claim that this would be secure, but it cannot be automatically assumed from the paper. Similarly, there is an interesting case at $N > n$ where you have parties that do not receive a share contribute random polynomial commitments to the key generation. In the case of hardware wallets this can even protect funds when all $n$ devices engaging in the key generation are corrupt, as long as one of the $N - n$ devices was not, in a model where the corrupted devices cannot communicate directly with the adversary. The sketch below makes the $N$ vs $n$ distinction concrete.
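A toy sketch of what I mean (plain Shamir-style aggregation over a small field, no PoPs or commitments; all names and parameters are mine, not the paper's): the shareholders just sum evaluations of $N$ contributed polynomials, and nothing in the key material itself forces $N = n$.

```python
# Toy sketch: N polynomial contributions, n shareholders, and nothing ties
# N to n.
import random

q = 2**61 - 1  # toy prime field order

def eval_poly(poly, x):
    return sum(a * pow(x, k, q) for k, a in enumerate(poly)) % q

def keygen(N, n, t):
    """N contributors each sample a degree-(t-1) polynomial; the n
    shareholders' shares are sums of evaluations at their index."""
    contributions = [[random.randrange(q) for _ in range(t)] for _ in range(N)]
    shares = {i: sum(eval_poly(c, i) for c in contributions) % q
              for i in range(1, n + 1)}
    secret = sum(c[0] for c in contributions) % q  # never materialized in a real DKG
    return shares, secret

def recover(points):
    """Lagrange interpolation at 0 from t shares (x_j, y_j)."""
    s = 0
    for xj, yj in points:
        lj = 1
        for xm, _ in points:
            if xm != xj:
                lj = lj * -xm % q * pow(xj - xm, -1, q) % q
        s = (s + yj * lj) % q
    return s

# N = 1 (trusted dealer), N = t, and N > n all work identically:
for N in (1, 3, 7):
    shares, secret = keygen(N, n=5, t=3)
    assert recover(list(shares.items())[:3]) == secret
```

Of course this says nothing about the security proof carrying over, which is exactly my worry: the construction generalizes trivially, the proof as structured does not obviously do so.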
Thanks for your thoughts on the separation of keygen and signing. That indeed cleared up a lot of things for me. I also hadn't seen Katz's recent paper.
Yes, that makes sense. Maybe you could just reword the explanation a bit to state clearly that this is to avoid enforcing unnecessary structure on the protocol. It sounded like "all keygens have this important property, but nevertheless we've decided to leave it up to the protocol designers to guarantee it". Why you did that is clear to me now, but it wasn't when I read it.
I do not have any suggestions about how to make this better, unfortunately. I think when writing this comment I was still sulking about things not being simulated against an ideal functionality, since if KeyGen were simulation secure then it could more easily be understood as a step in the procedure where you activate the ideal functionality (and then it would be the extra "thread" running the keygen). I understand better now after your explanation and thinking about it some more.

Yes, this is what I was thinking. For our project we have a coordinator with a lot of computational power and then small devices as our actual secret share holders.
I think it pretty much hinges on me not understanding this part of the simulation. Let me know if you guys update this part of it or if I can help explain where I'm getting stuck.
Follow up on one of my comments: do you think the parties doing keygen and signing can be disjoint, or at least not equal? For example, a set of parties K generates key shares for a set of signing parties S where S != K. See https://github.com/jonasnick/bip-frost-dkg/pull/1/files#diff-b335630551682c19a781afebcf4d07bf978fb1f8ac04c6bf87428ed5106870f5R234 for why I think it's important (sorry about the weird formatting!).
My intuition is yes, because I think of the security of these two phases as being separate. I think this question has practical implications for us because we'd like to use more devices to help generate the key than actually get shares.
Thanks again for helping me understand your paper.