My review of Practical Schnorr Threshold Signatures Without the Algebraic Group Model
However, this is non-trivial because we need to consider the PedPoP DKG and the signing protocol together. The rationale behind this is that PedPoP (like Pedersen DKG) lacks the ability to be simulated. Hence, it becomes crucial to examine the combination of the DKG and the signing protocol as a unified execution in order to thoroughly analyze its properties.
Lacks the ability to be simulated against what? I'm guessing an ideal trusted party who generates a perfectly uniform key and distributes Shamir shares. But do we know that it can't be simulated against a functionality that allows biasing the keys?
(top of the page)
this share $x_i$ from the combined one $x$ is solely feasible if the reduction possesses the secret keys belonging to all remaining signers, including those controlled by the adversary
Worth mentioning here that this is trivial if the key is generated with an "honest majority", $n > 2(t-1)$, since then there are at least $t$ honest parties who will receive shares during the protocol and who can therefore reconstruct the adversary's contribution.
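As a minimal sketch of that reconstruction (my own toy code, not from the paper), assuming the shares are Shamir shares over a prime field: any $t$ of them determine the dealer's secret by Lagrange interpolation at $0$.

```python
# Toy sketch (illustrative modulus only): recovering a dealer's secret
# contribution from t shares by Lagrange interpolation at x = 0.
Q = 2**61 - 1  # a prime standing in for the group order

def recover_secret(shares, q=Q):
    """shares: list of (i, f(i)) pairs; needs at least t = deg(f) + 1 of them."""
    secret = 0
    for i, y_i in shares:
        lam = 1
        for j, _ in shares:
            if j != i:
                # Lagrange coefficient at 0: lambda_i = prod_{j != i} j / (j - i)
                lam = lam * j % q * pow((j - i) % q, -1, q) % q
        secret = (secret + y_i * lam) % q
    return secret

# Example: a dealer's polynomial f(x) = 5 + 3x + 7x^2 (t = 3); the three
# honest parties holding f(1), f(2), f(3) recover the constant term 5.
f = lambda x: (5 + 3 * x + 7 * x * x) % Q
assert recover_secret([(1, f(1)), (2, f(2)), (3, f(3))]) == 5
```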
Moreover, two honest signers $S_i$ and $S_j$ may in general output two different joint public keys $pk_i \neq pk_j$ (in which case the adversary may forge under either public key). We believe that the latter should never happen in any meaningful and secure key generation protocol. Yet, our way of modeling ensures that it is the responsibility of the scheme to exclude (or otherwise handle) these cases of inconsistency between honest signers.
I didn't understand why there is this difference between what is modelled and what is meaningful. It also seems like in the paper's protocol this is guaranteed.
3 : // Run KeyGen(i) with A controlling network connections
4 : // between signer Si and all signers Sj, j ∈ {1, ..., n} \ {i},
5 : // and in a separate thread concurrently to other oracle calls
What does it mean to have a separate thread here? In my intuition, the game is simulating the honest parties in the protocol, so this is just a back and forth between the adversary and the game. Hopefully there's no need for threads!
Moreover, signer $S_i$ verifies the shares received from the other signers by checking
Ok so we are in the setting where the adversary is able to abort the protocol and we are not trying to add accountability as to which parties are adversarial, right?
If so I claim that we need not check each share individually. We can just aggregate the shares, aggregate the polynomial coefficient commitments, and check that the aggregate share is valid against the aggregate polynomial, aborting otherwise (while still verifying PoPs).
Furthermore we don't need to include the public polynomial shares in the common string check for Eq.
It might even be possible to have a single party do the public polynomial commitment aggregation and send it out to all other parties (while still preserving the first term and the PoPs for it). The parties would still receive disaggregated secret shares and just add them together themselves. This would reduce computation to O(n) for each party when verifying shares (assuming there is some kind of coordinating party that doesn't receive a share doing the aggregation).
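A minimal sketch of the aggregated check I have in mind (toy parameters and my own notation, not the paper's): party $i$ sums the secret shares it received and verifies that single sum against the coefficient-wise product of all dealers' Feldman commitments. A failed check still lets the party abort; what is given up is only identifying which dealer cheated, which matches the no-accountability setting above.

```python
# Toy sketch of aggregated share verification (illustrative parameters only).
# Dealers commit to their polynomials Feldman-style: A_{d,k} = g^{a_{d,k}}.
p, q, g = 23, 11, 2      # tiny prime-order subgroup: g has order q in Z_p^*

def commit(coeffs):
    """One group element per polynomial coefficient."""
    return [pow(g, a, p) for a in coeffs]

def share(coeffs, i):
    """Shamir share: evaluate the polynomial at participant index i (mod q)."""
    return sum(a * pow(i, k, q) for k, a in enumerate(coeffs)) % q

def verify_aggregated(i, shares_for_i, all_commitments):
    """Single check of the summed share against the aggregated commitment."""
    s = sum(shares_for_i) % q
    t = len(all_commitments[0])
    # Coefficient-wise product of all dealers' commitments.
    agg = [1] * t
    for comm in all_commitments:
        for k in range(t):
            agg[k] = agg[k] * comm[k] % p
    # Evaluate the aggregated committed polynomial "in the exponent" at i.
    rhs = 1
    for k in range(t):
        rhs = rhs * pow(agg[k], pow(i, k, q), p) % p
    return pow(g, s, p) == rhs

# Three dealers, threshold t = 2, verifying party i = 3
polys = [[4, 7], [1, 9], [6, 2]]
comms = [commit(f) for f in polys]
shares_3 = [share(f, 3) for f in polys]
assert verify_aggregated(3, shares_3, comms)
```

The same check works whether each party aggregates the commitments itself or a coordinator sends out the pre-aggregated commitment.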
What's with the $i = \min(HS)$? What's so special about the minimum honest signer?
- I think $A_{\kappa,0} = X^*$ should be explicitly set (the comment on line 7 only implies it).
- This long comment might be easier to understand if the concept of a Vandermonde matrix is introduced in the prelims or something.
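For concreteness, the relation I have in mind (my notation, not the paper's): a dealer's vector of shares is just a Vandermonde matrix applied to its coefficient vector,

$$
\begin{pmatrix} f(1) \\ f(2) \\ \vdots \\ f(n) \end{pmatrix}
=
\begin{pmatrix}
1 & 1 & 1 & \cdots & 1 \\
1 & 2 & 2^2 & \cdots & 2^{t-1} \\
\vdots & \vdots & \vdots & & \vdots \\
1 & n & n^2 & \cdots & n^{t-1}
\end{pmatrix}
\begin{pmatrix} a_0 \\ a_1 \\ \vdots \\ a_{t-1} \end{pmatrix}
$$

so inverting any $t$ rows of this matrix is exactly what interpolation/reconstruction does.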
Notation: I use additive rather than multiplicative notation to make the point clearer.
I cannot convince myself that the values for gamma and delta are correct here. We are trying to evaluate
First I define a function similar to
The paper's definition is:
note the relationship:
So in the paper
(note: the
I think this is wrong. The simulated polynomial is determined by
If we let
And so:
There are some other things I didn't quite follow in the reduction, so maybe I've just misunderstood the approach somehow, but it seems to me like there is something off here.
The paper presents important simplifications to both the theory and practice of FROST-style threshold Schnorr signature schemes. The paper contributes a multi-signature key generation protocol that uses Schnorr proofs of possession without assuming an explicit proof-of-possession security model or using the algebraic group model. Previous works always had to do this, since protocols that instantiated these PoPs with Schnorr signatures had to deal with the fact that you can't efficiently extract the secret key (that they claim to possess) from an adversary controlling many corrupted parties using the typical forking-lemma algorithm that works when they control a single party. On the signing side of things the paper simplifies the specification and construction of the 2-round Schnorr threshold signature, in particular by (i) showing that shares and polynomial commitments can be generated and sent at the same time and (ii) simplifying what goes into the nonce-binding hash, which makes it compatible with "ROAST".
Although the paper cleans up quite a few things, it explicitly leaves one question untouched: how to separate the security analysis of key generation from that of signing. The paper tries to explain why key generation and signing must be analysed together, but I couldn't fully grasp the explanation and it still remains an important question in my mind. Consider this comment in a recent paper from Shoup¹:
Indeed, our approach is much more like that of GS21, in that we try to avoid talking about distributed protocols at all, and rather, we examine the security of the ordinary, non-threshold Schnorr scheme in “enhanced” attack modes that correspond to attacks on various types of threshold signing protocols (which may use presignatures, for example) — the details of these threshold protocols do not matter that much, so long as they are designed in a reasonably modular way so as to satisfy certain natural security properties.
Here he claims that as long as the distributed key generation is designed in a "reasonably modular way" so as to satisfy "natural" security properties (which are not made explicit), then one can analyse the signing protocol's security completely independently of key generation. I am quite confused by this, since on the one hand we have this paper, which thinks that there is no easy way to separate these two concerns, and on the other we have Shoup's paper, which implies that it's so easy it can be left as an exercise to the reader!
This inflexibility in the architecture of the security proof here may mean more work later when we need to apply tweaks to the key generation protocol. For example, it is not always the case that in a $t$-of-$n$ threshold scheme you want the number of polynomial commitments contributed during the key generation phase (let's call it $N$) to be precisely $n$. What if a trusted party distributes a key share ($N = 1$), is Olaf's signing protocol still secure? Of course it is, but since the signing protocol is only proved secure for a particular key generation protocol we can't say for sure (although for this case a proof should be easy to construct).
What about $N = t$, that is, $t$ parties (where we can assume one to be honest) generate a key and distribute shares to the $n$ parties? I claim that this would be secure, but it cannot be automatically assumed from the paper. Similarly there is an interesting case at $N > n$ where you have parties that do not receive a share contribute random polynomial commitments to the key generation. In the case of hardware wallets this can even protect funds when all $n$ devices engaging in the key generation are corrupt, provided one of the $N - n$ devices is not, in a model where the corrupted devices cannot communicate directly with the adversary.
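To make the parameter explicit (my notation; this generalization is not spelled out in the paper): with $N$ contributed polynomials $f_1, \ldots, f_N$ of degree $t-1$ and constant-term commitments $A_{\kappa,0} = g^{f_\kappa(0)}$, the joint key and shares are

$$
X = \prod_{\kappa=1}^{N} A_{\kappa,0}, \qquad x_i = \sum_{\kappa=1}^{N} f_\kappa(i) \quad \text{for } i = 1, \ldots, n,
$$

and nothing in these equations ties $N$ to $n$; the open question is only for which choices of $N$ the unforgeability proof still goes through.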
On Analyzing Key Generation and Signing Separately
Some key generation protocols ("DKGs") are fully simulation-based secure (e.g., in the UC framework). The most prominent example is "New-DKG" from GJKR [1], but almost all "modern" (= recently proposed) DKGs fall into this category. Simulation-based security (or "simulatability") has the great advantage that we can combine a simulatable DKG with any (e.g., signing) protocol whose security analysis assumes a trusted setup, and security of the combination of the two protocols follows automatically. In other words, we can analyze the security of key generation and signing separately.
The drawback of simulatability is that it is a very strong notion of security: simpler and more efficient protocols, like Pedersen DKG (and its variant PedPoP), don't provide simulatability. In the concrete case of Pedersen/PedPoP, the reason is that the attacker can bias the final public key [1], e.g., the attacker has full control over the last bit of the public key and can force it to be 0. Now it turns out that this does not really help the attacker if all we want to do with the key setup is to create a Schnorr signature [1]. But this cannot be proven in a modular way. In particular, we cannot analyze the security of a signing protocol under the assumption that keys are set up as if they come from a trusted dealer, because this assumption simply does not hold: in the case of a trusted dealer, the attacker cannot force the last bit to be 0.
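As a toy illustration of that bias (illustrative parameters only, not code from the paper): a rushing attacker who sees the honest dealers' constant-term commitments first can simply retry its own contribution until the combined key's last bit is 0.

```python
# Toy sketch (illustrative parameters only): biasing the final Pedersen DKG key.
# The attacker waits for the honest constant-term commitments, then grinds its
# own secret until the combined public key ends in a 0 bit. (In the real
# protocol it would also have to send shares consistent with that secret.)
import secrets

p, q, g = 23, 11, 2                       # tiny order-q subgroup of Z_p^*

honest_A0 = [pow(g, secrets.randbelow(q), p) for _ in range(2)]
H = 1
for A in honest_A0:
    H = H * A % p                         # product of the honest contributions

while True:
    a = secrets.randbelow(q)              # attacker's candidate secret
    X = H * pow(g, a, p) % p              # candidate combined public key
    if X % 2 == 0:                        # grind until the last bit is 0
        break

print(f"biased key X = {X} (attacker contribution a = {a})")
```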
So this is a trade-off, and what we wanted is a simple practical DKG. Moreover, we're not aware of any simulatable DKG in the literature that has been proven secure under dishonest majority. (And I can imagine this is fundamental, and you'll probably need to relax the DKG security in some way.) So we took PedPoP, and yes, this comes at the cost of less modularity in the security proof, i.e., we need to analyze the combination of PedPoP+FROST3 as a monolithic thing.
When we wrote this, our view of the world was that there are just two classes of DKGs: DKGs that are fully simulatable and DKGs that are not. Now you bring up that maybe we could relax simulatability such that it explicitly allows the attacker to bias the key, but not more. This is a very interesting idea that has also been brought up by Jon Katz recently [2].
I believe this gives rise to interesting research questions:
Is PedPoP (or a similarly simple DKG) simulatable-with-bias (even under dishonest majority)? And does this suffice to prove FROST unforgeable?
This is not trivial:
We could reach out to him and see if he's interested in this, but his focus may be on honest majority and robust DKGs. But even a simple chat could be fruitful.
[1] The best reference is their JoC version: https://link.springer.com/content/pdf/10.1007/s00145-006-0347-3.pdf
[2] see the very last sentence in https://eprint.iacr.org/2023/1094.pdf: "It is also possible to define variants of the key-generation functionality that ensure robustness, but are weaker than FKeyGen in that they allow the attacker to bias the public key. We leave a full consideration of such variants to future work." See also his talk at the NIST threshold workshop, in particular slide "DKG with Shift": https://csrc.nist.gov/csrc/media/Presentations/2023/mpts2023-day1-talk-dl-dkg/images-media/mpts2023-1b1-slides--katz--DKG-DL.pdf
(More to follow...)