In the Nethermind Cryptography Research team, we have recently delved into analyzing the security of some SNARKs built on top of the FRI protocol. This emerging trend has seen numerous prominent projects utilize some variation of the ethSTARK protocol or of Plonk compiled with FRI. However, certain security aspects of these protocols had not yet been analyzed. We address this in our work [BGK+23], where we analyze the Fiat-Shamir transformation of a wide class of PIOPs compiled with FRI. Our main result, stated TL;DR-style: these protocols are as secure as expected.
In a series of upcoming posts, we will present an intuitive overview of our work. The outline of our plan is as follows:
We outline some popular frameworks for constructing SNARG(K)s, including the approach where a Polynomial Interactive Oracle Proof (PIOP) is compiled with the FRI protocol and the Fiat-Shamir (FS) transformation. Subsequently, we explain the pitfalls and technical issues that arise from this approach, and state our main result. The two main issues are: (i) for the parameter choices used in practice, FRI does not yield a genuine Polynomial Commitment Scheme, so the standard PIOP-compilation analysis does not directly apply; and (ii) the security of the resulting protocols after the Fiat-Shamir transformation had not been established.
We will discuss soundness notions that are well-behaved under the Fiat-Shamir transformation. Moreover, we will explain one of our main results, which describes a wide class of PIOPs that can be compiled safely with FRI and the FS transformation into a SNARK.
We will outline why some popular FRI-based SNARKs fall into the class of PIOPs mentioned above. To do so, we also show that the FRI protocol is secure under the FS transform.
Part of our work has been developed independently and concurrently by StarkWare. Specifically, StarkWare very recently updated its ethSTARK documentation, showing that both FRI and ethSTARK are secure after the Fiat-Shamir transform [S23].
There is a well-known framework that allows one to build a SNARG(K) from a certain type of interactive protocol, called a Polynomial Interactive Oracle Proof (PIOP) [CHM+20, BFS20]. In essence, in a PIOP, the prover’s messages are (oracles to) polynomials of degree below a certain prescribed bound. We refer to such polynomials as low degree polynomials. More precisely, the verifier is granted access to each of these polynomials in the form of an oracle, which is an ideal algorithm that takes as input a point in some prescribed set and instantly returns the evaluation of the polynomial on that point. Notice that a PIOP is a particular instance of an Interactive Oracle Proof (IOP). In the latter, the prover’s messages are either constants or general functions, not necessarily low degree polynomials. In turn, IOPs (and PIOPs) are a generalization of Interactive Proofs (IP). These are IOPs where the prover is limited to sending constants (as opposed to functions) to the verifier.
Both in PIOPs and in IOPs, the verifier sends messages as usual; namely, the verifier’s challenges are elements uniformly sampled from some (subset of a) field (or some other challenge space). At the end of the interactive phase of a PIOP or of an IOP, to decide whether to accept or reject the proof, the verifier evaluates the received functions on some points and uses these evaluations to reach a decision.
Importantly, a PIOP is considered secure if it is secure under the following assumption:
Assumption✳️: An adversary’s messages are always low degree polynomials.
Working with Assumption✳️ usually makes the security analysis of a PIOP a very tractable problem, since one can use all the nice properties that polynomials possess, such as the beloved-by-all Schwartz-Zippel lemma. On the other hand, the situation is more involved for IOPs, since the prover’s messages are not necessarily polynomials anymore, and thus the comfort of field algebra is replaced by wilder combinatorial problems.
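For completeness, here is the standard statement of the lemma in a short math block (this is the textbook statement, not anything specific to our work):

```latex
% Schwartz-Zippel lemma: let P(Y_1, ..., Y_n) be a nonzero polynomial of
% total degree at most d over a field F, and let S be a finite subset of F.
% Then, for r_1, ..., r_n sampled uniformly and independently from S:
\Pr\bigl[ P(r_1, \dots, r_n) = 0 \bigr] \;\le\; \frac{d}{|S|}
```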
Despite being such nice objects, PIOPs are just an idealized type of protocol that cannot be materialized per se in the “real world”. Several reasons contribute to this, one of which is that Assumption✳️ is not realistic. Without any further constraints, an adversary may send arbitrary functions and pretend they are low degree polynomials. That’s from the security side. On the efficiency front, PIOPs (and also IOPs in this case) are “unusable” because “sending functions” to the verifier is dramatically impractical. Additionally, it’s worth noting that both PIOPs and IOPs require interaction between the prover and the verifier, which is something we ultimately want to eliminate.
Next, we outline a two-step recipe that takes care of these issues, transforming a PIOP Π into a Succinct Non-interactive Argument (SNARG), which we denote SNARG(Π). To clarify terms, recall that succinctness refers to the fact that proof sizes and verification times are sublinear, and argument refers to the fact that security is only considered against adversaries that run in polynomial time. Also, if a SNARG is knowledge sound, then it is a SNARK.
To make a PIOP (or an IOP) Π succinct, one can modify Π so that the honest prover, instead of sending low-degree polynomials to the verifier, sends commitments to these polynomials. To do so, the prover uses a Polynomial Commitment Scheme (PCS). Importantly, and roughly speaking, a PCS requires that only low degree polynomials can be committed, except with negligible probability.
Thus, introducing a PCS both constrains the prover to send low degree polynomials, and reduces proof sizes dramatically (since the commitments are usually small objects). Assuming that both the verification algorithms of Π and PCS are succinct, we obtain a Succinct Interactive Argument, which we denote PCS(Π). Moreover, one can show that if both Π and the PCS are “secure”, the resulting protocol is also secure [CHM+20, BFS20], producing a Succinct Interactive Argument of Knowledge.
The next step consists in applying the Fiat-Shamir (FS) transformation to PCS(Π). The resulting protocol, denote it SNARG(Π), is a Succinct Non-interactive Argument. The FS transform is a method for removing interaction from a protocol: the prover replaces each of the verifier’s random challenges with the hash of the input and all the messages produced so far. The idea is that, by modelling the hash function as a Random Oracle, the resulting hashes are, effectively, uniformly sampled elements.
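To make the transform concrete, here is a minimal Python sketch of challenge derivation; the hash choice, transcript encoding, and modular reduction are illustrative assumptions, not the construction of any specific protocol (in particular, a naive modular reduction introduces a small bias that real implementations must address):

```python
import hashlib

MODULUS = 2**64 - 2**32 + 1  # an example field modulus (the "Goldilocks" prime)

def fs_challenge(public_input: bytes, messages: list[bytes]) -> int:
    """Replace a verifier challenge with a hash of the public input
    and all prover messages sent so far, read as a field element."""
    h = hashlib.sha256()
    h.update(public_input)
    for m in messages:
        h.update(m)
    return int.from_bytes(h.digest(), "big") % MODULUS

# The prover computes c_i = fs_challenge(x, [m_1, ..., m_i]) instead of
# receiving c_i from the verifier.
```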
Now, perhaps surprisingly, the question of whether SNARG(Π) is secure is a delicate one. Indeed, it is possible that PCS(Π) is, for example, sound or knowledge sound, but SNARG(Π) is not. We will elaborate on this aspect later, in the more general context of interactive protocols. For now, let us mention that, to ensure SNARG(Π) remains secure after the FS transformation, one usually proves that PCS(Π) satisfies certain strengthened security properties, namely special soundness or round-by-round (RBR) knowledge soundness.
The previous recipe is followed by well known protocols like Plonk (and variations), Sonic, Marlin, vSQL, etc. All of them use KZG-type schemes as their Polynomial Commitment Scheme.
As mentioned, for each of these protocols Π, one can prove that PCS(Π) is secure by showing that Π is secure under Assumption✳️, and by proving the employed PCS is secure. Finally, to prove that the final SNAR(K) is secure, one needs to make sure that PCS(Π) meets some of the aforementioned “strong” security properties that are well-behaved under the Fiat-Shamir transform.
A recent line of work has taken some of the aforementioned protocols and replaced the KZG commitment scheme with the so-called FRI protocol. There are several reasons for doing so:
This flexibility is especially useful when designing complex modular proof systems that leverage recursion. For example, very roughly, one could design the proof pipeline in a zk-rollup in two steps: first, prove the execution of batches of transactions using a prover-friendly parameter configuration (Step 1); then, recursively aggregate those proofs into a single small proof that is cheap to verify on-chain (Step 2).
With this scheme in mind, one could use FRI-based protocols with different parameter configurations to accommodate the requirements of each step of the proving pipeline. This is the approach taken by Plonky2. Some protocols go as far as to use a FRI-based protocol for Step 1, and a completely different proving scheme for Step 2, e.g., Groth16.
Interestingly, after replacing KZG with FRI, some of these protocols (e.g., Plonk) become very much like the protocols that are often referred to as “STARKs”, namely DEEP-ALI and ethSTARK. With this in mind, in our work, we introduce a family of IOPs that encapsulate and abstract away all these protocols.
In terms of security, the usage of FRI raises a couple of formal difficulties. Below, we state the difficulties and elaborate on them in the subsequent sections.
Moreover, there are a number of other protocols whose security had only been analyzed for their interactive version.
Let Π be a secure (i.e., round-by-round knowledge sound) PIOP of a certain form [2]. Then the IOP that results from compiling Π with batched FRI (possibly using a large proximity parameter δ) is also secure (RBR knowledge sound), with a small increase in the soundness error.
Moreover, applying the Fiat-Shamir transformation to the compiled protocol results in a knowledge sound protocol FS(FRI(Π)), i.e. a SNARK.
As a consequence, one can prove the security of FS(FRI(Π)) just by proving the security of Π under Assumption ✳️, i.e., in the interactive setting and assuming that adversaries always send low degree polynomials (as opposed to arbitrary functions).
This result applies for example to Plonky2, Risc Zero, and ethSTARK.
First things first: in the FRI protocol, the prover knows a function f(X): 𝔽 → 𝔽, and the verifier has access to the evaluations of f(X) on a multiplicative subgroup H of 𝔽, where 𝔽 denotes any finite field.
Then, FRI allows the prover to prove that f(X) is a polynomial of degree at most a publicly specified bound… Uh oh, unfortunately this sentence is not entirely true, but let’s assume for a moment that it is. Later, we will come back to the “real world”.
Before we move on, let us recommend this excellent post by StarkWare for an introduction to FRI. Another great more technical source is [H22].
We refer to the Appendix in this post for a formal definition of PCS. In an ideal world where FRI allows one to prove that functions are low degree polynomials, FRI can be used to build a PCS as follows. To commit to a polynomial f(X) of degree ≤ d, the prover sends a commitment (e.g., a Merkle root) to the evaluations of f(X) on H. Then, to prove that f(v) = y for a point v ∈ 𝔽∖H and a value y ∈ 𝔽, the prover and the verifier run FRI on the quotient map q(X) = (f(X) − y)/(X − v), with degree bound d − 1.
If the FRI protocol passes for the function q(X), then (in our ideal world), except with negligible probability, the function q(X) is a polynomial of degree ≤ d − 1, and so f(X) = q(X)(X − v) + y is a polynomial of degree ≤ d, satisfying f(v) = y.
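A minimal Python sketch of the opening step in this ideal world; the toy prime and the function names are our own illustrative choices:

```python
P = 2**31 - 1  # toy prime field F_P (illustrative; real systems fix specific fields)

def open_at(f_evals: dict[int, int], v: int, y: int) -> dict[int, int]:
    """Given {x: f(x)} for x in H, return {x: q(x)} for the quotient
    q(X) = (f(X) - y) / (X - v). The prover then runs FRI on these
    evaluations with degree bound d - 1."""
    assert all(x != v for x in f_evals), "the opening point v must lie outside H"
    return {x: (fx - y) * pow(x - v, -1, P) % P for x, fx in f_evals.items()}
```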
The problem is that, in truth, FRI applied on a map g(X) for a degree bound d does not guarantee that g(X) is a polynomial of degree at most d. Rather, it guarantees that there is at least one polynomial q(X) of degree ≤ d that is δ-close to g(X). Here, δ-closeness means that there is a subset S ⊆ H such that g(x) = q(x) for all x ∈ S, and S is “reasonably large”.
What “large” means here is controlled by the so-called proximity parameter δ. One has 0<δ<1 and asks that the number of points in H where g(X) and q(X) disagree is at most δ |H|, more precisely, |H∖ S| ≤ δ |H|. The smaller the δ, the closer to a polynomial g(X) is. Then, as one would expect, the cost of running FRI (with a fixed level of targeted security) increases as δ diminishes.
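In code, δ-closeness is just a bound on the relative Hamming distance between the evaluation tables of g(X) and q(X) on H; a small illustrative sketch:

```python
def relative_distance(g_evals: list[int], q_evals: list[int]) -> float:
    """Fraction of points of H on which g and q disagree; g is
    delta-close to q exactly when this value is at most delta."""
    assert len(g_evals) == len(q_evals)
    return sum(a != b for a, b in zip(g_evals, q_evals)) / len(g_evals)
```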
This may already look like a deal-breaker: allowing the prover to commit to maps that are not low degree polynomials is not what we wanted, since it breaks Assumption ✳️. However, perhaps unexpectedly, if δ is very small then the previously described “ideal world PCS” can still be used as a PCS [VP19].
The reason is that for a “small” δ [1], any function f(X) has at most one δ-close low-degree polynomial q(X). It is not hard to prove that, if the map (f(X) − y)/(X − v) passes the FRI protocol, then, except with negligible probability, y = q(v). Thus, in this setting, the PCS can only be used to open the polynomial q(X), even if q(X) is not the initially committed map. This is good enough within the settings we’re concerned about.
The issue appears for “large” values of δ [1], which are the values commonly used for efficiency reasons. In this case it is perfectly possible that there is more than one polynomial of degree at most d that is δ-close to g(X). As observed and formalized in the RedShift paper [KPV22], in this scenario the (non-)“PCS” binds the committed map f(X) to a list of polynomials q₁(X), …, qᵣ(X), so that, given v ∈ 𝔽, the prover can successfully prove that f(v) = qᵢ(v) for any i of the prover’s choice. RedShift calls this primitive a list Polynomial Commitment Scheme.
While this is enough for protocols that deal with “very narrow execution traces” (as proved in RedShift in the interactive setting), this scheme does not scale to wider traces. More precisely, the soundness proof yields an error which is exponential in the number of witness polynomials. We will elaborate on this in Part 2 of the series.
Therefore, in general, it is currently not possible to stick to the PIOP + PCS recipe if one wants to use FRI “as a PCS”. Instead, one needs to step outside the PIOP framework and analyze the protocol as an IOP, which, as we mentioned, is much more inhospitable terrain, since Assumption ✳️ can’t be used. This is the approach taken (though not with the formalism used here) by protocols such as DEEP-ALI and ethSTARK [S23].
These protocols use the so-called batched FRI protocol. There, instead of having a FRI invocation for each quotient map (f(X) − y)/(X − v), FRI is performed on a random linear combination of all the quotient maps. As we will explain in Part 2, this, which may simply look like “an efficiency hack”, has a dramatic impact on the (provable) soundness of the protocol.
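To illustrate, here is a Python sketch of the batching step; combining the quotients with powers of a single random challenge α is one common choice (the names, and the powers-of-α choice itself, are illustrative; some protocols instead use independent random coefficients):

```python
def batch_quotients(quotients: list[dict[int, int]], alpha: int, p: int) -> dict[int, int]:
    """Fold all quotient maps q_i(X) = (f_i(X) - y_i)/(X - v_i) into the
    single map sum_i alpha^i * q_i(X), evaluated pointwise on H.
    Batched FRI then runs a single FRI instance on the result."""
    batched = {}
    for x in quotients[0]:
        acc, power = 0, 1
        for q in quotients:
            acc = (acc + power * q[x]) % p
            power = power * alpha % p
        batched[x] = acc
    return batched
```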
Remark: While RedShift is arguably the first academic work combining Plonk and FRI, “related” protocols such as Plonky2 and Risc Zero use FRI in a similar way to how it is used in DEEP-ALI and ethSTARK (which is different from RedShift’s approach).
In particular, the IOP security of these protocols does not follow from the RedShift paper (since they work with wide traces). Moreover, these protocols, along with DEEP-ALI and ethSTARK, were not known to be secure after the FS transform. Our work addresses these gaps.
We now informally state one of our results. The security notion we deal with is round-by-round knowledge soundness, which guarantees knowledge soundness under the Fiat-Shamir transform. In the next part of the series, we will explain this concept and state the result more formally.
As mentioned, in terms of security, applying the Fiat-Shamir transformation to an interactive proof is not straightforward, as the transformation can ruin the security of the initial protocol. Below, we describe two examples of this phenomenon: one well-known, and the other recently described in [AFK22].
Before we begin, let us introduce some further formalism around Interactive Proofs (IP). A relation R is a set of pairs of the form (x, w). Here, x is called an instance or a public input, and w is called a witness. An IP Π for a relation R is a tuple of algorithms Π = (P, V), the prover and the verifier, respectively. Both P and V take an instance x as common input. Additionally, P takes as input a witness w such that (x, w) ∈ R.
Then, P and V engage in an interactive protocol during which, for μ rounds of communication, P sends a message mᵢ to V, and V replies with a random challenge cᵢ (i = 1, …, μ). Finally, P sends one last message m_{μ+1}, and then V inspects the transcript (m₁, c₁, …, m_μ, c_μ, m_{μ+1}) and decides whether to accept or reject.
Suppose Π is an interactive proof with soundness error 1/2, meaning that, for any malicious prover P*, the chances that P* convinces the verifier after executing the protocol are at most 1/2.
Now assume that, to increase the security of our protocol, we create a new interactive proof Πₜ consisting of repeating Π for t times in a row. That is, in Πₜ, the prover Pₜ and verifier Vₜ first execute Π; if the verifier of Π would reject the proof, Vₜ rejects. Otherwise, they repeat this process t − 1 more times.
The resulting protocol, Πₜ, has soundness error 1/2ᵗ. This is because, to falsely convince Vₜ, a malicious prover for Πₜ has to convince Π’s verifier t times in a row.
Consider now the Fiat-Shamir transformation of Πₜ, which we denote FS(Πₜ). Ideally, we would like FS(Πₜ) to have roughly the same level of security as Πₜ, i.e. we’d like that, to break FS(Πₜ), an attacker P* is required to perform at least O(2ᵗ) work.
However, with just O(t) work [3], it is possible to maliciously construct a false proof with reasonably high probability. The intuition is the following: suppose P* manages to (falsely) find a valid proof π₁ for the first instance of FS(Π) in FS(Πₜ). Suppose, however, that afterward P* attempts to find a valid proof π₂ for the second instance of FS(Π), but does not succeed. In an interactive scenario, P* would need to start its malicious attempt from scratch, since the verifier would simply reject the proof. In the non-interactive scenario, however, P* can instead keep the proof π₁, discard the failed attempt at π₂, and try again to “break” the second instance of FS(Π) in FS(Πₜ).
This is called a rewinding attack, and it is the crux of why the Fiat-Shamir transformation can degrade security.
Inductively, if the prover has (dishonestly) generated valid proofs π₁, …, πᵤ for the first u < t instances of FS(Π) in FS(Πₜ), then it can attempt to generate a valid proof π_{u+1} for the (u+1)-th instance. Even if it fails, it can keep the prior proofs π₁, …, πᵤ and attempt again to generate a valid π_{u+1}, and so on until it has u + 1 valid proofs. Assuming P* is able to break an instance with probability 1/2 per attempt, then just by performing, say, 10 attempts, P* has very high chances of ending up with the next valid proof π_{u+1}. Overall, with about 10t work [3], P* obtains a valid false proof for FS(Πₜ).
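The following toy simulation captures the cost profile of this attack; each loop iteration models one forging attempt that succeeds with probability 1/2 (matching Π's soundness error), so this is a schematic model, not an attack on any concrete protocol:

```python
import random

def attack_sequential_fs(t: int) -> int:
    """Toy model of the attack on FS(Pi_t): attack one instance of FS(Pi)
    at a time, never discarding proofs already found. Returns the total
    number of attempts, about 2*t on average instead of the naive 2**t."""
    proofs_found, attempts = 0, 0
    while proofs_found < t:
        attempts += 1
        if random.random() < 0.5:   # this instance falls; keep its proof
            proofs_found += 1
        # on failure, retry only this instance: earlier proofs survive
    return attempts
```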
As mentioned, the following vulnerability was only described recently in [AFK22].
For the second example, let Π be an Interactive Proof with soundness error 1/2^λ. To increase the soundness of Π, instead of repeating Π sequentially, we now repeat it in parallel. More precisely, we let Πᵗ be the IP consisting of a prover and a verifier executing Π for t times in parallel. Formally, given a public instance x, Πᵗ’s prover and verifier run t parallel instances of the prover and the verifier from Π on the instance x. Then, the verifier of Πᵗ accepts if and only if each of the t verifier instances of Π accepts.
Intuitively, it is clear that Πᵗ has soundness error 1/2^{tλ}. However, for a wide (and natural) class of IPs, called special unsound, it turns out that, again, FS(Πᵗ) [4] does not achieve this level of soundness.
This phenomenon is better explained with an example. To this end, we consider the following toy IP, inspired by ethSTARK and Plonk. Because of this resemblance, we omit some technical details, as well as the whys and hows of the protocol.
Below, 𝔽 is a finite field, H is a multiplicative subgroup of 𝔽, and Q₁(Y₁,Y₂), Q₂(Y₁,Y₂) are two fixed bivariate polynomials.
Claim: The prover claims it knows two low degree polynomials f₁(X), f₂(X) such that, for all x ∈ H,

Q₁(f₁(x), f₂(x)) = 0 and Q₂(f₁(x), f₂(x)) = 0.   (1)
Round 1: The prover first sends a commitment (via a Polynomial Commitment Scheme) to f₁(X) and f₂(X). The verifier replies with a random field element c₁.
Round 2: The honest prover replies with a commitment to the polynomial

q(X) = (Q₁(f₁(X), f₂(X)) + c₁Q₂(f₁(X), f₂(X))) / Z_H(X),   (2)
where Z_H(X) is the vanishing polynomial of H. Note that, if the prover is being honest, then Z_H(X) indeed divides Q₁(f₁(X), f₂(X)) + c₁Q₂(f₁(X), f₂(X)) as the latter polynomial vanishes on H. The verifier replies with another random field element c₂.
Round 3: The honest prover replies with f₁(c₂) and f₂(c₂). The honest prover and the verifier use the evaluation protocol from the Polynomial Commitment Scheme to certify that the evaluations f₁(c₂) and f₂(c₂) are correct (technically, this would add more rounds of communication to the protocol).
Verifier’s decision: The verifier accepts if and only if the previous verifications passed, and

q(c₂) Z_H(c₂) = Q₁(f₁(c₂), f₂(c₂)) + c₁Q₂(f₁(c₂), f₂(c₂)).   (3)
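For concreteness, here is a small Python sketch of the verifier's final check (3); Q₁ and Q₂ are passed as callables, the field setup and names are illustrative, and we use the fact that Z_H(X) = X^|H| − 1 when H is a multiplicative subgroup of order |H|:

```python
def verifier_check(q_c2: int, f1_c2: int, f2_c2: int,
                   c1: int, c2: int, p: int, h_order: int,
                   Q1, Q2) -> bool:
    """Check equation (3): q(c2) * Z_H(c2) == Q1(f1(c2), f2(c2))
    + c1 * Q2(f1(c2), f2(c2)) over the field F_p, where
    Z_H(X) = X^|H| - 1 for a multiplicative subgroup H of order |H|."""
    z_h = (pow(c2, h_order, p) - 1) % p
    lhs = q_c2 * z_h % p
    rhs = (Q1(f1_c2, f2_c2) + c1 * Q2(f1_c2, f2_c2)) % p
    return lhs == rhs
```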
In our subsequent analysis we make Assumption ✳️ for simplicity, i.e., we assume that dishonest provers only commit to low degree polynomials f₁(X), f₂(X), q(X) (as opposed to arbitrary maps). Note that, if we find a security issue for such a prover, then there is a security issue in general.
Now suppose a malicious prover commits to low degree polynomials that do not satisfy (1) for some x∈ H. Then, Z_H(X) does not divide Q₁(f₁(X), f₂(X)) and/or it does not divide Q₂(f₁(X), f₂(X)). However, in principle it is possible that there exists c₁∈𝔽 such that Z_H(X) divides Q₁(f₁(X), f₂(X)) + c₁Q₂(f₁(X), f₂(X)) [5].
If the verifier were to send such c₁, then the malicious prover could continue with the protocol as if it was honest, and it would for sure convince the verifier at the end of it. Let’s call a challenge c₁ with this property a special challenge.
Now suppose the verifier does not send a special challenge. Then, the right-hand side of (2) is not a low-degree polynomial, and so

q(X) Z_H(X) ≠ Q₁(f₁(X), f₂(X)) + c₁Q₂(f₁(X), f₂(X))

as polynomials, no matter what low-degree polynomial q(X) the prover commits to. However, there may still be a number of field elements c₂ such that (3) holds, namely, as many as the degree of q(X) Z_H(X) − (Q₁(f₁(X), f₂(X)) + c₁Q₂(f₁(X), f₂(X))).
Similarly as before, if the verifier were to send one such c₂, the malicious prover could complete the protocol as if it was honest, and eventually falsely convince the verifier. Again, we call c₂ a special challenge.
This situation is captured by the following definition, which was introduced in [AFK22].
Special unsound Interactive Proof (informal): An interactive proof Π is special unsound if, at each round i = 1, …, μ, there is a set of “special” verifier challenges Cᵢ such that, if the verifier sends a challenge from Cᵢ, then a malicious prover can complete the protocol as if it was honest, and convince the verifier at the end.
Next, we explain why, for a special unsound IP Π, we have that the FS transform of Πᵗ does not have the soundness one might expect. For simplicity we restrict to t=2.
To do so, we continue with our toy Interactive Proof, and we show that FS(Π²) has roughly the same security as FS(Π). Indeed, let x be a public input and suppose a malicious prover P* selects as its first message for FS(Π²) a tuple (m₁₁, m₁₂), where [4] m₁₁ is a commitment to f₁(X), f₂(X) and m₁₂ is a commitment to g₁(X), g₂(X), for some low degree polynomials f₁(X), f₂(X) and g₁(X), g₂(X) such that either f₁(X), f₂(X) or g₁(X), g₂(X) do not satisfy (1) for all x ∈ H.
Then P* computes the challenge pair (c₁₁, c₁₂) = Hash(x, (m₁₁, m₁₂)).
Suppose P* is lucky enough that c₁₁ is a special challenge (but c₁₂ is not necessarily special). Then, from this point on, P* can select the messages of the first instance of Π in FS(Π²) as if it were honest, and the verifier will accept the final proof corresponding to this instance. As a consequence, from this point on, P* need only worry about “breaking” the second instance of Π. To do so, it can rewind the proof as many times as it needs, as long as it does not modify the initial partial transcript (x, (m₁₁, m₁₂), (c₁₁, c₁₂)). Thus, for example, P* may attempt multiple times to find a 2nd message (m₂₁, m₂₂) such that the challenge c₂₂, where (c₂₁, c₂₂) = Hash(x, (m₁₁, m₁₂), (c₁₁, c₁₂), (m₂₁, m₂₂)), is special. If it succeeds, then P* can complete the proof as if it were honest, no matter what the challenge c₂₁ is.
Summarizing, P* can attack FS(Π²) as follows:
1. Grind over first messages (m₁₁, m₁₂) until one of the derived first-round challenges, say c₁₁, is special.
2. Keeping that first round fixed, grind over second messages (m₂₁, m₂₂) until the second-round challenge c₂₂ of the other instance is special.
3. Complete both instances of the protocol as if honest; the verifier accepts.
Amount of work required: For the attack above to succeed, P* has to, essentially, “break” two instances of FS(Π): one during the first round, and the other during the second round. Given that Π has soundness error 1/2^λ, we can reasonably assume that each of these tasks requires about 2^λ work. Hence, overall, P* has to perform ≈ 2^λ + 2^λ = 2^{λ+1} work to have a reasonable chance of breaking FS(Π²). This is dramatically lower than what one would expect, namely 2^{2λ}, given that the interactive protocol Π² has soundness error 1/2^{2λ}.
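As a sanity check on this arithmetic, a toy calculation comparing the attack cost with the naive expectation:

```python
import math

def attack_cost_bits(lam: int, t: int = 2) -> tuple[float, float]:
    """Bits of work for the special-unsoundness attack on FS(Pi^t)
    versus the naive expectation (toy estimate for the t = 2 case)."""
    attack = t * 2**lam        # grind ~2^lam work per instance, t instances
    naive = 2**(t * lam)       # what soundness error 1/2^(t*lam) suggests
    return math.log2(attack), math.log2(naive)

# attack_cost_bits(80) == (81.0, 160.0): about 2^(lam+1) work suffices,
# far below the expected 2^(2*lam).
```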
The work [DMW+23] found many protocol vulnerabilities related to using the so-called weak Fiat-Shamir transform (in which one does not include the statement as part of the random oracle’s inputs).
We think that using (the standard) FS on a protocol that has some parallelized components is also a likely pitfall. While we haven’t found any protocol with this vulnerability, we emphasize it here so it is easier for practitioners to avoid it.
In our work [BGK+23] we describe a subtle (and arguably more natural) variation of Plonky2 that does have the vulnerability. Let us remark that this variation is just hypothetical; as we show in our work, Plonky2 as described in our paper is secure.
The attack on the FS transform of the parallel repetition of an IP contradicts a somewhat common heuristic regarding the security of the FS transformation. Namely, this heuristic/intuition states that, as long as each round of an interactive protocol “has λ bits of security”, the FS-transformed protocol also has, roughly, λ bits of security. In other words, the FS transformation is dangerous only when the interactive protocol builds up to λ bits of security by “accumulating soundness across individually less secure rounds”. The latter is precisely what occurs in Example 1, and we have seen that, indeed, in that case the FS transformation has disastrous results.
However, Example 2 does not conform to this heuristic. Indeed, let Π₀ be an IP with μ rounds of communication, each of whose rounds has λ₀ bits of security, and let Π₀ᵗ be the result of repeating Π₀ for t times in parallel. Then one could reasonably say that each round of Π₀ᵗ has, roughly speaking, λ := tλ₀ bits of security. However, the FS transform of Π₀ᵗ has about tλ/μ bits of security.
In the next two posts we will discuss different “Fiat-Shamir compatible” notions of soundness, including round-by-round (knowledge soundness). Then we will explain our main result in detail, and provide intuition on its proof. In particular, we will explain why FRI is round-by-round sound.
This article has been prepared for the general information and understanding of the readers, and it does not indicate Nethermind’s endorsement of any particular asset, project or team, nor guarantee its security. No representation or warranty, express or implied, is given by Nethermind as to the accuracy or completeness of the information or opinions contained in the above article. No third party should rely on this article in any way, including without limitation as financial, investment, tax, regulatory, legal or other advice, or interpret this article as any form of recommendation.
• [AFK22] Attema, T., Fehr, S., Klooß, M. (2022). Fiat-Shamir Transformation of Multi-round Interactive Proofs. In: Kiltz, E., Vaikuntanathan, V. (eds) Theory of Cryptography. TCC 2022. Lecture Notes in Computer Science, vol 13747. Springer, Cham.
• [BGK+23] Block, A. R., Garreta, A., Katz, J., Thaler, J., Tiwari, P. R., & Zajac, M. (2023). Fiat-Shamir Security of FRI and Related SNARKs. Cryptology ePrint Archive.
• [BFS20] Bünz, B., Fisch, B., Szepieniec, A. (2020). Transparent SNARKs from DARK Compilers. In: Canteaut, A., Ishai, Y. (eds) Advances in Cryptology — EUROCRYPT 2020. Lecture Notes in Computer Science, vol 12105. Springer, Cham.
• [CHM+20] Chiesa, A., Hu, Y., Maller, M., Mishra, P., Vesely, N., Ward, N. (2020). Marlin: Preprocessing zkSNARKs with Universal and Updatable SRS. In: Canteaut, A., Ishai, Y. (eds) Advances in Cryptology — EUROCRYPT 2020. Lecture Notes in Computer Science, vol 12105. Springer, Cham.
• [DMW+23] Dao, Q., Miller, J., Wright, O., & Grubbs, P. (2023). Weak Fiat-Shamir Attacks on Modern Proof Systems. Cryptology ePrint Archive.
• [H22] Haböck, U. (2022). A summary on the FRI low degree test. Cryptology ePrint Archive.
• [KPV22] Kattis, A. A., Panarin, K., & Vlasov, A. (2022). RedShift: Transparent SNARKs from List Polynomial Commitments. In Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security (pp. 1725–1737).
• [Polygon] Plonky2: A deep dive (2022).
• [S23] StarkWare (2023). ethSTARK Documentation v1.2. Cryptology ePrint Archive, Paper 2023/582.
• StarkWare (2019). Low degree testing, the secret sauce of succinctness.
• Thaler, J. (2023). 17 misconceptions about SNARKs (and why they hold us back).