LatticeFold

This article aims to intuitively explain the goals and processes of the LatticeFold protocol.


Last updated 1 month ago

Introduction

LatticeFold is the first lattice-based folding scheme. Folding schemes require an additively homomorphic commitment scheme, which is why Pedersen commitments are commonly used. However, these elliptic-curve-based commitments are vulnerable to quantum attacks, require large fields, and rely on non-native field arithmetic for folding verification. In contrast, LatticeFold uses Ajtai commitments based on the Module SIS problem, which is believed to be quantum-resistant, supports small fields, and is cost-efficient to verify.

Background

Folding Schemes: Motivation and Challenges

A circuit is typically expressed as $(x; w)$, where the public input $x$ and witness $w$ satisfy a certain relation $\mathcal{R}$. For example, for a circuit representing $w^2 = x$, pairs such as $(9; 3)$ and $(16; 4)$ satisfy the relation. The public input constrains the relation, while the witness must remain hidden; a zkSNARK proof attests that the relation holds without revealing the witness.
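As a minimal sketch (illustrative code, not part of any protocol), the squaring relation can be checked directly:

```python
# Toy relation R = {(x; w) : w^2 = x}; (9; 3) and (16; 4) satisfy it.
def satisfies(x, w):
    return w * w == x

assert satisfies(9, 3)
assert satisfies(16, 4)
assert not satisfies(10, 3)   # (10; 3) is not in the relation
```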

Fig 1. State machine with multiple steps

As shown in Fig. 1, some computations are naturally expressed as a sequence of steps. For example, $(2^{16}; 2^8), (2^8; 2^4), (2^4; 2^2)$, and $(2^2; 2)$ represent the four steps of repeatedly taking square roots starting from $2^{16}$. Such computational sequences are common, with zkEVMs and zkVMs being other notable examples.

The computational expense of generating a SNARK proof for $(x; w)$ is a major challenge. This inefficiency is compounded when multiple instances satisfy the relation, as generating separate proofs for each instance is costly. For example, creating two separate SNARK proofs for $(x_1; w_1)$ and $(x_2; w_2)$, both satisfying the same relation $\mathcal{R}$, can be prohibitively expensive.

In comparison, a folding scheme creates a single new pair $(x_3; w_3)$ from $(x_1; w_1)$ and $(x_2; w_2)$ that also satisfies the relation $\mathcal{R}$. The SNARK proof for the folded $(x_3; w_3)$ then not only validates the new instance but also proves the relations for the original pairs $(x_1; w_1)$ and $(x_2; w_2)$. Generating and verifying folding proofs is faster and cheaper than doing the same with SNARK proofs. Therefore, instead of recursively generating and verifying SNARK proofs, the process is optimized by using folding proofs for intermediate steps before creating a final SNARK proof. The example shown here demonstrates 2-to-1 folding, but some folding schemes support n-to-1 folding.

Folding generally employs a technique called Random Linear Combination (RLC), where values are combined using a random challenge. To ensure succinctness and to keep the witness hidden, the prover applies the RLC to commitments of the witness, which means the commitment must be additively homomorphic.
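To illustrate why additive homomorphism matters, here is a Pedersen-style toy commitment over $\mathbb{Z}_p^*$; the modulus, generators, and values are illustrative stand-ins, not secure parameters:

```python
# A minimal sketch of an additively homomorphic commitment.
p = 1_000_003            # toy prime modulus (not a secure size)
g, h = 2, 3              # toy "generators"

def commit(m, r):
    # com(m, r) = g^m * h^r (mod p)
    return pow(g, m, p) * pow(h, r, p) % p

m1, r1 = 42, 7
m2, r2 = 58, 11
alpha = 5                # folding challenge

# Random linear combination computed on the commitments alone...
folded_com = commit(m1, r1) * pow(commit(m2, r2), alpha, p) % p
# ...equals the commitment to the folded opening:
assert folded_com == commit(m1 + alpha * m2, r1 + alpha * r2)
```

This identity is what lets a folding verifier combine commitments without ever seeing the witnesses.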

Since FRI-based STARKs use Merkle trees as commitments, which are not additively homomorphic, folding schemes cannot be used in FRI-based STARK systems.

Module SIS problem

A lattice is a set of points that follows a repeating pattern, defined as:

$$L(\bm{b}_1, \bm{b}_2, \dots, \bm{b}_n) = \left\{ \sum_{i=1}^n x_i \bm{b}_i : x_i \in \mathbb{Z} \right\}$$

Here, the set $\{\bm{b}_1, \bm{b}_2, \dots, \bm{b}_n\} \subset \mathbb{R}^d$ is called the basis, $n$ is the rank, and $d$ is the dimension, satisfying $n \le d$. The basis vectors must be linearly independent:

$$x_1\bm{b}_1 + x_2\bm{b}_2 + \cdots + x_n\bm{b}_n = \bm{0} \implies x_1 = x_2 = \cdots = x_n = 0$$

A given lattice can be generated by different bases. One notable problem on lattices is the Shortest Vector Problem (SVP): given a basis, find the shortest non-zero vector in the lattice it generates. For instance:

$$\bm{b}_1 = (64, 218, 133), \quad \bm{b}_2 = (71, 205, 111), \quad \bm{b}_3 = (28, -48, -84)$$
$$\text{Lattice} = \{ x_1\bm{b}_1 + x_2\bm{b}_2 + x_3\bm{b}_3 \mid x_1, x_2, x_3 \in \mathbb{Z} \}$$
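The SVP solution for this basis (the coefficients are given later in this article) can be verified directly:

```python
# Sanity check: the short vector (1, 3, -1) really is an integer
# combination of b1, b2, b3.
b1 = (64, 218, 133)
b2 = (71, 205, 111)
b3 = (28, -48, -84)
x1, x2, x3 = -322, 323, -83

v = tuple(x1*a + x2*b + x3*c for a, b, c in zip(b1, b2, b3))
assert v == (1, 3, -1)
```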

The Short Integer Solution (SIS) problem is defined as follows: Given a uniformly random integer matrix $\bm{A} \in \mathbb{Z}_q^{\kappa \times m}$ and a bound $B \in \mathbb{Z}$, find a non-zero integer vector $\bm{x} \in \mathbb{Z}^m$ such that $\bm{A} \cdot \bm{x} = \bm{0} \in \mathbb{Z}_q^{\kappa}$ and $0 < \|\bm{x}\| \le B$.
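A toy SIS instance may make the definition concrete. The checker below uses the infinity norm for simplicity, and the instance is built around a planted short vector so that a solution is known; all parameters are illustrative:

```python
import random
random.seed(0)

q, kappa, m, B = 97, 2, 6, 4   # toy parameters, not secure

def inf_norm(x):
    return max(abs(c) for c in x)

def is_sis_solution(A, x):
    # A solution must be non-zero, short, and in the kernel of A mod q.
    if not (0 < inf_norm(x) <= B):
        return False
    return all(sum(a*c for a, c in zip(row, x)) % q == 0 for row in A)

# Plant a known short vector x (last entry 1) and build A around it,
# so the instance has a solution we can verify.
x = [random.randrange(-B, B + 1) for _ in range(m - 1)] + [1]
A = [[random.randrange(q) for _ in range(m - 1)] for _ in range(kappa)]
for row in A:
    row.append(-sum(a*c for a, c in zip(row, x)) % q)

assert is_sis_solution(A, x)
assert not is_sis_solution(A, [0]*m)                # zero vector excluded
assert not is_sis_solution(A, [B + 1] + [0]*(m-1))  # norm too large
```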

Ajtai demonstrated a worst-case-to-average-case reduction, showing that solving random SIS instances implies solving worst-case lattice problems such as SVP. The key insight is that the random matrix $\bm{A}$ allows uniform sampling of instances while retaining the hardness of lattice problems, making them practical for cryptographic applications. This was a pivotal step in advancing lattice-based cryptography.

The Module SIS (M-SIS) problem with bound $B$, a generalization of SIS, serves as the foundation for LatticeFold.
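M-SIS replaces integers with elements of the ring $R_q = \mathbb{Z}_q[X]/(X^d + 1)$ (its formal definition is given near the end of this article). A sketch of multiplication in this ring, where $X^d$ wraps around to $-1$:

```python
# Negacyclic multiplication in R_q = Z_q[X]/(X^d + 1); toy parameters.
q, d = 97, 4

def ring_mul(a, b):
    # a, b: coefficient lists of length d (lowest degree first)
    res = [0] * (2 * d - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] += ai * bj
    out = res[:d]
    for i in range(d, 2 * d - 1):
        out[i - d] -= res[i]      # X^(d+k) = -X^k, since X^d = -1
    return [c % q for c in out]

# X * X^(d-1) = X^d = -1 in R_q
X = [0, 1, 0, 0]
Xd1 = [0, 0, 0, 1]
assert ring_mul(X, Xd1) == [(-1) % q, 0, 0, 0]
```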

Ajtai Commitment

In LatticeFold, the matrix $\bm{A}$ and witness vector $\bm{w}$ form a commitment $\bm{A} \cdot \bm{w}$, called an Ajtai commitment. A commitment scheme requires the following properties:

  • Hiding: The original message cannot be inferred from the commitment. This is satisfied if $\bm{A}$ is randomly sampled.

  • Binding: Each commitment maps to one original message with high probability. Producing the same commitment from two different $\bm{x}_1$ and $\bm{x}_2$ is equivalent to solving M-SIS with bound $2B$, which is assumed to be infeasible:

$$\bm{A} \cdot \bm{x}_1 = \bm{A} \cdot \bm{x}_2 \text{ for } \bm{x}_1 \ne \bm{x}_2 \;\leftrightarrow\; \bm{A} \cdot (\bm{x}_1 - \bm{x}_2) = \bm{0}, \text{ where } 0 < \|\bm{x}_1 - \bm{x}_2\| \le 2B$$

  • Compression: The commitment size is smaller than the original message. This is achieved if $\kappa < m$.
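A toy Ajtai-style commitment illustrating additive homomorphism and compression. For brevity this sketch works over $\mathbb{Z}_q$ rather than the ring $R_q$, and the parameters are illustrative only:

```python
import random
random.seed(0)

q, kappa, m = 97, 2, 8   # toy parameters, not secure
A = [[random.randrange(q) for _ in range(m)] for _ in range(kappa)]

def commit(f):
    # Ajtai-style commitment: com(f) = A . f (mod q)
    return tuple(sum(a*c for a, c in zip(row, f)) % q for row in A)

f1 = [random.randrange(-3, 4) for _ in range(m)]
f2 = [random.randrange(-3, 4) for _ in range(m)]
alpha = 5

# Additive homomorphism: com(f1) + alpha*com(f2) = com(f1 + alpha*f2)
lhs = tuple((c1 + alpha * c2) % q for c1, c2 in zip(commit(f1), commit(f2)))
rhs = commit([a + alpha * b for a, b in zip(f1, f2)])
assert lhs == rhs

# Compression: a kappa-element commitment for an m-element witness
assert kappa < m
```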

Additionally, unlike SIS, M-SIS leverages rings instead of fields, introducing challenges in proving knowledge soundness. Specifically, not all ring elements are invertible, and even if they are, their norms can become excessively large, necessitating solutions to address this.

The Ajtai commitment supports additive homomorphism, making it suitable for folding schemes, which use random linear combinations. To preserve the binding property, the norm of the random linear combination of two witness vectors must stay within the bound $B$. If the combination is done naively, as shown below, the norm may grow beyond $B$:

$$\bm{A} \cdot \bm{w}_1 + \bm{A} \cdot \alpha \cdot \bm{w}_2 = \bm{A} \cdot (\bm{w}_1 + \alpha \cdot \bm{w}_2), \text{ where } 0 < \|\bm{w}_1 + \alpha \cdot \bm{w}_2\| < (1 + \alpha) \cdot B$$

In LatticeFold, the $L^\infty$ norm (infinity norm) is used for norm calculations. For a vector $\bm{v} = [v_1, v_2, \dots, v_n]$, the infinity norm is defined as:

$$\|\bm{v}\|_\infty = \max_i |v_i|$$

From this point onward, the infinity norm is used even where the $\infty$ symbol is omitted.

Protocol Explanation

LatticeFold is divided into three steps: Expansion, Decomposition, and Fold.

$$\mathcal{R}_{comp}^{B} = \left\{ (\bm{A} \cdot \vec{f}; \vec{f}) : \bm{A} \in R_q^{\kappa \times m} \land \vec{f} \in R_q^m \land 0 \le \|\vec{f}\| < B \right\}$$

$$\mathcal{R}_{acc}^{B} = \left\{ (\bm{A} \cdot \vec{f}, \bm{r}, v; \vec{f}) : (\bm{A} \cdot \vec{f}; \vec{f}) \in \mathcal{R}_{comp}^{B} \land \bm{r} \in R_q^{\log m} \land v \in R_q \land \mathsf{mle}[\hat{f}](\bm{r}) = v \right\}$$

The difference between the two relations lies in $(\bm{r}, v)$. For simplicity, the public value $x$ is omitted.

Expansion

In the Expansion step, $\mathcal{R}_{comp}^{B}$ is transformed into $\mathcal{R}_{acc}^{B}$ using the zero vector. This transformation is necessary because, in the Fold step, the inputs must conform to the format of $\mathcal{R}_{acc}$.

$$(\bm{A} \cdot \vec{f}; \vec{f}) \rightarrow (\bm{A} \cdot \vec{f}, \bm{0}, \mathsf{mle}[\hat{f}](\bm{0}); \vec{f})$$
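A sketch of multilinear-extension evaluation, which is all the Expansion step needs: on Boolean inputs the MLE agrees with the underlying table, so $\mathsf{mle}[\hat{f}](\bm{0})$ is simply the first entry of $\vec{f}$. The variable ordering below is an assumption chosen for illustration:

```python
# Multilinear extension evaluation over Z_q by successive interpolation.
q = 97

def mle_eval(f, r):
    # f: table of 2^n values; r: evaluation point in Z_q^n
    vals = list(f)
    for rj in r:
        half = len(vals) // 2
        vals = [((1 - rj) * vals[i] + rj * vals[half + i]) % q
                for i in range(half)]
    return vals[0]

f = [5, 7, 11, 13]            # m = 4, log m = 2
# On Boolean inputs the MLE agrees with the table...
assert mle_eval(f, [0, 0]) == 5
assert mle_eval(f, [1, 1]) == 13
# ...so the value claimed in the Expansion step, mle[f](0), is just f[0].
assert mle_eval(f, [0, 0]) == f[0]
```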

Decomposition

The Decomposition step takes $2 \times \mathcal{R}_{acc}^{B}$ as input and produces $2k \times \mathcal{R}_{acc}^{b}$ (see Fig 6).

$$2 \times (\bm{A} \cdot \vec{f}_i, \bm{r}_i, \mathsf{mle}[\hat{f}_i](\bm{r}_i); \vec{f}_i) \rightarrow 2k \times (\bm{A} \cdot \vec{f}_j, \bm{r}_j, \mathsf{mle}[\hat{f}_j](\bm{r}_j); \vec{f}_j)$$

The Decomposition proceeds through the steps below for each relation:

  1. The prover provides $\bm{cm}_0, \dots, \bm{cm}_{k-1}, v_0, \dots, v_{k-1}$ to the verifier, satisfying the following conditions:

$$\vec{f} = \sum_{i=0}^{k-1} b^i \cdot \vec{f}_i, \text{ where } 0 \le \|\vec{f}_i\| < b$$
$$\bm{cm}_i = \bm{A} \cdot \vec{f}_i, \qquad v_i = \mathsf{mle}[\hat{f}_i](\bm{r})$$
  2. The verifier checks the following conditions:

$$\bm{A} \cdot \vec{f} \stackrel{?}{=} \sum_{i=0}^{k-1} b^i \cdot \bm{cm}_i \;\land\; v \stackrel{?}{=} \sum_{i=0}^{k-1} b^i \cdot v_i$$

If the conditions above are satisfied, each $\mathcal{R}_{acc}^{B}$ is transformed into $k$ instances of $\mathcal{R}_{acc}^{b}$.

$$(\bm{A} \cdot \vec{f}, \bm{r}, \mathsf{mle}[\hat{f}](\bm{r}); \vec{f}) \rightarrow k \times (\bm{A} \cdot \vec{f}_i, \bm{r}, \mathsf{mle}[\hat{f}_i](\bm{r}); \vec{f}_i)$$

Applying this to both inputs increases the count from $2 \times \mathcal{R}_{acc}^{B}$ to $2k \times \mathcal{R}_{acc}^{b}$. This restricts the norm bound to $b$, which is smaller than $B$, preventing the norm from growing excessively during the Fold step. However, note that the range check for each $\vec{f}_i$ has not yet been performed.
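The Decomposition step can be sketched as base-$b$ digit decomposition of the witness. For brevity this sketch uses standard non-negative digits (entries assumed in $[0, b^k)$), so each part has infinity norm below $b$:

```python
import random
random.seed(0)

b, k = 4, 3                  # toy parameters; entries in [0, b^k)

def decompose(f):
    # Split f into k digit vectors f_i with f = sum_i b^i * f_i.
    parts = []
    for _ in range(k):
        parts.append([c % b for c in f])
        f = [c // b for c in f]
    return parts

f = [random.randrange(b**k) for _ in range(6)]
parts = decompose(f)

# Each digit vector has small infinity norm...
assert all(max(p) < b for p in parts)
# ...and recombines to the original witness: f = sum b^i * f_i
recombined = [sum(b**i * p[j] for i, p in enumerate(parts))
              for j in range(len(f))]
assert recombined == f
```

The verifier's commitment check $\bm{A} \cdot \vec{f} = \sum_i b^i \cdot \bm{cm}_i$ then follows from linearity of the commitment.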

Fold

To verify that the $\vec{f}_i$ values from the previous step lie within the range $(-b, b)$, the following polynomial is used:

$$P(X) = \prod_{i=-(b-1)}^{b-1} (X - i)$$

Evaluating $P(X)$ at any value inside $(-b, b)$ yields $0$, so the range check on $\vec{f}_i$ succeeds exactly when every entry evaluates to $0$ under $P$ (this is used in Step 2 below).
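A quick check of this property:

```python
# P(X) vanishes exactly on the integers in (-b, b).
b = 4

def P(x):
    out = 1
    for i in range(-(b - 1), b):
        out *= (x - i)
    return out

assert all(P(v) == 0 for v in range(-(b - 1), b))   # in range  -> 0
assert P(b) != 0 and P(-b) != 0                     # out of range -> nonzero
```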

As mentioned earlier, it is also crucial to sample random values carefully. For this purpose, we sample from $C_{\mathsf{small}}$, known as a strong sampling set. Multiplying by a $\rho$ sampled from this set increases the norm by at most a factor of $C$, known as the expansion factor. If $C \cdot b \cdot 2k < B$, the norm of the folded witness remains less than $B$. Formally:

$$\left\|\sum_{i=0}^{2k-1}\rho_i \cdot \vec{f}_i\right\| \le \sum_{i=0}^{2k-1}\|\rho_i \cdot \vec{f}_i\| \le \sum_{i=0}^{2k-1} C \cdot \|\vec{f}_i\| \le \sum_{i=0}^{2k-1} C \cdot b < B$$
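A small numeric illustration of this bound. Challenges are drawn from $\{-1, 0, 1\}$ as a stand-in for a strong sampling set with expansion factor $C = 1$; real parameters depend on the ring and are an assumption here:

```python
import random
random.seed(0)

b, k, C = 4, 3, 1
B = C * b * 2 * k + 1              # any B with C * b * 2k < B

def inf_norm(v):
    return max(abs(c) for c in v)

fs = [[random.randrange(-(b - 1), b) for _ in range(6)] for _ in range(2 * k)]
rhos = [random.choice([-1, 0, 1]) for _ in range(2 * k)]

folded = [sum(r * f[j] for r, f in zip(rhos, fs)) for j in range(6)]

# ||sum rho_i f_i|| <= sum C * ||f_i|| < B
assert inf_norm(folded) <= sum(C * inf_norm(f) for f in fs) < B
```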

This step takes $2k \times \mathcal{R}_{acc}^{b}$ as input and produces $\mathcal{R}_{acc}^{B}$.

$$2k \times (\bm{A} \cdot \vec{f}_i, \bm{r}_i, \mathsf{mle}[\hat{f}_i](\bm{r}_i); \vec{f}_i) \rightarrow (\bm{A} \cdot \vec{f'}, \bm{r}_{out}, \mathsf{mle}[\hat{f'}](\bm{r}_{out}); \vec{f'}),$$
$$\text{where } \vec{f'} = \sum_{i=0}^{2k-1}\rho_i \cdot \vec{f}_i$$

The Fold proceeds through the steps below:

  1. The verifier samples $\alpha_0, \dots, \alpha_{2k-1}, \mu_0, \dots, \mu_{2k-1}, \bm{\beta}$ and sends them to the prover.

  2. The prover runs the sumcheck protocol to prove the following:

$$\sum_{\bm{x} \in \{0,1\}^{\log m}} g(\bm{x}) = \sum_{i=0}^{2k-1} \alpha_i \cdot v_i$$
$$\text{where } g(\bm{x}) = g_{\mathsf{eval}}(\bm{x}) + g_{\mathsf{norm}}(\bm{x})$$
$$g_{\mathsf{eval}}(\bm{x}) = \sum_{i=0}^{2k-1} \alpha_i \cdot \mathsf{eq}(\bm{r}_i, \bm{x}) \cdot \mathsf{mle}[\hat{f}_i](\bm{x})$$
$$g_{\mathsf{norm}}(\bm{x}) = \sum_{i=0}^{2k-1} \mu_i \cdot \mathsf{eq}(\bm{\beta}, \bm{x}) \cdot \prod_{j=-(b-1)}^{b-1} \left( \mathsf{mle}[\hat{f}_i](\bm{x}) - j \right)$$
  3. At the end of the sumcheck protocol, the following evaluation claim is obtained:

$$g(\bm{r}_{out}) \stackrel{?}{=} s$$
  4. The prover sends $\theta_0, \dots, \theta_{2k-1}$ to the verifier, where:

$$\theta_i = \mathsf{mle}[\hat{f}_i](\bm{r}_{out})$$
  5. The verifier checks the following condition:

$$s \stackrel{?}{=} \sum_{i=0}^{2k-1} \alpha_i \cdot \mathsf{eq}(\bm{r}_i, \bm{r}_{out}) \cdot \theta_i + \mu_i \cdot \mathsf{eq}(\bm{\beta}, \bm{r}_{out}) \cdot \prod_{j=-(b-1)}^{b-1}(\theta_i - j)$$
  6. The verifier samples $\rho_0, \dots, \rho_{2k-1}$ from $C_{\mathsf{small}}$ and sends them to the prover. Using these, a new relation is created:

$$(\bm{A} \cdot \vec{f'}, \bm{r}_{out}, \mathsf{mle}[\hat{f'}](\bm{r}_{out}); \vec{f'}), \text{ where } \vec{f'} = \sum_{i=0}^{2k-1}\rho_i \cdot \vec{f}_i$$
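The core sumcheck identity from Step 2 can be verified by brute force over the Boolean hypercube (without the interactive protocol): $g_{\mathsf{norm}}$ vanishes on every cube point because each witness entry lies in $(-b, b)$, and summing $g_{\mathsf{eval}}$ against $\mathsf{eq}(\bm{r}_i, \cdot)$ recovers $v_i$. Toy parameters, with a naive MLE convention assumed:

```python
import random
random.seed(0)

q, b, logm = 97, 2, 2
m, K = 2 ** logm, 4                       # K = 2k instances being folded

def mle_eval(f, r):
    # Multilinear extension of table f evaluated at r over Z_q.
    vals = list(f)
    for rj in r:
        half = len(vals) // 2
        vals = [((1 - rj) * vals[i] + rj * vals[half + i]) % q
                for i in range(half)]
    return vals[0]

def eq(r, x):
    out = 1
    for rj, xj in zip(r, x):
        out = out * (rj * xj + (1 - rj) * (1 - xj)) % q
    return out

cube = [[(i >> j) & 1 for j in range(logm)] for i in range(m)]

# In-range witnesses (entries in (-b, b), stored mod q) and claimed values.
fs = [[random.randrange(-(b - 1), b) % q for _ in range(m)] for _ in range(K)]
rs = [[random.randrange(q) for _ in range(logm)] for _ in range(K)]
vs = [mle_eval(f, r) for f, r in zip(fs, rs)]

alphas = [random.randrange(q) for _ in range(K)]
mus = [random.randrange(q) for _ in range(K)]
beta = [random.randrange(q) for _ in range(logm)]

def g(x):
    total = 0
    for i in range(K):
        e = mle_eval(fs[i], x)
        total += alphas[i] * eq(rs[i], x) * e          # g_eval term
        prod = 1
        for j in range(-(b - 1), b):                   # g_norm term
            prod = prod * (e - j) % q
        total += mus[i] * eq(beta, x) * prod
    return total % q

lhs = sum(g(x) for x in cube) % q
rhs = sum(a * v for a, v in zip(alphas, vs)) % q
assert lhs == rhs
```

On each Boolean point the range product hits a zero factor, so only the evaluation part contributes to the sum.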

Conclusion

Fig 2. Proof generation using recursion
Fig 3. Proof generation using folding scheme
Fig 4. 2-to-1 folding example
Fig 5. Examples of various bases, source: https://www.esat.kuleuven.be/cosic/blog/introduction-to-lattices/

Using the $L^2$-norm of all the basis vectors, the solution to the SVP for this basis is found to be $(1, 3, -1)$, formed by $x_1 = -322$, $x_2 = 323$, $x_3 = -83$. This problem becomes significantly harder with poorly chosen bases and in higher dimensions, posing challenges even for quantum computers.

The Short Integer Solution (SIS) problem given in the Background section was introduced in Ajtai's 1999 paper.

The M-SIS problem is defined as: Given $R = \mathbb{Z}[X]/(X^d + 1)$ and $R_q = R/qR$, with a uniformly random matrix $\bm{A} \in R_q^{\kappa \times m}$ and $B \in R_q$, find a non-zero vector $\bm{x} \in R_q^m$ such that $\bm{A} \cdot \bm{x} = \bm{0} \in R_q^{\kappa}$ and $0 < \|\bm{x}\| \le B$. Here, $X^d + 1$ is a cyclotomic polynomial, specifically one where $d$ is a power of 2.

Fig 6. LatticeFold folding process, where comp stands for computation and acc stands for accumulation

The relation definitions in the Protocol Explanation section are taken from the LatticeFold paper: first the multilinear extension (MLE) is defined, and then the two relations.


Although not covered in detail here, the paper also discusses the use of extension fields to enable smaller field sizes and methods to apply this approach to HyperNova's CCS. However, some aspects of LatticeFold limit its application to Protostar, leaving it an open problem. The significance of LatticeFold lies in being the first to apply lattice-based cryptography to folding schemes, paving the way for incorporating lattice-based folding schemes into STARKs.

Written by Ryan Kim from A41.
