The primary motivation for performing Partial Witness Reduction and Partial FFT is to partition the data structures A, B, C, z across d devices. By doing so, each device stores only a fraction of the full data, significantly reducing memory overhead and enabling large-scale proving even when the entire witness cannot fit into a single GPU's memory.
In Partial Witness Reduction and Partial Inverse FFT, the matrices A, B, C are typically sparse, making it difficult to precisely estimate the memory savings from their partitioning. However, the vector z is dense and evenly split across d devices, so its memory overhead is reduced by a factor of d.
Protocol
Assuming $m + 1$ is divisible by $d$ (the number of devices), the polynomials can be decomposed as follows:
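As a sketch, writing $a_i(X), b_i(X), c_i(X)$ for the column polynomials of $A, B, C$ (notation assumed here, matching the MSM bases listed later):

$$a^{(k)}(X) = \sum_{i=k(m+1)/d}^{(k+1)(m+1)/d - 1} z_i\, a_i(X), \qquad a(X) = \sum_{k=0}^{d-1} a^{(k)}(X),$$

and analogously for $b^{(k)}(X)$ and $c^{(k)}(X)$: device $k$ computes its partial polynomials from its own witness slice $z_{k(m+1)/d}, \ldots, z_{(k+1)(m+1)/d - 1}$ alone.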
In protocols like DIZK, distributing the FFT requires 3 communications per FFT, which leads to substantial overhead. However, according to data from ICICLE-Snark, the dominant cost in Groth16 proving is MSM, not FFT.
Therefore, we choose not to split the FFT, and instead perform a Full FFT on a single device after reconstructing the full polynomials. This design reduces communication complexity while focusing optimization effort on the actual bottleneck.
Protocol
After the partial inverse FFT, the devices perform a Reduce operation to reconstruct the full polynomials a(X), b(X), c(X) on a single leader device.
We use Reduce instead of AllReduce for the Full FFT step in order to minimize data communication cost. As a result, a single device is responsible for performing the element-wise multiplication and the forward FFT on the fully reduced polynomials.
Example
Suppose we have 4 devices, and each device holds partial polynomials $a^{(k)}(X), b^{(k)}(X), c^{(k)}(X)$ for $k = 0, 1, 2, 3$. To proceed with proof generation, the leader device must eventually obtain the full sums:

$$a(X) = \sum_{k=0}^{3} a^{(k)}(X), \qquad b(X) = \sum_{k=0}^{3} b^{(k)}(X), \qquad c(X) = \sum_{k=0}^{3} c^{(k)}(X).$$
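Below is a minimal sketch of how this Reduce could be issued with NCCL. The names are illustrative assumptions: fr_add_inplace stands for a custom CUDA kernel performing coefficient-wise addition in the BN254 scalar field (sketched in the Computation Cost section below). NCCL's built-in ncclReduce with ncclSum cannot be used here, since limb-wise integer addition is not field addition.

// Hypothetical Reduce for one polynomial: rank 0 accumulates the
// partial polynomials a^(k)(X) from the other GPUs; repeat for b, c.
// `acc_poly` initially holds rank 0's own partial polynomial, and
// `staging_buf` is a scratch buffer of the same size (`poly_bytes`).
if (rank == 0) {
    for (int src = 1; src < num_gpus; ++src) {
        ncclRecv(staging_buf, poly_bytes, ncclUint8, src, comm, stream);
        // Assumed kernel: acc_poly[i] = (acc_poly[i] + staging_buf[i]) mod r
        fr_add_inplace<<<num_blocks, block_size, 0, stream>>>(
            (uint64_t*)acc_poly, (const uint64_t*)staging_buf, num_coeffs);
    }
} else {
    ncclSend(my_partial_poly, poly_bytes, ncclUint8, 0, comm, stream);
}

Because the recvs and the addition kernels are enqueued on the same stream, each accumulation naturally waits for its chunk to arrive.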
Each MSM can be intuitively split across devices. When performing the 5 MSMs involved in Groth16 proving, both the scalar inputs $z_i$ and $h_i$ are partitioned across d devices, reducing their memory footprint by a factor of d. As a result, the corresponding base points used in each MSM are also partitioned proportionally, leading to a significant decrease in per-device memory usage and enabling more efficient multi-GPU computation.
The first four MSMs can be computed independently. The last one requires the precomputed values $h_i$, which each device can hold after the Send & Recv operations.
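As a sketch under the same chunking as before, the first MSM splits as

$$\sum_{i=0}^{m} z_i [a_i(x)]_1 = \sum_{k=0}^{d-1} \left( \sum_{i=k(m+1)/d}^{(k+1)(m+1)/d - 1} z_i [a_i(x)]_1 \right),$$

where device $k$ computes the $k$-th inner sum from its local scalars and base points. Combining the $d$ partial results is a single AllReduce over $O(d)$ group elements, which is why the AllReduce row in the communication table below is so cheap.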
Example
const int field_element_size = 32;   // 32 bytes per BN254 field element
const int num_coeffs_per_gpu = ...;  // e.g., 2^23 / num_gpus
const size_t chunk_size = (size_t)num_coeffs_per_gpu * field_element_size;

if (rank == 0) {
    // `full_poly_data` points to the full polynomial coefficients (uint8_t*).
    // Grouping the sends lets NCCL post them to all peers concurrently.
    ncclGroupStart();
    for (int i = 0; i < num_gpus; ++i) {
        if (i != rank) {
            uint8_t* chunk_ptr = full_poly_data + i * chunk_size;
            ncclSend(chunk_ptr, chunk_size, ncclUint8, i, comm, stream);
        }
    }
    ncclGroupEnd();
    // Rank 0 keeps its own chunk in place; no copy is needed.
} else {
    // `my_poly_chunk` will hold this GPU's assigned coefficients
    ncclRecv(my_poly_chunk, chunk_size, ncclUint8, 0, comm, stream);
}
Putting It Together
In the following expressions, the MSM sums are the parts that must be reduced across devices. The remaining parts can be computed on the host to finalize the Groth16 proof:
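As a sketch, following the standard Groth16 prover equations in this document's notation ($r$ and $s$ denote the prover's blinding scalars, and $B_{G_1}$ denotes $B$ computed over $G_1$ using the $[b_i(x)]_1$ bases):

$$A = [\alpha]_1 + r[\delta]_1 + \sum_{i=0}^{m} z_i [a_i(x)]_1$$

$$B = [\beta]_2 + s[\delta]_2 + \sum_{i=0}^{m} z_i [b_i(x)]_2$$

$$C = sA + rB_{G_1} - rs[\delta]_1 + \sum_{i=\ell+1}^{m} z_i \left[\frac{\beta a_i(x) + \alpha b_i(x) + c_i(x)}{\delta}\right]_1 + \sum_{i=0}^{n-2} h_i \left[\frac{L'_{2i+1}(x)}{\delta}\right]_1$$

The five MSM sums (including the $[b_i(x)]_1$ MSM hidden inside $B_{G_1}$) are the parts reduced across devices; the remaining fixed-point terms are cheap host-side additions.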
In protocols like DIZK, distributing the FFT requires many communications per FFT. In contrast, our protocol communicates only during the Reduce and Send & Recv steps, while the subsequent AllReduce involves only a constant number of group elements per device. As a result, the total communication cost is significantly lower than that of DIZK. Moreover, this distribution strategy can be implemented efficiently in an SPMD style.
| Communication Step | Communication Cost |
| --- | --- |
| Reduce | $3 \times (d-1) \times n$ |
| Send & Recv | $(d-1) \times n/d$ |
| AllReduce | $O(d)$ |
| DIZK's AllToAll | $3 \times 3 \times 2 \times (d-1) \times n$ |
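For example, with $d = 4$ and $n = 2^{23}$ over BN254 (32 B per coefficient), the Reduce step moves $3 \times 3 \times 2^{23} \times 32\ \text{B} = 2304\ \text{MB}$ in total, which matches the RISC0 analysis below.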
Analysis: RISC0 Groth16 proof
Currently, the stark_verify.circom used in RISC0 has the following characteristics:
> circom --r1cs stark_verify.circom
template instances: 349
non-linear constraints: 5676573
linear constraints: 0
public inputs: 0
private inputs: 25749 (22333 belong to witness)
public outputs: 5
wires: 5635930
labels: 10298490
Written successfully: ./stark_verify.r1cs
Everything went okay
The number of rows in the A, B, C matrices is 5,676,573, which means an SRS of size $2^{23}$ is required ($2^{23} = 8{,}388{,}608$ is the next power of two above 5,676,573). The number of columns is 5,635,930.
Reduce Analysis
We now analyze the cost of the first and most expensive communication step in our distributed system: the Reduce operation.
Assume 4 GPUs each compute and store partial results of the polynomials $a^{(k)}(X), b^{(k)}(X), c^{(k)}(X)$, and together they must aggregate them into full polynomials of size $2^{23}$ over the BN254 field. The total volume of data involved is:
3 polynomials per GPU × 256 MB each to be sent to the leader GPU = 768 MB per GPU
Total communication volume on receiver side of leader GPU = 3 GPUs × 768 MB = 2304 MB
The goal of Reduce is to sum these partial polynomials on the leader device, so that one leader GPU retains the complete 768 MB result (i.e., the full a(X), b(X), c(X)).
Computation Cost
Field additions in BN254 are extremely lightweight on GPUs. Each coefficient-wise addition pass, for a(X), b(X), and c(X) respectively, takes less than 1 ms and is negligible in the overall runtime.
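For illustration, here is a minimal sketch of the fr_add_inplace kernel assumed in the Reduce sketch above: coefficient-wise addition modulo the BN254 scalar field r, with each coefficient stored as four little-endian 64-bit limbs and inputs assumed already reduced below r. This is a naive one-thread-per-coefficient illustration, not a tuned library kernel:

#include <cstdint>

// BN254 scalar field modulus r, as four little-endian 64-bit limbs.
__constant__ uint64_t FR_MOD[4] = {
    0x43e1f593f0000001ULL, 0x2833e84879b97091ULL,
    0xb85045b68181585dULL, 0x30644e72e131a029ULL};

// acc[i] = (acc[i] + other[i]) mod r, one thread per coefficient.
__global__ void fr_add_inplace(uint64_t* acc, const uint64_t* other,
                               size_t num_coeffs) {
    size_t idx = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (idx >= num_coeffs) return;
    uint64_t* a = acc + 4 * idx;
    const uint64_t* b = other + 4 * idx;

    // 256-bit addition with manual carry propagation.
    uint64_t s[4];
    uint64_t carry = 0;
    for (int i = 0; i < 4; ++i) {
        uint64_t t = a[i] + b[i];
        uint64_t c1 = (t < a[i]);    // carry out of a[i] + b[i]
        s[i] = t + carry;
        carry = c1 | (s[i] < t);     // carry out of the full limb sum
    }

    // Conditional subtraction: since both inputs are < r, the sum is < 2r,
    // so at most one subtraction of r is needed.
    bool geq = (carry != 0);
    if (!geq) {
        geq = true;                  // s == r also needs reduction
        for (int i = 3; i >= 0; --i) {
            if (s[i] != FR_MOD[i]) { geq = (s[i] > FR_MOD[i]); break; }
        }
    }
    if (geq) {
        uint64_t borrow = 0;
        for (int i = 0; i < 4; ++i) {
            uint64_t d = s[i] - FR_MOD[i];
            uint64_t b1 = (s[i] < FR_MOD[i]);
            s[i] = d - borrow;
            borrow = b1 | (d < borrow);
        }
    }
    for (int i = 0; i < 4; ++i) a[i] = s[i];
}

Modular addition is identical in standard and Montgomery representation, so the kernel composes with either; even this naive version stays well within the sub-millisecond budget quoted above.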
We chose PCIe Gen4 x16 because it offers a well-balanced trade-off between performance, cost, and ecosystem stability. Gen4 provides up to 32 GB/s of bandwidth per direction, which is sufficient for most real-world proving workloads, especially when combined with smart overlapping of computation and communication.
While bandwidth can become a performance bottleneck in some high-throughput scenarios, upgrading to newer generations like PCIe Gen5 or Gen6 introduces significant trade-offs in cost, complexity, and platform requirements. For now, Gen4 remains the most practical and widely supported option, but we remain open to adopting higher PCIe generations if communication overhead proves to be a critical limiting factor.
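As a rough sanity check on these figures: moving the 2304 MB of Reduce traffic computed above at an effective throughput of roughly 22 ~ 25 GB/s (a realistic fraction of the 32 GB/s peak) takes about 92 ~ 106 ms, which matches the measured range in the Performance Summary.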
Performance Summary
| Task | Time |
| --- | --- |
| Field addition | < 3 ms |
| PCIe communication (Reduce) | 92 ~ 106 ms |
| Total Execution Time | 95 ~ 109 ms |
According to ICICLE-Snark, a polynomial of degree $2^{23}$ takes approximately 774 ms for MSM. With four devices, each handling a degree-$2^{21}$ polynomial, the per-device MSM time is around 193 ms. If we overlap the Reduce step with two of these MSMs (about 386 ms in total, comfortably longer than the 92 ~ 106 ms Reduce), the communication overhead can be effectively hidden.
Estimation
Let’s perform a simple estimation based on the data from ICICLE-Snark.
Assumptions
$n$ and $m$ are $2^{22}$
The runtime for MSM over G2 is the same as that for G1.
The runtime for G1 MSM scales linearly with the degree.
The total proving time for Groth16 is the sum of the time taken for 5 MSMs (387 ms each), 1 FFT (10 ms), and 1 IFFT (10 ms).
The time for Reduce, Recv and Send is negligible.
Computation
If everything is computed serially, the total time is:
5 × 387 + 10 + 10 = 1955 ms
If we instead use the proposed scheme across 4 GPUs, the time becomes:
5 × (387 / 4) + 10 + 10 ≈ 504 ms
This shows that the proving time is reduced by approximately a factor of 4 (1955 / 504 ≈ 3.9).
Input Size
We do not include A, B, C in our input-size estimation, as their sparsity makes them difficult to quantify precisely. However, partitioning them will also contribute to reducing the overall memory requirement.
If everything is computed on a single device, the input size is around 1664 MB. Each component will consume memory as follows:
Witness vector $z$: $2^{22} \times 32\ \text{B} = 128\ \text{MB}$
MSM base points $([a_i(x)]_1)_{i=0}^{m}$: $2^{22} \times 64\ \text{B} = 256\ \text{MB}$
MSM base points $([b_i(x)]_1)_{i=0}^{m}$: $2^{22} \times 64\ \text{B} = 256\ \text{MB}$
MSM base points $([b_i(x)]_2)_{i=0}^{m}$: $2^{22} \times 128\ \text{B} = 512\ \text{MB}$
MSM base points $\left(\left[\tfrac{\beta a_i(x) + \alpha b_i(x) + c_i(x)}{\delta}\right]_1\right)_{i=\ell+1}^{m}$: $2^{22} \times 64\ \text{B} \approx 256\ \text{MB}$
MSM base points $\left(\left[\tfrac{L'_{2i+1}(x)}{\delta}\right]_1\right)_{i=0}^{n-2}$: $2^{22} \times 64\ \text{B} \approx 256\ \text{MB}$
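Summing these components: 128 + 256 + 256 + 512 + 256 + 256 = 1664 MB, matching the single-device figure above.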
If we instead use the proposed scheme across 4 GPUs, the input size becomes around 416 MB (1664 MB / 4) per device.
Runtime Memory Size
Here, we estimate only the memory required for MSM itself, excluding the additional memory needed for buckets in the Pippenger algorithm.
If everything is computed on a single device, and intermediate memory is released immediately after use, the main bottleneck becomes the MSM in G2, which requires 128 + 512 = 640 MB of memory (the witness scalars plus the G2 base points).
However, with the proposed scheme using 4 GPUs, this requirement drops to 32 + 128 = 160 MB per device (a quarter of each component). The Full FFT, including twiddle factors, requires an additional 256 MB on the leader device. Therefore, under this setup, the peak memory required per device is approximately 256 MB.