Ryan's Trick for Distributed Groth16
Please read Groth16 beforehand!
Bottlenecks
These are the three most computation-intensive parts that must be distributed:
1. Witness Reduction
This is required to compute the linear combinations of the QAP polynomials with the witness, i.e. $\sum_{i} a_i u_i(X)$, $\sum_{i} a_i v_i(X)$, and $\sum_{i} a_i w_i(X)$.
2. FFT / Inverse FFT
This is needed to compute the quotient polynomial $h(X) = \dfrac{\left(\sum_i a_i u_i(X)\right)\left(\sum_i a_i v_i(X)\right) - \sum_i a_i w_i(X)}{t(X)}$.
3. MSM
This is used to compute the proof elements $[A]_1$, $[B]_2$, and $[C]_1$ (recapped below), involving five MSMs, one of which depends on the coefficients of $h(X)$.
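For reference, here is a brief recap of the prover computation these three steps feed into, written in the notation of the original Groth16 paper (witness $(a_0, \dots, a_m)$, QAP polynomials $u_i, v_i, w_i$, target polynomial $t$, secrets $\alpha, \beta, \delta$, randomizers $r, s$); this notation is a convenient choice for the recap rather than the one used elsewhere on this page:

```latex
A = \alpha + \sum_{i=0}^{m} a_i u_i(x) + r\delta, \qquad
B = \beta + \sum_{i=0}^{m} a_i v_i(x) + s\delta,

C = \frac{\sum_{i=\ell+1}^{m} a_i\bigl(\beta u_i(x) + \alpha v_i(x) + w_i(x)\bigr) + h(x)\,t(x)}{\delta} + As + Br - rs\delta,

h(X) = \frac{\bigl(\sum_{i} a_i u_i(X)\bigr)\bigl(\sum_{i} a_i v_i(X)\bigr) - \sum_{i} a_i w_i(X)}{t(X)},
```

and the proof is $\pi = ([A]_1, [B]_2, [C]_1)$.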
Partial Witness Reduction + Partial Inverse FFT
Motivation
The primary motivation for performing Partial Witness Reduction and Partial FFT is to partition the data structures across $d$ devices. By doing so, each device only stores a fraction of the full data, significantly reducing memory overhead and enabling large-scale proving even when the entire witness cannot fit into a single GPU's memory.
In Partial Witness Reduction and Partial Inverse FFT, the matrices are typically sparse, making it difficult to precisely estimate the memory savings from partitioning them. However, the witness vector is dense and evenly split across $d$ devices, so its memory overhead is reduced by a factor of $d$.
Protocol
Assuming the witness length is divisible by $d$ (the number of devices), the polynomials can be decomposed as follows:
Thus, the matrices and vectors can be partitioned into $d$ parts:
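As a sketch of this decomposition (with hypothetical notation: witness $(a_0, \dots, a_m)$ and equally sized index sets $S_0, \dots, S_{d-1}$, one per device), each linear combination splits into per-device partial sums:

```latex
\sum_{i=0}^{m} a_i u_i(X) \;=\; \sum_{k=0}^{d-1} \; \underbrace{\sum_{i \in S_k} a_i u_i(X)}_{\text{computed on device } k}
```

and likewise for the $v_i$ and $w_i$ combinations. Device $k$ therefore only needs the witness entries indexed by $S_k$ and the corresponding matrix columns; by linearity, it can also run its inverse FFT on its own partial sums before the `Reduce` step.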
Examples
Suppose we are given the following matrices and witness vector:
Then the polynomials are constructed as follows:
Full FFT
Motivation

In protocols like DIZK, distributing the FFT requires 3 communication rounds per FFT, which leads to substantial overhead. However, according to data from ICICLE-Snark, the dominant cost in Groth16 proving is MSM, not FFT. Therefore, we choose not to split the FFT, and instead perform a Full FFT after reconstructing the full polynomials. This design reduces communication complexity while focusing optimization effort on the actual bottleneck.
Protocol
After the partial inverse FFT, each device performs a `Reduce` operation to reconstruct the full polynomials. We use `Reduce` instead of `AllReduce` for the Full FFT step in order to minimize data communication cost. As a result, a single device is responsible for performing the element-wise multiplication and the forward FFT on the fully reduced polynomials.
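A minimal sketch of this `Reduce` step in the same style as the NCCL snippet shown later on this page; `num_gpus`, `rank`, `comm`, and `stream` are assumed to be set up elsewhere, while `acc_poly`, `recv_buf`, `my_partial_poly`, `poly_bytes`, and `field_add_inplace` (an element-wise BN254 addition routine) are hypothetical names:

```cpp
if (rank == 0) {
  // Leader: receive each peer's partial polynomials and accumulate them into
  // `acc_poly`, which initially holds rank 0's own partial result.
  for (int peer = 1; peer < num_gpus; ++peer) {
    ncclRecv(recv_buf, poly_bytes, ncclUint8, peer, comm, stream);
    cudaStreamSynchronize(stream);                            // wait for the chunk
    field_add_inplace(acc_poly, recv_buf, poly_bytes / 32);   // acc_poly[i] += recv_buf[i] (mod p)
  }
  // `acc_poly` now holds the fully reduced polynomials; this device performs
  // the element-wise multiplication and the forward FFT.
} else {
  // Non-leader: send this device's partial polynomials to the leader.
  ncclSend(my_partial_poly, poly_bytes, ncclUint8, 0, comm, stream);
}
```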
Example
Suppose we have 4 devices, where device $k$ holds its partial polynomials for $k = 0, 1, 2, 3$. To proceed with proof generation, the full sums of these partial polynomials must eventually be obtained:
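For instance, writing device $k$'s three partial polynomials as $p^{(k)}(X)$, $q^{(k)}(X)$, $r^{(k)}(X)$ (hypothetical names), the `Reduce` step assembles on the leader device:

```latex
p(X) = \sum_{k=0}^{3} p^{(k)}(X), \qquad
q(X) = \sum_{k=0}^{3} q^{(k)}(X), \qquad
r(X) = \sum_{k=0}^{3} r^{(k)}(X).
```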
Partial MSM
Motivation
Each MSM can be intuitively split across devices. When performing the 5 MSMs involved in Groth16 proving, the scalar inputs, namely the witness vector and the coefficients of $h(X)$, are partitioned across $d$ devices, reducing their memory footprint by a factor of $d$. As a result, the corresponding base points used in each MSM are also partitioned proportionally, leading to a significant decrease in per-device memory usage and enabling more efficient multi-GPU computation.
Protocol
Assuming the MSM size is divisible by $d$:
The first four MSMs can be computed independently. The last one requires the precomputed coefficients of $h(X)$, which each device can hold after the `Send` & `Recv` operations.
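As a sketch, an MSM of size $n$ with base points $P_0, \dots, P_{n-1}$ and scalars $s_0, \dots, s_{n-1}$ (generic names, not tied to a specific one of the five MSMs) splits into contiguous chunks of size $n/d$, and the resulting group elements are combined with a single `AllReduce`:

```latex
\sum_{i=0}^{n-1} s_i P_i \;=\; \sum_{k=0}^{d-1} \; \underbrace{\sum_{i = k n/d}^{(k+1) n/d - 1} s_i P_i}_{\text{partial MSM on device } k}
```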
Example
```cpp
const int field_element_size = 32;    // 32 bytes per BN254 field element
const int num_coeffs_per_gpu = ...;   // e.g., 2^23 / num_gpus
const size_t chunk_size = num_coeffs_per_gpu * field_element_size;

if (rank == 0) {
  // `full_poly_data` points to the full polynomial coefficients (uint8_t*)
  for (int i = 0; i < num_gpus; ++i) {
    if (i != rank) {
      uint8_t* chunk_ptr = full_poly_data + i * chunk_size;
      ncclSend(chunk_ptr, chunk_size, ncclUint8, i, comm, stream);
    }
  }
} else {
  // `my_poly_chunk` will hold this GPU's assigned coefficients
  ncclRecv(my_poly_chunk, chunk_size, ncclUint8, 0, comm, stream);
}
```
Put Together
In the following expressions, red highlights indicate the parts that must be reduced across devices. The remaining parts can be computed on the host to finalize the Groth16 proof:

In protocols like DIZK, distributing the FFT requires many communications per FFT. In contrast, our protocol communicates only during the `Reduce`, `Send`, and `Recv` steps, while the subsequent `AllReduce` step involves only a constant number of group elements. As a result, the total communication cost is significantly lower than that of DIZK. Moreover, this distribution strategy can be efficiently implemented using SPMD. The communication steps to compare are:

- `Reduce`
- `Send` & `Recv`
- `AllReduce`
- DIZK's `AllToAll`
Analysis: RISC0 Groth16 proof
Currently, the stark_verify.circom used in RISC0 has the following characteristics:
```
> circom --r1cs stark_verify.circom
template instances: 349
non-linear constraints: 5676573
linear constraints: 0
public inputs: 0
private inputs: 25749 (22333 belong to witness)
public outputs: 5
wires: 5635930
labels: 10298490
Written successfully: ./stark_verify.r1cs
Everything went okay
```
The number of rows in the matrices is 5,676,573, which means an SRS of size $2^{23}$ is required. The number of columns is 5,635,930.
Reduce Analysis
We now analyze the cost of the first and most expensive communication step in our distributed system: the `Reduce` operation.
Assume 4 GPUs each compute and store partial results of the three polynomials, and together they must aggregate them into full polynomials of size $2^{23}$ over the BN254 scalar field. The total volume of data involved is:
- 3 polynomials per GPU × 256 MB each to be sent to the leader GPU = 768 MB per GPU
- Total communication volume on the receiver side of the leader GPU = 3 GPUs × 768 MB = 2304 MB
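These volumes follow directly from the polynomial size and the 32-byte BN254 scalar field elements:

```latex
2^{23} \times 32\ \text{B} = 256\ \text{MB}, \qquad
3 \times 256\ \text{MB} = 768\ \text{MB per GPU}, \qquad
3 \times 768\ \text{MB} = 2304\ \text{MB in total at the leader.}
```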
The goal of `Reduce` is to sum these partial polynomials on the leader device, so that the leader GPU retains the complete 768 MB result (i.e., the full polynomials).
Computation Cost
Field additions in BN254 are extremely lightweight on GPUs. Each element-wise addition over a polynomial of size $2^{23}$ takes less than 1 ms and is negligible in the overall runtime.
Communication Cost
PCIe Transfer Speed

| Item | Value |
| --- | --- |
| Interface | PCIe Gen4 x16 |
| Theoretical Bandwidth | 32 GB/s |
| Measured Bandwidth (via NCCL) | 24 ~ 28 GB/s |
| Receive time by the leader GPU | 2304 MB / (24 ~ 28) GB/s ≈ 82 ~ 96 ms |
| Estimated with NCCL overhead | 92 ~ 106 ms |
We chose PCIe Gen4 x16 because it offers a well-balanced trade-off between performance, cost, and ecosystem stability. Gen4 provides up to 32 GB/s of bandwidth in each direction, which is sufficient for most real-world proving workloads, especially when combined with smart overlapping strategies between computation and communication.
While bandwidth can become a performance bottleneck in some high-throughput scenarios, upgrading to newer generations like PCIe Gen5 or Gen6 introduces significant trade-offs in cost, complexity, and platform requirements. For now, Gen4 remains the most practical and widely supported option, but we remain open to adopting higher PCIe generations if communication overhead proves to be a critical limiting factor.
Performance Summary
| Step | Time |
| --- | --- |
| Field addition | < 3 ms |
| PCIe communication (`Reduce`) | 92 ~ 106 ms |
| Total Execution Time | 95 ~ 109 ms |
According to ICICLE-Snark, an MSM over a polynomial of degree $2^{23}$ takes approximately 774 ms. With four devices, each handling a degree-$2^{21}$ polynomial, the per-device MSM time is around 193 ms. If we overlap the `Reduce` step with two of these MSMs, the communication overhead can be effectively hidden.
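A rough sketch of how this overlap could be scheduled on a non-leader device with two CUDA streams; `run_partial_msm`, the buffer names, and `chunk_len` are hypothetical, and the NCCL call mirrors the snippets above:

```cpp
// Compute stream: launch two of the four witness MSMs (~2 x 193 ms of work).
run_partial_msm(msm_out[0], bases_a, scalars_a, chunk_len, compute_stream);
run_partial_msm(msm_out[1], bases_b, scalars_b, chunk_len, compute_stream);

// Communication stream: meanwhile, send this device's partial polynomials to
// the leader (the ~92-106 ms Reduce traffic), overlapping with the MSMs above.
ncclSend(my_partial_poly, poly_bytes, ncclUint8, /*leader rank=*/0, comm, comm_stream);

// The h(X)-dependent MSM can only start after both streams are done.
cudaStreamSynchronize(compute_stream);
cudaStreamSynchronize(comm_stream);
```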
Estimation
Let’s perform a simple estimation based on the data from ICICLE-Snark.
Assumptions
- The number of rows and the number of columns of the matrices are both taken to be $2^{23}$.
- The runtime for an MSM over $\mathbb{G}_2$ is the same as that for an MSM over $\mathbb{G}_1$.
- The runtime for MSM scales linearly with the degree.
- The total proving time for Groth16 is the sum of the time taken for 5 MSMs (387 ms), 1 FFT (10 ms), and 1 IFFT (10 ms).
- The time for `Reduce`, `Recv`, and `Send` is negligible.
Computation
If everything is computed serially, the total time is:
If we instead use the proposed scheme across 4 GPUs, the time becomes:
This shows that the proving time is reduced by approximately a factor of 4.
Input Size
We do not include the sparse matrices in our input size estimation, as their sparsity makes it difficult to quantify precisely. However, partitioning them will also contribute to reducing the overall memory requirement.
If everything is computed on a single device, the input size is around MB. Each component will consume memory as follows:
Witness vector : B MB
MSM base point : B MB
MSM base point : B MB
MSM base point : B MB
MSM base point : B MB
MSM base point : B MB
If we instead use the proposed scheme across 4 GPUs, the input size becomes around MB.
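For intuition, a back-of-the-envelope version of this estimate under our own assumptions (all lengths rounded up to $2^{23}$; uncompressed affine points: 32 B per BN254 scalar, 64 B per $\mathbb{G}_1$ point, 128 B per $\mathbb{G}_2$ point), which need not match the exact figures above:

```latex
\text{scalars: } 2^{23} \times 32\ \text{B} = 256\ \text{MB}, \qquad
\mathbb{G}_1\ \text{bases: } 2^{23} \times 64\ \text{B} = 512\ \text{MB}, \qquad
\mathbb{G}_2\ \text{bases: } 2^{23} \times 128\ \text{B} = 1024\ \text{MB}
```

Splitting the scalars and base points across 4 GPUs divides each of these contributions by 4.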
Runtime Memory Size
Here, we estimate only the memory required for MSM itself, excluding the additional memory needed for buckets in the Pippenger algorithm.
If everything is computed on a single device, and intermediate memory is released immediately after use, the main bottleneck becomes the MSM in , which requires MB of memory.
However, with the proposed scheme using 4 GPUs, this memory requirement is reduced to MB. The Full FFT, including twiddle factors, requires an additional MB. Therefore, under this setup, the total memory required per device is approximately MB.