## Discoveries and hurdles along the way
In this section, we provide more detail on several topics we encountered along the way. Some were unexpected challenges, while others are improvements we have already applied for better performance.
### Parameterizing the circuit in Noir vs Circom
In Circom, the Semaphore template is parameterized as follows (ref):

```circom
template Semaphore(MAX_DEPTH) { .. }
```
In Noir, we are using a hardcoded global that has to be overwritten to be adjusted (ref):

```rust
// This value can get updated by overwriting this line.
pub global MAX_DEPTH: u32 = 10;
```
From discussions with the Noir team on Discord, we concluded this would be the cleanest way for the time being. The Noir circuit is present in the semaphore-noir packages but not used directly; instead, either a set of precompiled circuits from the snark-artifacts repo is used, or the user passes the compiled circuit themselves. When the user passes the Noir circuit directly, they can set the value as desired, either by hardcoding it or by updating it via a script.
Another option suggested by the team was to create a workspace with different packages for the different depths. In the case of Semaphore, where the depth varies between 1 and 32, we felt this would be too verbose compared to the chosen approach.
For testing the Noir program in TypeScript, we use a simple replace function to adjust the global to the desired value, as sketched below.
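As an illustration, here is a minimal sketch of such a replace step; the helper name and source path parameter are hypothetical, not the actual SDK code:

```typescript
import { readFileSync, writeFileSync } from "fs"

// Hypothetical helper: rewrite the hardcoded MAX_DEPTH global in the
// Noir source before compiling the circuit for a given tree depth.
function setMaxDepth(noirSourcePath: string, depth: number): void {
    const source = readFileSync(noirSourcePath, "utf8")
    const updated = source.replace(
        /pub global MAX_DEPTH: u32 = \d+;/,
        `pub global MAX_DEPTH: u32 = ${depth};`
    )
    writeFileSync(noirSourcePath, updated)
}
```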
For context: in Circom, using a parameterized template generates a part of the circuit based on the value passed for the parameter, and this is composable within a larger circuit (repo reference).
### UltraHonk proof generated by Barretenberg JS doesn't match the proof data required by the Solidity verifier
The proof data is generated by the Barretenberg backend as:

```typescript
// `backend` is an UltraHonkBackend instance
const proofData = await backend.generateProof(witness, { keccak: true })
```
It has 4 bytes of data prepended compared to the proof required by the Solidity verifier contract. We currently have to remove the first 4 bytes of the proof before passing it to the verification contract, as sketched below. (Note: based on comments we saw on Discord, it seems this issue is fixed in v0.82. We are currently using v0.72.1 and will retest once we update.)
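A minimal sketch of the workaround, assuming `proofData` comes from the call above and that its `proof` field is a `Uint8Array`:

```typescript
// Strip the 4 prepended bytes that the Solidity verifier does not expect
// (observed with bb.js v0.72.1; reportedly fixed in v0.82).
const proofForContract = proofData.proof.slice(4)
```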
### Verifier contract too large
Semaphore provides 32 verification keys for circuits of Merkle tree depths 1 to 32. In the original contract, all keys are stored in a single `bytes` value, and the correct key set is returned based on the input depth. For UltraHonk, however, the verifier contract itself is so large that adding functions to select keys by depth would exceed the 24KB contract size limit set by the EVM (even with the Solidity optimizer enabled). Additionally, the UltraHonk verification keys are too large to store in a single contract.
Currently, our design splits the Noir verifier into one main verifier contract and three external libraries: one for selecting verification keys based on depth, and two for storing the verification keys. At present, we store the necessary addresses in the main verifier contract and call them through `delegatecall()`. We plan to refactor the contracts to avoid using `delegatecall` and to optimize for lower gas consumption.
### Usage of multithreading to improve SDK benchmarks
To improve the performance of generating and verifying proofs with the Barretenberg backend, we utilize the multithreading option built into the SDK:

```typescript
new UltraHonkBackend(bytecode, { threads: nrThreads })
```
We updated the proving and verifying functions in the Semaphore SDK to accept an optional thread count parameter. This way we can leverage multithreading in both browser and Node environments, which has improved our benchmark results; a sketch of the pattern follows.
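For illustration only, a minimal sketch of such a wrapper, assuming the bb.js `UltraHonkBackend` API; the function name is hypothetical and not the actual SDK code:

```typescript
import { UltraHonkBackend } from "@aztec/bb.js"

// Hypothetical wrapper showing how an optional thread count can be
// passed through to the Barretenberg backend; defaults to one thread.
async function proveWithThreads(
    bytecode: string,
    witness: Uint8Array,
    nrThreads?: number
) {
    const backend = new UltraHonkBackend(bytecode, { threads: nrThreads ?? 1 })
    return backend.generateProof(witness, { keccak: true })
}
```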
### Improvements to the Noir circuit
The Noir team immediately spotted some low-hanging fruit for optimization in the following snippet of the circuit:
```rust
let index_bits: [u1; 32] = indexes.to_le_bits();
let mut node = identityCommitment;
for i in 0..MAX_DEPTH {
    if i < merkleProofLength {
        let sibling = hashPath[i];
        if index_bits[i] == 0 {
            node = poseidon([node, sibling]);
        } else {
            node = poseidon([sibling, node]);
        }
    }
}
```
- The array `index_bits` could be parameterized by `MAX_DEPTH` instead of being fixed at 32.
- Avoid an expensive operation call inside a conditional, in this case the `poseidon` hash, by lifting it out of the `if else`. Since both branches of a conditional are constrained in a circuit, hashing in each branch pays for the Poseidon gates twice; selecting the inputs first and hashing once avoids the duplication.
```rust
let index_bits: [u1; MAX_DEPTH] = indexes.to_le_bits();
let mut node = identityCommitment;
for i in 0..MAX_DEPTH {
    if i < merkleProofLength {
        let sibling = hashPath[i];
        let (left, right) = if index_bits[i] == 0 {
            (node, sibling)
        } else {
            (sibling, node)
        };
        node = poseidon([left, right]);
    }
}
```
For `MAX_DEPTH = 10`, this brought the gate count down from ~25k to ~16k.