This repository was archived by the owner on Nov 6, 2020. It is now read-only.
forked from ebfull/phase2
Parallelize param deserialization #14
Open
Description
Currently, deserialization (with and without subgroup checks) is single-threaded. Deserializing the large param vectors in parallel would speed up file reads considerably when subgroup checking is enabled (`check = true`).
Something like the following pseudocode could work (though rayon's chunked iterators may be a better fit):
fn MPCSmall::read_small(path: &str, check: bool) -> io::Result<Self> {
    let mut file = File::open(path)?;
    let delta_g1 = read_g1(&mut file, check);
    let delta_g2 = read_g2(&mut file, check);
    let h_len = read_u32(&mut file);
    // Note: when `h_len % num_cpus() != 0`, the last chunk must also pick up the remainder.
    let chunk_size = h_len / num_cpus();
    let h_offset = G1_SIZE + G2_SIZE + LEN_SIZE;

    // Launch `num_cpus()` threads, where each thread reads `chunk_size` points
    // from the h vector.
    let h: Vec<G1Affine> = (0..num_cpus())
        .into_par_iter()
        .flat_map(|i| {
            // Each thread seeks past the chunks owned by the threads before it.
            let offset = h_offset + i * chunk_size * G1_SIZE;
            let mut reader = BufReader::new(File::open(path));
            reader.seek(SeekFrom::Start(offset));
            let mut h_chunk = vec![];
            for _ in 0..chunk_size {
                h_chunk.push(read_g1(&mut reader, check));
            }
            h_chunk
        })
        .collect();

    // Do the same for the l vector.
    let l_len = read_u32(&mut file);
    let chunk_size = l_len / num_cpus();
    let l_offset = h_offset + h_len * G1_SIZE + LEN_SIZE;
    let l: Vec<G1Affine> = ... ;
    ...
}

fn MPCSmall::read_large(path: &str, check: bool) -> io::Result<Self> {
    // Do the same thing, but skip over the large params fields.
}