Run with ReLU+maxpooling #8

@Jason19991116

Description

Hello,

I encountered some issues while using the DaCapo benchmarks:

Currently, all the benchmarks use the SiLU+avgpooling scheme. Although DaCapo also provides model parameters for the ReLU+maxpooling variant, I have not been able to run it. Could you guide me on how to adjust the code to correctly use ReLU+maxpooling?

At present, DaCapo uses FHE parameters with a polynomial degree of 2^17. If I want to use FHE parameters with a polynomial degree of 2^16, how should I modify the code? Is it sufficient to only change the value of "nt"?
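To illustrate what I mean (the key names other than "nt" are my own guesses, not necessarily DaCapo's actual configuration keys), I expect the change to look roughly like this:

```python
# Hypothetical sketch of the parameter change; only "nt" is taken from the
# real config as far as I can tell, the other entries are placeholders.
fhe_params_current = {
    "logN": 17,      # ring degree N = 2^17
    "nt"  : 2**16,   # slot count, N / 2 under CKKS-style packing
}
fhe_params_wanted = {
    "logN": 16,      # ring degree N = 2^16
    "nt"  : 2**15,   # slot count shrinks to N / 2 = 2^15
    # presumably the modulus chain / bootstrapping depth also has to fit
    # into the smaller ring, which is the part I am unsure about
}
```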

When modifying AlexNet, I made the following attempts:

model_dict = torch.load(str(source_dir)+"/../data/alexNet_silu_avgpool_model", map_location=torch.device('cpu'))
->
model_dict = torch.load(str(source_dir)+"/../data/alexNet_relu_maxpool_model", map_location=torch.device('cpu'))

return HE_SiLU(x)
->
return HE_ReLU(x)

return HE_Avg(close, x)
->
return HE_MaxPad(close, x)

"nt" : 216
->
"nt" : 2
15
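As a sanity check on the weight file itself (plain PyTorch, nothing DaCapo-specific, and assuming the file holds an ordinary state dict as the variable name model_dict suggests), I also inspected the checkpoint keys:

```python
import torch

# List the parameter names and shapes stored in the ReLU+maxpool checkpoint,
# to confirm they match the ReLU+maxpool architecture before wiring them in.
# source_dir is the same variable used in the snippets above.
state = torch.load(
    str(source_dir) + "/../data/alexNet_relu_maxpool_model",
    map_location=torch.device("cpu"),
)
for name, tensor in state.items():
    print(name, tuple(tensor.shape))
```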

I then ran plaintext simulations. However, I found that after the first MaxPooling operation, the intermediate values start to deviate from the PyTorch execution results.
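If it helps to narrow this down, my guess is that some deviation is inherent to how max is built from ReLU under HE. If HE_MaxPad computes max(a, b) as b + ReLU(a - b) using a polynomial approximation of ReLU (this is an assumption on my side; I have not verified that this is what DaCapo does), then even the plaintext simulation cannot match torch.nn.MaxPool2d exactly. A small NumPy sketch of the effect, with a made-up smooth stand-in for the approximate ReLU:

```python
import numpy as np

def relu_exact(x):
    return np.maximum(x, 0.0)

def relu_approx(x):
    # Made-up smooth stand-in for an HE-friendly approximate ReLU;
    # the actual approximation used by DaCapo (if any) is unknown to me.
    return 0.5 * x * (1.0 + np.tanh(4.0 * x))

def max2(a, b, relu):
    # max(a, b) = b + relu(a - b); exact only when relu is exact.
    return b + relu(a - b)

rng = np.random.default_rng(0)
a, b = rng.standard_normal(8), rng.standard_normal(8)
print(np.abs(max2(a, b, relu_exact)  - np.maximum(a, b)).max())  # ~0 (float rounding)
print(np.abs(max2(a, b, relu_approx) - np.maximum(a, b)).max())  # small but nonzero
```

So part of what I am trying to understand is whether the deviation I see is this kind of approximation error, or a mistake in my changes above.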
