GNN-FHE-Pipeline

This directory contains the code for my Final Year Project, in which privacy-preserving Fully Homomorphic Encryption over the Torus (TFHE) is applied to a Graph Neural Network (GNN) model, specifically the GIN model, using Concrete-ML.
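
At a high level, Concrete-ML compiles a quantized torch model into an FHE circuit and then runs encrypted inference on it. The snippet below is only a minimal sketch of that workflow, using a placeholder model and illustrative parameters (layer sizes, n_bits); it is not the pipeline code in this repository, which builds a GIN model instead (see main.py).

import torch
from concrete.ml.torch.compile import compile_torch_model

# Placeholder model standing in for the GIN network used in this project.
model = torch.nn.Sequential(
    torch.nn.Linear(16, 8),
    torch.nn.ReLU(),
    torch.nn.Linear(8, 2),
)

# Representative inputs used by Concrete-ML for quantization calibration.
inputset = torch.randn(100, 16)

# Compile the torch model to an FHE-executable quantized module
# (n_bits=6 is an illustrative choice, not the project's setting).
quantized_module = compile_torch_model(model, inputset, n_bits=6)

# Encrypted inference on a single sample; fhe="simulate" gives a faster dry run.
prediction = quantized_module.forward(inputset[:1].numpy(), fhe="execute")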

The GNN code is a modified version of the IBM/Multi-GNN repository. You can refer to my previous repository at fabecode/GNN-FHE to view the initial commits, additions, and modifications made to the code before migrating it here.

Setup

To use the repository, you first need to create the conda environment via

conda env create -f env.yml python=3.9

Note that concrete-ml requires torch 1.13.1 to work.
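
Once the environment is created and activated, an optional quick check such as the one below confirms that the pinned torch version is the one being used:

import torch

# concrete-ml is built against torch 1.13.1, so make sure that pin was respected.
print(torch.__version__)
assert torch.__version__.startswith("1.13"), "concrete-ml expects torch 1.13.x"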

Quick Command

To run the GIN model with FHE directly, you can run the following from the gnn_fhe_pipeline directory:

python main.py --data HI-Small_Balanced_Formatted --model gin_fhe --fhe

Dataset

The data needed for the experiments can be found on Kaggle. To use this data with the provided training scripts, you first need to perform a pre-processing step:

python format_kaggle_files.py '/path/to/kaggle-file/' '/path/to/output-file/'

Make sure to change the filepath at the beginning of the data_loading.py file to point to the formatted CSV file generated by the pre-processing step.
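
For example, the edit at the top of data_loading.py would look roughly like the line below; the variable name shown here is hypothetical, so use whichever name the file actually defines:

# Hypothetical illustration of the filepath setting in data_loading.py;
# point it at the CSV produced by format_kaggle_files.py.
data_path = "/path/to/output-file/HI-Small_Formatted.csv"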

Detailed Usage

To run the experiments, you need to run the main.py script and specify any arguments you want to use. There are two required arguments, namely --data and --model.

For the --data argument, specify the dataset file name, e.g. --data HI-Small_Formatted. The --model parameter should be set to one of the available model classes, i.e. --model [gin, gin_fhe, gat, rgcn, pna]. Thus, to run a standard GNN, you would run, e.g.:

python main.py --data HI-Small_Formatted --model gin

To run a GNN on FHE, add the --fhe argument, e.g.:

python main.py --data HI-Small_Formatted --model gin_fhe --fhe

Then you can add different adaptations to the models by selecting the respective arguments from:

Argument        Adaptation
--emlps         Edge updates via MLPs
--reverse_mp    Reverse message passing
--ego           Ego IDs to the center nodes
--ports         Port numberings for edges

Thus, to run Multi-GIN with edge updates, reverse message passing, ego IDs, and ports, you would run the following command:

python main.py --data HI-Small_Formatted --model gin --emlps --reverse_mp --ego --ports
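
For reference, the flags described in this section correspond to an argument parser roughly like the sketch below. This is an illustrative reconstruction of the documented interface, not the actual parser in main.py.

import argparse

# Illustrative sketch of the CLI described above; the real parser in main.py may differ.
parser = argparse.ArgumentParser()
parser.add_argument("--data", required=True, help="name of the formatted dataset file")
parser.add_argument("--model", required=True,
                    choices=["gin", "gin_fhe", "gat", "rgcn", "pna"])
parser.add_argument("--fhe", action="store_true", help="run the GNN with FHE via Concrete-ML")
parser.add_argument("--emlps", action="store_true", help="edge updates via MLPs")
parser.add_argument("--reverse_mp", action="store_true", help="reverse message passing")
parser.add_argument("--ego", action="store_true", help="add ego IDs to the center nodes")
parser.add_argument("--ports", action="store_true", help="add port numberings to edges")
args = parser.parse_args()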

Licence

Apache License Version 2.0, January 2004