# FedConcat

This is the source code for the paper *Exploiting Label Skews in Federated Learning with Model Concatenation* ([arXiv:2312.06290](https://arxiv.org/abs/2312.06290)).

An example run script is provided in `run.sh`.
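
A typical invocation might look like the following minimal sketch. The entry-point script name (`main.py`) and the argument values are assumptions for illustration; check `run.sh` for the actual script and flag spellings.

```bash
# Hypothetical command line -- main.py and the values shown are assumptions;
# see run.sh for the actual entry point and flags used by the authors.
python main.py --model=simple-cnn --dataset=cifar10 --alg=fedconcat \
    --lr=0.01 --batch-size=64 --epochs=5 --n_parties=10 \
    --n_clusters=3 --encoder_round=50 --classifier_round=50 \
    --partition=noniid-labeldir --beta=0.5 \
    --device=cuda:0 --datadir=./data/ --logdir=./logs/ --init_seed=0
```

All flags correspond to the parameters described in the table below.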

| Parameter | Description |
| --- | --- |
| `model` | The model architecture. Options: `simple-cnn`, `vgg`, `resnet`, `mlp`. Default = `mlp`. |
| `dataset` | The dataset to use. Options: `mnist`, `cifar10`, `fmnist`, `svhn`, `generated`, `femnist`, `a9a`, `rcv1`, `covtype`. Default = `mnist`. |
| `alg` | The training algorithm. Options: `fedconcat`. |
| `lr` | The learning rate for the local models. Default = `0.01`. |
| `batch-size` | The batch size. Default = `64`. |
| `epochs` | The number of local training epochs. Default = `5`. |
| `n_parties` | The number of parties. Default = `2`. |
| `rho` | The momentum parameter for SGD. Default = `0`. |
| `n_clusters` | The number of client clusters. |
| `encoder_round` | The number of communication rounds for training the encoders in each cluster (see the concatenation sketch after this table). |
| `classifier_round` | The number of communication rounds for training the global classifier. |
| `partition` | The data partitioning strategy. Options: `homo`, `noniid-labeldir`, `noniid-#label1` (or `2`, `3`, ..., i.e. a fixed number of labels owned by each party). |
| `beta` | The concentration parameter of the Dirichlet distribution for the heterogeneous `noniid-labeldir` partition (see the partition sketch after this table). Default = `0.5`. |
| `device` | The device to run the program on. Default = `cuda:0`. |
| `datadir` | The path to the dataset. Default = `./data/`. |
| `logdir` | The path for storing logs. Default = `./logs/`. |
| `init_seed` | The initial random seed. Default = `0`. |
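
As the parameter names suggest, FedConcat first trains one encoder per client cluster for `encoder_round` rounds, then trains a single global classifier over the concatenated encoder features for `classifier_round` rounds. The PyTorch sketch below illustrates only the concatenation idea; the class and variable names are hypothetical and do not match the repo's actual code.

```python
import torch
import torch.nn as nn

class ConcatModel(nn.Module):
    """Illustrative only: a classifier over concatenated encoder features.

    Assumes each cluster encoder maps an input to a `feature_dim`-sized
    vector, so `len(encoders)` encoders yield a combined representation
    of size `feature_dim * len(encoders)`.
    """

    def __init__(self, encoders, feature_dim, n_classes):
        super().__init__()
        self.encoders = nn.ModuleList(encoders)
        self.classifier = nn.Linear(feature_dim * len(encoders), n_classes)

    def forward(self, x):
        # The per-cluster encoders stay frozen while the global
        # classifier is trained on their concatenated features.
        with torch.no_grad():
            feats = [enc(x) for enc in self.encoders]
        return self.classifier(torch.cat(feats, dim=1))
```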
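
For the `noniid-labeldir` option, label skew is induced by drawing each class's allocation across parties from a Dirichlet distribution with concentration `beta`. This is a minimal NumPy sketch of that idea, with a hypothetical function name rather than the repo's actual partitioning code:

```python
import numpy as np

def dirichlet_label_partition(labels, n_parties, beta, seed=0):
    """Split sample indices across parties with Dirichlet label skew.

    Smaller beta -> each class concentrates on fewer parties,
    i.e. stronger label skew.
    """
    rng = np.random.default_rng(seed)
    party_idxs = [[] for _ in range(n_parties)]
    for c in np.unique(labels):
        idx_c = np.flatnonzero(labels == c)
        rng.shuffle(idx_c)
        # Fraction of class c that each party receives.
        props = rng.dirichlet(np.full(n_parties, beta))
        cuts = (np.cumsum(props)[:-1] * len(idx_c)).astype(int)
        for party, chunk in enumerate(np.split(idx_c, cuts)):
            party_idxs[party].extend(chunk.tolist())
    return party_idxs
```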

## Citation

If you find this repository useful, please cite our paper:

```bibtex
@article{diao2023exploiting,
  title={Exploiting Label Skews in Federated Learning with Model Concatenation},
  author={Diao, Yiqun and Li, Qinbin and He, Bingsheng},
  journal={arXiv preprint arXiv:2312.06290},
  year={2023}
}
```