[Help] Confusion regarding the implementation of Federated Transfer Learning
Discussed in https://github.com/FederatedAI/FATE/discussions/5098
Originally posted by PaulKMandal on August 29, 2023:

I have questions regarding the specific implementation of Federated Transfer Learning (FTL) that FATE provides. First and foremost, I have reviewed the Liu et al. paper on FTL (available here). I also reviewed the Liu et al. paper on Vertical Federated Learning (VFL), available here, which also briefly covers FTL.
I am confused about the following: in both of these papers, FTL is an inherently vertical process. However, looking at both the example file available here and the implementation of HeteroFTL here, there is no bottom model, interaction layer, or top model. It appears that the NUS-WIDE example is just training the same neural network on different data, and I'm confused about how this works.
I have also reviewed the implementation here but need additional clarity about how the code actually works, especially since it seems to work completely differently from how VFL is implemented.
Any help is much appreciated!
Tagging @talkingwallace since he's a cool and helpful guy.
Hi! The implementation of Hetero-FTL follows https://arxiv.org/pdf/1812.03337.pdf: each party has its own bottom model, the guest side has an interaction layer, and there is no top model. This algorithm only supports binary classification. FTL uses Keras because it was implemented in an early version of FATE. `Dense(units=32, ...)` means the output dimension is 32, while the input dimension depends on your input features, so the bottom models are not actually the same.
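For intuition, here is a minimal sketch of that layout in Keras (not FATE's actual code; the input dimensions and activations are illustrative assumptions):

```python
# Illustrative sketch of the Hetero-FTL layout described above, NOT FATE's
# actual implementation. Input dims and activations are assumptions.
from tensorflow import keras
from tensorflow.keras import layers

# Host bottom model: maps the host party's features to a 32-dim embedding.
host_bottom = keras.Sequential([
    layers.Input(shape=(634,)),   # e.g. image features; width follows your data
    layers.Dense(units=32, activation="sigmoid"),
])

# Guest bottom model: same output width (32), but its own input dimension.
guest_bottom = keras.Sequential([
    layers.Input(shape=(1000,)),  # e.g. text features; a different feature space
    layers.Dense(units=32, activation="sigmoid"),
])

# The guest side aligns and combines the two 32-dim embeddings (the
# "interaction layer") and produces a binary prediction directly; there is
# no separate top model.
```

The point is that both bottom models share the same 32-dim output so the guest can align the embeddings, while each party's input width follows its own feature space.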
I'm re-reading your implementation. I have a few questions:
- Is the `initialize_nn` code in ftl_base.py what sets up the bottom models on each party?
- How feasible would it be to modify Hetero-FTL so that it works more like Hetero-NN currently does?
- Hetero-FTL assumes the first n examples are aligned, right? (See the sketch after this list for what I mean.) Is there any functionality for cross-domain federated transfer learning similar to this?
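To make the third question concrete, here is the assumption I mean, as a toy illustration (made-up IDs, not FATE code):

```python
# Toy illustration of the alignment assumption in question 3 (made-up IDs,
# not FATE code): the overlapping samples are the ones both parties hold.
guest_ids = ["u1", "u2", "u3", "u4"]        # guest also holds the labels
host_ids = ["u1", "u2", "u3", "u7", "u8"]   # host holds a different feature space

overlap = [i for i in guest_ids if i in host_ids]  # ["u1", "u2", "u3"]
n_overlap = len(overlap)

# My understanding of the paper: the alignment loss is computed only on these
# n_overlap co-occurring samples, while the prediction loss comes from the
# labeled party's samples. Hence "the first n examples are aligned".
print(n_overlap)  # 3
```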
I have a backlog of projects I'm working on right now, both for research and for work, but I would be willing to help with an updated version of the Hetero-FTL implementation or work on it myself once my schedule clears up.
1. Yes.
2 & 3. While Hetero-FTL offers a variety of functionalities, it doesn't provide the same level of support for complex model customization as Hetero-NN does. Specifically, Hetero-FTL is built around the Keras Dense layer architecture. In the NUS-WIDE example provided within FATE, both image and text data are converted into dense features for processing. If you need specialized cross-domain data handling within your bottom model, it would likely require modifying the source code of those bottom models.
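As a hypothetical illustration of what such a modification might look like (this is not existing FATE code; the function name and layer sizes are assumptions), a custom image-side bottom model could replace the single Dense layer:

```python
# Hypothetical sketch of the kind of bottom-model customization mentioned
# above. Hetero-FTL only wires up Keras Dense layers out of the box, so
# substituting an encoder like this would require editing the FTL source
# (e.g. where initialize_nn builds the model), not just the job config.
from tensorflow import keras
from tensorflow.keras import layers

def build_image_bottom_model(input_dim: int, embed_dim: int = 32) -> keras.Model:
    """A deeper image-side encoder instead of a single Dense layer."""
    return keras.Sequential([
        layers.Input(shape=(input_dim,)),
        layers.Dense(128, activation="relu"),
        # Output width must match the other party's embedding dimension so
        # the guest-side interaction layer can still align the embeddings.
        layers.Dense(embed_dim, activation="sigmoid"),
    ])
```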