FedML
Deployment of FedGKT on IoT devices with AWS as the server
I have a few more questions. I am planning to deploy FedGKT on IoT devices (Jetson Nano, TX2, and Xavier). I read the FedGKT paper, where you used the CPU as the edge (client) and the GPU as the server for aggregation. What kind of issues do you foresee when deploying FedGKT on IoT devices? I will try my best to do it, but since your IoT example covers the Nano and Raspberry Pi (without FedGKT support), will it also work on the Xavier and TX2? I will also deploy the server on AWS Cloud to test in a real environment and study the impact of lightweight models on bandwidth and latency. Your guidance and recommendations are needed. What steps should I follow to do this smoothly?
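To study bandwidth and latency, I am thinking of starting with a simple measurement like the sketch below before wiring up the real FedGKT messages. The endpoint URL and the tensor shapes are placeholders, not the actual FedGKT message format:

```python
# Rough sketch of the bandwidth/latency measurement for the client -> AWS leg.
# The endpoint URL and tensor shapes are placeholders, NOT the real FedGKT
# message format; the goal is only to get payload-size and round-trip numbers.
import io
import time

import requests  # assumes the AWS server exposes a simple HTTP endpoint
import torch

SERVER_URL = "http://<aws-public-ip>:8080/upload"  # placeholder endpoint

# Dummy stand-ins for the extracted features + soft labels an edge device
# would send in FedGKT; the shapes here are only illustrative.
features = torch.randn(32, 64, 8, 8)
soft_labels = torch.randn(32, 10)

buffer = io.BytesIO()
torch.save({"features": features, "soft_labels": soft_labels}, buffer)
payload = buffer.getvalue()
print(f"payload size: {len(payload) / 1024:.1f} KiB")

start = time.time()
resp = requests.post(SERVER_URL, data=payload, timeout=30)
print(f"round-trip time: {time.time() - start:.3f} s, status: {resp.status_code}")
```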
Hi @shanullah, did you succeed in deploying FedGKT on IoT devices?
Hello @shanullah and @KOUDA-AMINE, were you able to run https://github.com/FedML-AI/FedML/blob/master/python/fedml/simulation/mpi/fedgkt/FedGKTAPI.py?
Also have a look here: https://github.com/FedML-AI/FedML/tree/master/iot
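Before attempting the full MPI-based FedGKT run on the Jetsons, it may help to first confirm that the fedml package and the MPI backend work on each device. This is just an environment sanity check, nothing FedGKT-specific:

```python
# Minimal per-device check before the MPI-based FedGKT run: verifies that
# fedml, torch, and mpi4py import cleanly and reports versions and MPI rank.
from importlib.metadata import version

import torch
from mpi4py import MPI

comm = MPI.COMM_WORLD
print(f"MPI rank {comm.Get_rank()} of {comm.Get_size()}")
print(f"fedml {version('fedml')}, torch {torch.__version__}, "
      f"CUDA available: {torch.cuda.is_available()}")
```

Running it under mpirun (e.g. `mpirun -np 2 python check_env.py`) on the Nano/TX2/Xavier should surface missing packages or MPI setup problems early, before debugging the FedGKT pipeline itself.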