Michael Potter

6 comments by Michael Potter

Yes, it would be ideal to postprocess this output from the `make_srl_string` method ![image](https://user-images.githubusercontent.com/35242331/101795500-d531b780-3abc-11eb-9cce-782a6be2d1cf.png) into clean triples (for example, I do not want BIO tags like the arg-modifier in a triple...
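A minimal sketch of the postprocessing described above: collapsing a BIO-tagged SRL output into a clean (ARG0, V, ARG1) triple while dropping argument modifiers. The tag names and input format here are assumptions for illustration, not the library's actual output.

```python
def srl_to_triple(words, tags):
    """Collapse BIO-tagged SRL output into an (ARG0, V, ARG1) triple.

    Skips "O" tokens and ARGM-* (argument modifier) spans, per the
    goal of clean triples without modifier tags.
    """
    spans = {}
    for word, tag in zip(words, tags):
        if tag == "O":
            continue
        label = tag.split("-", 1)[1]   # strip the B-/I- prefix
        if label.startswith("ARGM"):   # drop argument modifiers
            continue
        spans.setdefault(label, []).append(word)
    return tuple(" ".join(spans.get(k, [])) for k in ("ARG0", "V", "ARG1"))
```

For example, `srl_to_triple(["John", "ate", "the", "apple", "yesterday"], ["B-ARG0", "B-V", "B-ARG1", "I-ARG1", "B-ARGM-TMP"])` yields `("John", "ate", "the apple")`, with the temporal modifier discarded.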

Did you pass the path to the config file to the Python script, e.g.: `python torch_fedavg_mnist_lr_step_by_step_example.py --cf=/home/mpotter/FedML/python/examples/simulation/sp_fedavg_mnist_lr_example/fedml_config.yaml`?

Hi, if you look at `torch_fedavg_mnist_lr_one_line_example.py`, it calls `run_simulation(backend="single_process")` in the `__init__.py` file of fedml (https://github.com/FedML-AI/FedML/blob/master/python/fedml/__init__.py). `run_simulation(backend="single_process")` sets

```
global _global_training_type
_global_training_type = "simulation"
global _global_comm_backend...
```

I am using fedml 0.7.24. My config file:

```
common_args:
  training_type: "simulation"
  random_seed: 0
data_args:
  dataset: "mnist"
  data_cache_dir: "../../../data/mnist"
  partition_method: "hetero"
  partition_alpha: 0.5
model_args:
  model: "lr"
train_args:
  federated_optimizer: "FedAvg"
  client_id_list:...
```

@shizhouxing the exponent should be a learnable parameter that changes during training. There are two specifications I would like for k: a scalar parameter, or a neural network.
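The two specifications for k described above can be sketched in PyTorch: a single learnable scalar, or a small neural network that predicts k per input. Class and parameter names here are illustrative assumptions, not the project's actual API.

```python
import torch
import torch.nn as nn


class ScalarExponent(nn.Module):
    """k is one learnable scalar, updated by the optimizer during training."""

    def __init__(self, k_init=1.0):
        super().__init__()
        self.k = nn.Parameter(torch.tensor(k_init))

    def forward(self, x):
        # Clamp to keep the base positive so the power is well-defined.
        return x.clamp(min=1e-6) ** self.k


class NetworkExponent(nn.Module):
    """k is predicted per sample by a small neural network."""

    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 8), nn.ReLU(), nn.Linear(8, 1))

    def forward(self, x):
        k = self.net(x)                 # shape (batch, 1): one exponent per sample
        return x.clamp(min=1e-6) ** k   # broadcasts over features
```

Either module exposes k through the usual autograd machinery, so the optimizer updates it (or the network producing it) alongside the rest of the model.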

Accidentally closed. Sorry.