tensorflow federated learning checkpoint

Submitted by 允我心安 on 2020-01-14 06:36:37

Question


I am studying the federated_learning_for_image_classification.ipynb tutorial with the TensorFlow Federated API.

In the example, I can check each simulated client's training accuracy and loss, as well as the total accuracy and total loss.

But there are no checkpoint files.

I want to create a checkpoint file for each client as well as a global checkpoint file,

and then compare each client's parameter variables with the global parameter variables.

Can anyone help me create checkpoint files in the federated_learning_for_image_classification.ipynb example?


Answer 1:


One question to ask is whether you want to compare the variables within TFF (as part of the federated computation) or post-hoc/outside TFF (analyzing within Python).
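For the post-hoc/outside-TFF route, one simple option (not a TFF API; the helper names and toy weights below are hypothetical, standing in for values you would extract from the server state and client models after each round) is to pickle the server and per-client weights once per round and diff them offline:

```python
import os
import pickle
import tempfile

def save_round_checkpoint(round_num, server_weights, client_weights, out_dir):
    # Serialize the server model weights and each client's weights for one
    # round, so they can be compared offline later.
    path = os.path.join(out_dir, "round_%d.pkl" % round_num)
    with open(path, "wb") as f:
        pickle.dump({"server": server_weights, "clients": client_weights}, f)
    return path

def load_round_checkpoint(path):
    with open(path, "rb") as f:
        return pickle.load(f)

# Toy weights standing in for values pulled out of a TFF training loop.
out_dir = tempfile.mkdtemp()
path = save_round_checkpoint(
    round_num=1,
    server_weights=[0.5, -0.25],
    client_weights={"client_0": [0.75, -0.5], "client_1": [0.25, 0.0]},
    out_dir=out_dir,
)

# Compare each client against the global model by computing per-weight deltas.
ckpt = load_round_checkpoint(path)
deltas = {
    cid: [c - s for c, s in zip(weights, ckpt["server"])]
    for cid, weights in ckpt["clients"].items()
}
print(deltas["client_0"])  # [0.25, -0.25]
```

This keeps the comparison entirely in Python, at the cost of moving the weights out of the federated computation each round.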

Modifying the tff.utils.IterativeProcess construction performed by tff.learning.build_federated_averaging_process may be a good way to go. In fact, I'd recommend forking the simplified implementation on GitHub at tensorflow_federated/python/research/simple_fedavg/simple_fedavg.py, rather than digging into tff.learning.

Changing the line that performs a tff.federated_mean on the updates from the clients to a tff.federated_collect will give a list of all the clients' model updates, which can then be compared to the global model.

Example:

client_deltas = tff.federated_collect(client_outputs.weights_delta)

@tff.tf_computation(server_state.model.type_signature,
                    client_deltas.type_signature)
def compare_deltas_to_global(global_model, deltas):
  for delta in deltas:
    pass  # compare each delta against global_model here

tff.federated_apply(compare_deltas_to_global,
                    (server_state.model, client_deltas))


Source: https://stackoverflow.com/questions/58247978/tensorflow-federated-learning-checkpoint
