Question
Part of federated learning research involves operations on the communication between the server and the clients, such as dropping part of an update (dropping some of the gradients that describe a model) exchanged between clients and server, or discarding the update from a specific client in a given communication round. I want to know whether such capabilities are supported by the TensorFlow Federated (TFF) framework and how they are supported, because at first glance the level of abstraction of the TFF API does not seem to allow such operations. Thank you.
Answer 1:
TFF's language design intentionally avoids a notion of client identity; there is a desire to avoid making "client X" addressable, so that its update could be discarded or it could be sent different data.
However, there may be a way to run simulations of the kind of computation mentioned. TFF does support expressing the following:
1. Computations that condition on properties of tensors, for example ignoring an update that contains nan values. One way this could be accomplished is by writing a tff.tf_computation that conditionally zeros out the weight of an update before tff.federated_mean. This technique is used in tff.learning.build_federated_averaging_process() (a sketch is given after this list).
2. Simulations that run different computations on different sets of clients (where a set may be a single client). Since the reference executor parameterizes clients by the data they possess, a writer of TFF code could write two tff.federated_computations, apply them to different simulation data, and combine the results (see the second sketch below).
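
A minimal sketch of the first idea, assuming each client update is a flat float32 vector of length 3; the helper names (clean_update, update_weight, mean_ignoring_nan_updates) and the fixed shape are illustrative assumptions, not the actual implementation inside tff.learning. It zeros out both the values and the weight of a nan-containing update, so the weighted mean is unaffected by that client:

```python
import tensorflow as tf
import tensorflow_federated as tff

# Assumed for illustration: each client update is a flat float32 vector of length 3.
UPDATE_TYPE = tff.TensorType(tf.float32, [3])


@tff.tf_computation(UPDATE_TYPE)
def clean_update(update):
  # Replace a nan-containing update with zeros so it cannot poison the weighted sum.
  has_nan = tf.reduce_any(tf.math.is_nan(update))
  return tf.cond(has_nan, lambda: tf.zeros_like(update), lambda: update)


@tff.tf_computation(UPDATE_TYPE)
def update_weight(update):
  # Give zero weight to a nan-containing update so federated_mean ignores it.
  has_nan = tf.reduce_any(tf.math.is_nan(update))
  return tf.cond(has_nan, lambda: tf.constant(0.0), lambda: tf.constant(1.0))


@tff.federated_computation(tff.FederatedType(UPDATE_TYPE, tff.CLIENTS))
def mean_ignoring_nan_updates(client_updates):
  cleaned = tff.federated_map(clean_update, client_updates)
  weights = tff.federated_map(update_weight, client_updates)
  return tff.federated_mean(cleaned, weight=weights)


# In a simulation execution context, the computation can be invoked directly on
# Python lists of per-client data, e.g.:
#   mean_ignoring_nan_updates([[1., 2., 3.], [float('nan'), 0., 0.]])
# The second client's update gets weight 0 and does not affect the result.
```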
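
And a sketch of the second idea: running different federated computations on different sets of clients in simulation. The names mean_for_kept_clients and sum_for_other_clients and the toy scalar data are assumptions made for illustration:

```python
import tensorflow as tf
import tensorflow_federated as tff

# Assumed for illustration: each client holds a single float32 value.
CLIENT_FLOATS = tff.FederatedType(tf.float32, tff.CLIENTS)


@tff.federated_computation(CLIENT_FLOATS)
def mean_for_kept_clients(values):
  return tff.federated_mean(values)


@tff.federated_computation(CLIENT_FLOATS)
def sum_for_other_clients(values):
  return tff.federated_sum(values)


# In a simulation, "a set of clients" is just the Python list of per-client data
# passed to an invocation, so dropping a client from a round amounts to leaving
# its data out of the list.
kept_result = mean_for_kept_clients([1.0, 2.0, 3.0])   # clients kept this round
other_result = sum_for_other_clients([10.0])           # a single excluded client
# Both results are ordinary Python/numpy values and can be combined as needed.
```

Since client identity is never addressable inside the computation itself, which clients participate in a round is decided entirely by the driver script that chooses what data to feed in.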
Source: https://stackoverflow.com/questions/56050014/operations-performed-on-the-communications-between-the-server-and-clients