Rong Ou
Helps with #9472
Sounds good. We'll standardize on the second approach, i.e. each worker only provides its own set of columns that are 0-indexed, and the global DMatrix is a union of all...
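To make the indexing concrete, here is a minimal sketch (plain NumPy, not the actual XGBoost implementation) of what "each worker provides 0-indexed columns and the global DMatrix is their union" looks like; the helper names `global_offsets` and `to_global` are illustrative:

```python
import numpy as np

# Hypothetical sketch: each worker holds only its own feature columns,
# indexed locally from 0. A local column's global feature index is its
# local index plus the sum of the preceding workers' column counts.
worker_columns = [
    np.array([[1.0, 2.0], [3.0, 4.0]]),           # worker 0: 2 features
    np.array([[5.0], [6.0]]),                     # worker 1: 1 feature
    np.array([[7.0, 8.0, 9.0], [0.1, 0.2, 0.3]])  # worker 2: 3 features
]

def global_offsets(blocks):
    """Starting global feature index for each worker's column block."""
    counts = [b.shape[1] for b in blocks]
    return np.concatenate([[0], np.cumsum(counts)[:-1]])

offsets = global_offsets(worker_columns)  # [0, 2, 3]

def to_global(worker_rank, local_col):
    """Map a worker-local 0-indexed column to its global index."""
    return int(offsets[worker_rank] + local_col)

# The "global DMatrix" is conceptually the column-wise union of all blocks.
global_matrix = np.hstack(worker_columns)
```

With this layout, worker 1's column 0 is global feature 2, and the union has all six features over the shared rows.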
@trivialfis another question is about labels, weights, and other metadata. When doing column-split distributed training (non-federated), we assume this data is available on every worker. When loading data, do...
> @rongou Is using `rank{}` as process name instead of host name a deliberate choice for federated learning? Yes, in a federated setting, a participant may not want to expose...
> One question for @rongou
>
> > Therefore conceptually, we need to "move" the server-client split up, such that client only performs gradient collection, and server will perform the...
I think this is an additional mode of operation. For people who don't need encryption, the current approach for vertical federated learning is much more efficient.
With the proposed training approach, the active party would have the full model, right? But for inference, you assume it won't have the cut values. So how does one go...
This is beyond what the SecureBoost paper is doing, right? Agreed it's a bit tricky since we don't have a single global model any more.
Ah I see it does talk about the lookup tables, which need to be part of the model on each passive party.
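For intuition, here is a minimal sketch (names are illustrative, not XGBoost APIs) of SecureBoost-style inference with lookup tables: the active party holds the tree topology and leaf values, while each passive party keeps a private table mapping an opaque record id to its own (feature, threshold) pair and only answers left/right:

```python
# Hypothetical sketch, assuming one active party and per-party lookup tables.
class PassiveParty:
    def __init__(self, lookup_table, data):
        self.lookup = lookup_table  # record_id -> (local feature idx, threshold)
        self.data = data            # this party's private feature columns

    def goes_left(self, sample_idx, record_id):
        # The passive party evaluates the split privately; only the
        # branch direction is revealed to the active party.
        feature, threshold = self.lookup[record_id]
        return self.data[sample_idx][feature] < threshold

# A node is either a leaf (value) or a split owned by some party,
# referenced only by an opaque record id (no cut value stored here).
tree = {
    0: ("split", "party_b", 17, 1, 2),  # node 0: ask party_b about record 17
    1: ("leaf", -0.4),
    2: ("leaf", 0.6),
}

party_b = PassiveParty({17: (0, 5.0)}, data=[[3.0], [8.0]])
parties = {"party_b": party_b}

def predict(tree, parties, sample_idx, node=0):
    kind, *rest = tree[node]
    if kind == "leaf":
        return rest[0]
    owner, record_id, left, right = rest
    branch = left if parties[owner].goes_left(sample_idx, record_id) else right
    return predict(tree, parties, sample_idx, branch)
```

The cut values never leave the passive party; the lookup table is effectively that party's shard of the model, which is why it has to be persisted alongside the active party's trees.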