In-memory inputs for column split and vertical federated learning

Open rongou opened this issue 1 year ago • 7 comments

We've recently added support for column-wise data split (feature parallelism) and vertical federated learning (#8424), but the Python user interface is limited to text inputs and numpy arrays (#9365). We'd like to support other in-memory formats such as SciPy sparse matrices, pandas DataFrames, cuDF, and cuPy.
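
As a point of reference, here is roughly the call we'd like to extend to the additional formats; this is a hedged sketch, assuming the DataSplitMode enum lives in xgboost.core as it does for the existing numpy path:

```python
import numpy as np
import xgboost as xgb
from xgboost.core import DataSplitMode  # assumed import location of the enum

# This already works for numpy inputs (#9365); the goal is the same call for
# SciPy sparse matrices, pandas DataFrames, cuDF, and cuPy inputs.
X = np.random.rand(64, 8)
y = np.random.rand(64)
dtrain = xgb.DMatrix(X, label=y, data_split_mode=DataSplitMode.COL)
```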

One question is what passing in data_split_mode=COL should mean. There are two potential interpretations:

  • We assume each worker has access to the full dataset. Passing in data_split_mode=COL loads the whole DMatrix and then splits it by column according to the size of the cluster: the columns are split evenly into world_size slices, and each worker's rank determines which slice it gets. This is the approach currently used by the text inputs for feature-parallel distributed training, but not for vertical federated learning.
  • We assume each worker only has access to a subset of the total columns, with column indices starting from 0 on every worker. The whole DMatrix is the union of all the columns from all the workers, re-indexed starting from worker 0. This is the approach currently used for vertical federated learning.

Now that we want to support more in-memory inputs, it probably makes more sense to standardize on the second approach, since it seems wasteful to construct a full DMatrix in memory only to slice it by column (see the sketch below).
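
To make the two interpretations concrete, a minimal numpy-only sketch (the world size, rank, and shapes are placeholders for illustration):

```python
import numpy as np

world_size, rank = 4, 1  # placeholder cluster size and this worker's rank

# Interpretation 1: every worker holds the full dataset; the columns are split
# evenly into world_size slices and the worker keeps the slice matching its rank.
X_full = np.random.rand(100, 16)
col_slices = np.array_split(np.arange(X_full.shape[1]), world_size)
X_local_first = X_full[:, col_slices[rank]]  # columns 4..7 for rank 1

# Interpretation 2: each worker only ever sees its own columns, indexed from 0;
# the global DMatrix is the union of all workers' columns, re-indexed by rank.
X_local_second = np.random.rand(100, 4)  # this worker's private feature block
```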

rongou avatar Sep 28 '23 22:09 rongou

@trivialfis

rongou avatar Sep 28 '23 22:09 rongou

Helps with #9472

rongou avatar Sep 28 '23 22:09 rongou

Let's focus on the federated learning use case and remove the data splitting in XGBoost entirely.

trivialfis avatar Sep 28 '23 23:09 trivialfis

Sounds good. We'll standardize on the second approach, i.e. each worker only provides its own set of columns that are 0-indexed, and the global DMatrix is a union of all worker columns, re-indexed based on worker ranks.
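
For illustration, a hedged sketch of that re-indexing; the per-worker column counts here are hypothetical, and how they would actually be gathered (e.g. via a collective allgather) is an open implementation detail:

```python
# Hypothetical per-worker column counts, ordered by rank (workers 0, 1, 2).
n_cols_per_rank = [3, 2, 4]
rank = 1

# Worker `rank`'s local column j maps to global column offset + j.
offset = sum(n_cols_per_rank[:rank])
global_indices = [offset + j for j in range(n_cols_per_rank[rank])]  # -> [3, 4]
```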

rongou avatar Sep 29 '23 19:09 rongou

Excellent!

trivialfis avatar Sep 29 '23 19:09 trivialfis

@trivialfis another question is about labels, weights, and other metadata. When doing column-split distributed training (non-federated), we assume this data is available on every worker. When loading data, do we also assume this information is loaded into every worker? If not, we'd have to broadcast it from, say, worker 0.
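
If we did need that fallback, a minimal sketch of what it might look like, assuming the xgboost.collective.broadcast(data, root) helper can carry the metadata (the file path and the choice of rank 0 are hypothetical):

```python
import numpy as np
from xgboost import collective

# Hypothetical fallback: only rank 0 has loaded the labels (and weights, etc.).
labels = np.load("labels.npy") if collective.get_rank() == 0 else None

# Broadcast the object from rank 0 so every worker ends up with a copy.
labels = collective.broadcast(labels, 0)
```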

rongou avatar Oct 02 '23 18:10 rongou

When loading data, do we also assume this information is loaded into every worker?

I think this is a fair assumption.

trivialfis avatar Oct 08 '23 01:10 trivialfis