
How is the contrastive data pipeline implemented?

MarkYangjiayi opened this issue on Aug 27 '23 · 8 comments

Hi, I saw the paper mention that C_curr and C_prev come from the same document in the batch, but I didn't really see how this is implemented.

It seems that in the data_processing part of the code, the processor just samples a new piece of data each time. How does it guarantee that the next batch of data will contain the same context at a different step? Thanks

MarkYangjiayi · Aug 27 '23 16:08

I have the same question. I guess it might use the same data processing as in Memorizing Transformers (Figure 3)?

hxs91 · Aug 28 '23 03:08

As mentioned in the README, the instruction fine-tuning does not use FoT. In fact, it can be thought of as a "modified" FoT with cross_batch=1 because:

  • We take the document and randomly pad it (left, right) so that it has 2048 tokens
  • Then we load the document into the model; since last_context_length is 1024, part of the document is loaded into memory and constitutes C_prev (see the sketch after this list)
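
To make this concrete, here is a minimal sketch of that padding-and-splitting step, assuming an already tokenized document that fits in the 2048-token window. The names (pad_and_split, PAD_TOKEN_ID, the constants) are illustrative and not taken from the repository:

```python
import random

CONTEXT_LENGTH = 2048       # total window after padding
LAST_CONTEXT_LENGTH = 1024  # local (current) context size
PAD_TOKEN_ID = 0            # hypothetical pad id

def pad_and_split(doc_tokens):
    """Randomly pad a tokenized document to 2048 tokens, then split it so
    that everything before the last 1024 tokens goes into memory (C_prev)
    and the last 1024 tokens form the current context (C_curr)."""
    n_pad = CONTEXT_LENGTH - len(doc_tokens)
    left_pad = random.randint(0, n_pad)
    right_pad = n_pad - left_pad
    padded = [PAD_TOKEN_ID] * left_pad + list(doc_tokens) + [PAD_TOKEN_ID] * right_pad

    c_prev = padded[:-LAST_CONTEXT_LENGTH]  # loaded into memory
    c_curr = padded[-LAST_CONTEXT_LENGTH:]  # processed as the local context
    return c_prev, c_curr
```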

However, this is not the implementation that was used to create the base models. We plan to release the official FoT large scale continual pre-training (FoT finetuning) code within two weeks (this code will be in JAX).

CStanKonrad · Aug 29 '23 13:08

@MarkYangjiayi As described in Appendix A.2 of the FoT paper, FoT may not need the same data processing pipeline as Memorizing Transformers. C_curr and C_prev are not split across batches; instead they are segments (vertical) within a batch, which would explain two statements in the FoT paper:

  1. "FOT does not use memory during training, while MT does."
  2. "FOT does not require long documents in the training set, while MT does in order to capture long dependencies in memory"

If that is correct, what does the FoT data processing look like? Does FoT split a long document into multiple subsequences like Memorizing Transformers, so that training can use as much data from one long document as possible, or does it just perform truncation and padding on every single document? @CStanKonrad
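
For reference, here is a rough sketch of the two data-processing options being asked about, purely as an illustration of the question; neither is claimed to be what FoT actually does, and the function names are made up:

```python
def split_into_segments(doc_tokens, seg_len=2048):
    """Memorizing-Transformers-style option: chop a long document into
    consecutive segments so adjacent segments of the same document can
    play the roles of C_prev and C_curr."""
    return [doc_tokens[i:i + seg_len] for i in range(0, len(doc_tokens), seg_len)]

def truncate_and_pad(doc_tokens, seg_len=2048, pad_id=0):
    """Per-document option: keep at most seg_len tokens and pad the rest."""
    clipped = list(doc_tokens[:seg_len])
    return clipped + [pad_id] * (seg_len - len(clipped))
```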

hxs91 · Sep 06 '23 04:09

Have there been any developments on the "official FoT large scale continual pre-training (FoT finetuning) code"?

HuXinjing · Sep 13 '23 05:09

As mentioned in the README, the instruction fine-tuning does not use FoT. In fact, it can be thought of as a "modified" FoT with cross_batch=1 because:

  • We take the document and randomly pad it (left, right) so that it has 2048 tokens
  • Then we load the document into the model; since last_context_length is 1024, part of the document is loaded into memory and constitutes C_prev

However, this is not the implementation that was used to create the base models. We plan to release the official FoT large scale continual pre-training (FoT finetuning) code within two weeks (this code will be in JAX).

It's been almost two weeks; how is the plan for releasing the FoT pipeline going? Still looking forward to seeing the actual implementation of the cross-batch contrastive learning in FoT.

NickGao96 · Sep 15 '23 02:09

@hxs91 My hypothesis is that FoT uses a training strategy similar to the Recurrent Memory Transformer: if you want to train with a local context of 2k and 4 segments, you feed in 8k tokens and split them inside the training loop.
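
A minimal sketch of this hypothesis, assuming a hypothetical model interface that accepts a memory/cache argument and returns (loss, updated_memory); this is illustrative only, not the actual FoT training code:

```python
LOCAL_CTX = 2048
NUM_SEGMENTS = 4

def training_step(model, long_sequence):
    # long_sequence holds NUM_SEGMENTS * LOCAL_CTX = 8192 token ids.
    memory = None       # cache carried across segments (earlier segments act as C_prev)
    total_loss = 0.0
    for s in range(NUM_SEGMENTS):
        segment = long_sequence[s * LOCAL_CTX:(s + 1) * LOCAL_CTX]
        loss, memory = model(segment, memory=memory)
        total_loss += loss
    return total_loss / NUM_SEGMENTS
```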

MarkYangjiayi · Sep 18 '23 09:09

@hxs91 My hypothesis is that FoT uses a training strategy similar to the Recurrent Memory Transformer: if you want to train with a local context of 2k and 4 segments, you feed in 8k tokens and split them inside the training loop.

Yeah, I realize that if you put different segments in different batches they are not differentiable with respect to each other, which is inconsistent with the description in the FoT paper.

hxs91 · Sep 20 '23 10:09

I apologize for the late response and delay in the publication of the continued pre-training code. The FoT continued pre-training code is now available here. A brief explanation of this implementation can be found here.

CStanKonrad · Sep 22 '23 17:09