
Is there a scientific paper?

Open NatanFreeman opened this issue 1 year ago • 10 comments

Is there a scientific paper accompanying this release? I've searched but couldn't find one. I find it odd that the weights would be released but not the research.

NatanFreeman avatar Mar 17 '24 21:03 NatanFreeman

It would be nice to have:

  • Paper explaining the methodology
  • Benchmarks
  • Data this was trained on

Explosion-Scratch avatar Mar 17 '24 21:03 Explosion-Scratch

A model card would make sense. If there aren't any new techniques per se, then I don't see the need for yet another paper. If there are, then sure!

JudeDavis1 avatar Mar 17 '24 21:03 JudeDavis1

I'm happy as long as the code is up to date and the science is released even if not in an academic setting.

JudeDavis1 avatar Mar 17 '24 22:03 JudeDavis1

A model card would make sense. If there aren't any new techniques per se, then I don't see the need for yet another paper. If there are, then sure!

Agreed that a model card (what data Grok was trained on) is crucial for this being truly "open source".

It looks like Grok had a model card back from last November: https://x.ai/model-card/

Training data: The training data used for the release version of Grok-1 comes from both the Internet up to Q3 2023 and the data provided by our AI Tutors.

I doubt we'll get an answer any more detailed than "the Internet" and "whatever synthetic data our employees made"

AlexanderPuckhaber avatar Mar 17 '24 22:03 AlexanderPuckhaber

Is there a scientific paper accompanying this release? I've searched but couldn't find one. I find it odd that the weights would be released but not the research.

Because there's no research underlying it? There's nothing new or surprising in the model so far; it's the same architecture as other MoE LLMs, just with different data and training compute. Not every piece of software needs a paper that's going to be rejected at conferences and stay at pre-print stage on arXiv. :)
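For readers unfamiliar with the pattern being referred to: the shared architecture of MoE LLMs is a gating network that routes each token to its top-k experts and mixes their outputs. A minimal NumPy sketch of top-2 routing follows; the shapes, expert count, and function names are illustrative only and are not taken from the grok-1 code:

```python
import numpy as np

# Illustrative top-2 mixture-of-experts layer. Expert FFNs are single
# square matrices here for brevity; real models use full MLP blocks.
def moe_layer(x, gate_w, expert_ws, top_k=2):
    """Route each token to its top-k experts and mix their outputs."""
    logits = x @ gate_w                              # (tokens, n_experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]    # indices of best experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = logits[t, top[t]]
        weights = np.exp(sel - sel.max())
        weights /= weights.sum()                     # softmax over chosen experts
        for w, e in zip(weights, top[t]):
            out[t] += w * (x[t] @ expert_ws[e])
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))              # 4 tokens, hidden dim 8
gate_w = rng.normal(size=(8, 4))         # gating network over 4 experts
expert_ws = rng.normal(size=(4, 8, 8))   # one weight matrix per expert
y = moe_layer(x, gate_w, expert_ws)
```

Only the routing-plus-mixing structure is the point; differences between such models then come down mostly to data, scale, and training recipe, which is the argument being made.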

Qu3tzal avatar Mar 18 '24 00:03 Qu3tzal

Is there a scientific paper accompanying this release? I've searched but couldn't find one. I find it odd that the weights would be released but not the research.

Because there's no research underlying it? There's nothing new or surprising in the model so far; it's the same architecture as other MoE LLMs, just with different data and training compute. Not every piece of software needs a paper that's going to be rejected at conferences and stay at pre-print stage on arXiv. :)

Disagree. I think @Explosion-Scratch did a good job pointing out why a paper would be useful in this case.

NatanFreeman avatar Mar 18 '24 08:03 NatanFreeman

That's a technical report at best though.

Qu3tzal avatar Mar 18 '24 23:03 Qu3tzal

That's a technical report at best though.

Call it what you want, the issue is that it doesn't exist.

NatanFreeman avatar Mar 19 '24 18:03 NatanFreeman

The release absolutely needs some experiment details on μTransfer for an MoE model this large; anyone reading the code will notice the several 'weird' multipliers here: https://github.com/xai-org/grok-1/blob/7050ed204b8206bb8645c7b7bbef7252f79561b0/run.py#L31-L47
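For context, μTransfer (μP) inserts width-dependent scale factors into a model so hyperparameters tuned on a small proxy transfer to the full-size network, which is why such constants look 'weird' in isolation. A minimal sketch of the kind of multipliers involved; the function name, base width, and formulas below are assumptions for illustration, not the actual values in grok-1's run.py:

```python
import math

# Hypothetical muP-style multipliers; nothing here is read from run.py.
def mup_multipliers(width: int, base_width: int = 256) -> dict:
    """Width-dependent scale factors under a muP-like parametrization."""
    m = width / base_width  # width relative to the small proxy model
    return {
        # embedding outputs are typically scaled up with width
        "embedding_scale": math.sqrt(width),
        # attention logits use 1/d instead of the usual 1/sqrt(d) under muP
        "attn_logit_scale": 1.0 / width,
        # output logits are scaled down as the model widens
        "output_scale": 1.0 / m,
    }

scales = mup_multipliers(width=1024)
```

The point of the request stands: without a report stating the base model and parametrization, readers can only guess why each multiplier takes the value it does.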

yzlnew avatar Mar 26 '24 07:03 yzlnew

(image attachment)

AsureDay avatar Apr 26 '24 16:04 AsureDay