
OpenLLaMA can quickly learn how to code

Open jorgemcgomes opened this issue 1 year ago • 4 comments

I know the repo's README mentions that this model apparently can't code because consecutive spaces are merged by the tokenizer, and this has been discussed in #40 .

However, I did some fine-tuning on the 3B model using the "fixed" tokenizer by @danielhanchen https://huggingface.co/danielhanchen/open_llama_3b and with use_fast=True. This tokenizer encodes multiple spaces as multiple space tokens instead of discarding them as the "official" tokenizer does.
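To illustrate why this matters for code, here's a toy sketch of the two behaviors (plain-Python stand-ins, not the actual SentencePiece tokenizers):

```python
import re

def merging_tokenize(text):
    # Toy stand-in for the original tokenizer: runs of whitespace are
    # collapsed, so all indentation information is lost.
    return text.split()

def preserving_tokenize(text):
    # Toy stand-in for the "fixed" tokenizer: every space becomes its
    # own token, so decoding can reconstruct the indentation exactly.
    return re.findall(r" |\S+", text)

line = "        self.items.append(item)"
print(merging_tokenize(line))     # ['self.items.append(item)']
print(preserving_tokenize(line))  # 8 space tokens, then the statement
```

Decoding the merged version can never recover the eight leading spaces, which is why Python indentation came out broken before the tokenizer fix.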

My fine-tuning dataset includes very little code, as I wasn't really trying to do that; it's just a small part of the instructions in the instruct datasets I used. But then I noticed this in one of the model's outputs. Lo and behold: perfectly indented Python code.

class CraftingSystem:
    def __init__(self):
        super().__init__()
        self.items = []

    def add_item(self, item):
        self.items.append(item)

    def get_all_items(self):
        return self.items

    def get_item_name(self, item):
        return item[0]

    def get_item_description(self, item):
        return item[1]

A lot of people out there are simply repeating that OpenLLaMA is useless for code, but that doesn't seem to be the case, provided the tokenizer configuration is fixed and a little fine-tuning is done.

jorgemcgomes avatar Jun 29 '23 20:06 jorgemcgomes

Great news, thanks for sharing!

snichols avatar Jun 30 '23 14:06 snichols

It would be interesting if a LoRA could be trained so that one could just apply it without needing to fine-tune the model. That LoRA might also be applicable to other OpenLLaMA-derived models.
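For context, applying a LoRA just means adding a trained low-rank update on top of the frozen base weights, which is why the same adapter can in principle be dropped onto any model that shares those base weights. A minimal numpy sketch of the idea (toy dimensions, names hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2    # hidden size and LoRA rank (toy values)
alpha = 16     # LoRA scaling factor

W = rng.normal(size=(d, d))          # frozen base weight
A = rng.normal(size=(r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                 # trainable up-projection, zero-init

def adapted_forward(x):
    # Base path plus the low-rank adapter path, scaled by alpha / r.
    return x @ W.T + (x @ A.T @ B.T) * (alpha / r)

x = rng.normal(size=(1, d))
# With B zero-initialized, the adapter contributes nothing at first,
# so the adapted model starts out identical to the base model.
assert np.allclose(adapted_forward(x), x @ W.T)
```

Only A and B are trained, so shipping the adapter means shipping two small matrices rather than a full set of fine-tuned weights.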

derekelkins avatar Jul 01 '23 03:07 derekelkins

Check out our OpenLLaMA v2 model, which is pretrained on a lot of code. The official release will happen very soon.

young-geng avatar Jul 07 '23 07:07 young-geng

@jorgemcgomes Oh, kinda forgot to reply here! @young-geng Congrats on the new v2 release! Trying it out right now :) I can see that both the multiple-spaces issue and the fast tokenizer are fixed in the Hugging Face base repo (the thermal example you provided). Good work!

danielhanchen avatar Jul 08 '23 09:07 danielhanchen