                        LLaMA-Adapter
There is a new adapter called LLaMA-Adapter, a lightweight adaptation method for fine-tuning instruction-following LLaMA models using the 52K instruction data provided by Stanford Alpaca.
Open source status
- The model implementation is available in the GitHub repo.
- The model weights are partially available: variants of LLaMA are available (e.g. gpt4all, GPTQ-for-LLaMa), but the LLaMA-Adapter weights themselves aren't available.
- The authors of LLaMA-Adapter are @ZrrSkywalker, @csuhan, and @lupantech.
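For context on what makes the method lightweight: the paper describes prepending a small set of learnable "adaption prompts" to the attention of the top transformer layers, with their contribution scaled by a zero-initialised gate so that training starts exactly from the frozen pretrained model's behaviour. Below is a minimal NumPy sketch of that gating idea under my reading of the paper; the function names, shapes, and single-head layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np


def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)


def gated_prompt_attention(q, k, v, pk, pv, gate):
    """Single-head attention with learnable prompt keys/values (pk, pv).

    The softmax over real tokens and the softmax over adaption prompts are
    computed independently, and the prompt branch is scaled by `gate`.
    With gate == 0 (the zero-init used at the start of training) the
    output reduces to plain attention over the real tokens.
    """
    d = q.shape[-1]
    s_tok = softmax(q @ k.T / np.sqrt(d))   # attention over real tokens
    s_pr = softmax(q @ pk.T / np.sqrt(d))   # attention over adaption prompts
    return s_tok @ v + gate * (s_pr @ pv)


# Illustrative demo: with the gate at zero, the adapted attention matches
# vanilla attention, so the pretrained model is undisturbed at init.
rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))
k = rng.standard_normal((4, 8))
v = rng.standard_normal((4, 8))
pk = rng.standard_normal((2, 8))  # 2 hypothetical adaption prompts
pv = rng.standard_normal((2, 8))

vanilla = softmax(q @ k.T / np.sqrt(8)) @ v
gated0 = gated_prompt_attention(q, k, v, pk, pv, gate=0.0)
gated1 = gated_prompt_attention(q, k, v, pk, pv, gate=1.0)
```

During fine-tuning only the prompts and gates would be trained, which is why only a small adapter checkpoint (rather than full LLaMA weights) needs to be released.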