gpt-neo topic

Repositories tagged with the gpt-neo topic.

finetune-gpt2xl

421 Stars · 73 Forks

Guide: Fine-tune GPT-2 XL (1.5 billion parameters) and GPT-Neo (2.7B) on a single GPU with Hugging Face Transformers using DeepSpeed.
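
To give a flavor of what the guide covers, here is a minimal sketch of the technique, assuming the standard Hugging Face Trainer plus a DeepSpeed config file; the dataset, hyperparameters, and ds_config.json below are illustrative assumptions, not the repo's actual settings.

```python
# Minimal sketch: fine-tuning GPT-Neo 2.7B with Trainer + DeepSpeed.
# Dataset and hyperparameters are illustrative, not the repo's settings.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")
tokenizer.pad_token = tokenizer.eos_token  # GPT-Neo has no pad token
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-2.7B")

# Any causal-LM text dataset works; wikitext is a stand-in here.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

def tokenize(batch):
    out = tokenizer(batch["text"], truncation=True, max_length=512,
                    padding="max_length")
    out["labels"] = out["input_ids"].copy()  # causal LM: labels = inputs
    return out

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="gpt-neo-finetuned",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    fp16=True,
    num_train_epochs=1,
    deepspeed="ds_config.json",  # ZeRO stage + offload settings live here
)

Trainer(model=model, args=args, train_dataset=tokenized).train()
```

Launched with the DeepSpeed launcher (e.g. `deepspeed train.py`), ZeRO's optimizer-state offload is what makes a 2.7B-parameter model trainable on a single GPU.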

quickai

162 Stars · 16 Forks

QuickAI is a Python library that makes it extremely easy to experiment with state-of-the-art Machine Learning models.

gpt-neo-fine-tuning-example

166 Stars · 47 Forks

Fine-tune EleutherAI GPT-Neo and GPT-J-6B to generate Netflix movie descriptions using Hugging Face and DeepSpeed.
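
Like finetune-gpt2xl above, this example leans on DeepSpeed's ZeRO offload. As a hedged sketch of what such a config can look like, here is an illustrative ZeRO stage-2 dict (the Trainer's deepspeed parameter accepts a dict as well as a JSON path); the values are assumptions, not the repo's own file:

```python
# Illustrative DeepSpeed ZeRO stage-2 config with CPU offload; the repo
# ships its own file, and these values are assumptions, not its settings.
ds_config = {
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 2,
        "offload_optimizer": {"device": "cpu"},  # optimizer states -> CPU RAM
        "allgather_bucket_size": 2e8,
        "reduce_bucket_size": 2e8,
    },
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
}
# Passed in place of a file path: TrainingArguments(..., deepspeed=ds_config)
```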

Basic-UI-for-GPT-J-6B-with-low-vram

114 Stars · 12 Forks

A repository to run GPT-J-6B on low-VRAM machines (4.2 GB minimum VRAM for a 2000-token context, 3.5 GB for a 1000-token context). Loading the model takes 12 GB of free RAM.
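
The repo implements its own low-VRAM scheme to hit the numbers quoted above; for comparison, here is a minimal sketch of the generic route in Transformers plus Accelerate, with half precision and automatic CPU offload (the memory budget below is an assumption):

```python
# Sketch: running GPT-J-6B with most weights offloaded to CPU RAM.
# This is the generic accelerate device_map route, not the repo's own
# scheme; the memory limits are assumed, not its quoted figures.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B",
    torch_dtype=torch.float16,               # halves the weight footprint
    device_map="auto",                       # split layers across GPU and CPU
    max_memory={0: "4GiB", "cpu": "24GiB"},  # assumed budget
)

prompt = "EleutherAI's GPT-J is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=50, do_sample=True,
                     temperature=0.8)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```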

Basic-UI-for-GPT-Neo-with-low-vram

35 Stars · 5 Forks

A basic UI for running GPT-Neo 2.7B on low VRAM (3 GB VRAM minimum).

mtj-softtuner

27 Stars · 20 Forks

Create soft prompts for fairseq 13B dense, GPT-J-6B, and GPT-Neo-2.7B for free on a Google Colab TPU instance.
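
mtj-softtuner targets Mesh Transformer JAX checkpoints on Colab TPUs, so its code looks quite different, but the soft-prompt idea itself is framework-agnostic: a small matrix of trainable virtual-token embeddings is prepended to the frozen model's input embeddings. A minimal PyTorch sketch of that idea, using a small GPT-Neo checkpoint as a stand-in:

```python
# Minimal soft-prompt sketch: trainable virtual-token embeddings prepended
# to the frozen model's input embeddings. Not mtj-softtuner's actual code.
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
for p in model.parameters():
    p.requires_grad = False                  # the base model stays frozen

n_virtual, dim = 20, model.config.hidden_size
soft_prompt = nn.Parameter(torch.randn(n_virtual, dim) * 0.02)  # trainable

def forward_with_soft_prompt(input_ids, labels):
    tok_embeds = model.get_input_embeddings()(input_ids)
    batch = input_ids.size(0)
    prefix = soft_prompt.unsqueeze(0).expand(batch, -1, -1)
    inputs_embeds = torch.cat([prefix, tok_embeds], dim=1)
    # Don't compute loss on the virtual-token positions.
    pad = torch.full((batch, n_virtual), -100, dtype=labels.dtype)
    return model(inputs_embeds=inputs_embeds,
                 labels=torch.cat([pad, labels], dim=1)).loss

ids = tokenizer("Soft prompts steer a frozen model.",
                return_tensors="pt").input_ids
optim = torch.optim.Adam([soft_prompt], lr=1e-3)
loss = forward_with_soft_prompt(ids, ids.clone())
loss.backward()    # gradients flow only into the soft prompt
optim.step()
```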

Promptify

3.1k Stars · 228 Forks · 14 Watchers

Prompt engineering | Prompt versioning | Use GPT or other prompt-based models to get structured output. Join our Discord for prompt engineering, LLMs, and other recent research.
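
Promptify wraps this up in its own templates and parsers (see its docs for the actual API); the bare idea of prompting a model for structured output can be sketched without it. Everything below, including the prompt format, is an illustrative assumption:

```python
# Sketch of the idea behind Promptify-style structured output: ask the
# model for JSON and parse it. This is NOT Promptify's API.
import json
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")

text = "Barack Obama was born in Honolulu in 1961."
prompt = (
    "Extract entities from the text as JSON with keys "
    '"people", "places", "dates".\n'
    f"Text: {text}\nJSON:"
)
raw = generator(prompt, max_new_tokens=60, do_sample=False,
                return_full_text=False)[0]["generated_text"]

try:
    entities = json.loads(raw[: raw.rfind("}") + 1])  # trim trailing chatter
except json.JSONDecodeError:
    entities = None  # small base models often need few-shot examples or retries
print(entities)
```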

SkyCode-AI-CodeX-GPT3

390 Stars · 22 Forks

SkyCode is a multilingual open-source large code model built on the GPT-3 architecture. It supports many mainstream programming languages, including Java, JavaScript, C, C++, Python, Go, and shell, and it can understand Chinese comments. The model can complete code and has strong problem-solving ability, freeing you from programming...

codegen

186 Stars · 34 Forks

Salesforce CodeGen with a web server.
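
A hedged sketch of the pairing the description names: a CodeGen checkpoint behind a small HTTP endpoint. FastAPI is an assumed choice here, not necessarily the repo's stack:

```python
# Sketch: a CodeGen checkpoint behind a minimal HTTP endpoint. FastAPI is
# an assumed choice; the repo's actual server may differ.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-mono")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-350M-mono")

app = FastAPI()

class CompletionRequest(BaseModel):
    prompt: str
    max_new_tokens: int = 64

@app.post("/complete")
def complete(req: CompletionRequest):
    ids = tokenizer(req.prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=req.max_new_tokens,
                         pad_token_id=tokenizer.eos_token_id)
    return {"completion": tokenizer.decode(out[0], skip_special_tokens=True)}

# Run with: uvicorn server:app --port 8000
```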

gpt-j-fine-tuning-example

65 Stars · 18 Forks

Fine-tuning the 6-billion-parameter GPT-J (and other models) with LoRA and 8-bit compression.
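
The repo's own implementation details may differ; as a sketch of the combination it names, here is LoRA applied to an 8-bit-quantized GPT-J via the peft and bitsandbytes integrations, with illustrative hyperparameters:

```python
# Sketch: LoRA adapters on an 8-bit-quantized GPT-J, using the peft /
# bitsandbytes integrations. The repo's own code may differ, and the
# hyperparameters below are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B",
    load_in_8bit=True,    # bitsandbytes int8 weights
    device_map="auto",
)

lora = LoraConfig(
    r=8,                  # rank of the low-rank update
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # GPT-J attention projections
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only a small fraction is trainable
# From here, train with the usual Trainer loop; only adapter weights update.
```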