README-AI
Automated README file generator, powered by large language model APIs
Table of Contents
- Overview
- Demo
- Features
- Examples
- Getting Started
  - Installation
  - Usage
  - Tests
- Configuration
- Roadmap
- Contributing
- License
Overview
Objective
Readme-ai is a developer tool that auto-generates README.md files using a combination of data extraction and generative AI. Simply provide a repository URL or local path to your codebase, and a well-structured, detailed README file will be generated for you.
Motivation
Readme-ai streamlines documentation creation and maintenance, enhancing developer productivity. The project aims to help developers of all skill levels, across all domains, better understand, use, and contribute to open-source software.
[!IMPORTANT]
Readme-ai is currently under development with an opinionated configuration and setup. It is vital to review all generated text from the LLM API to ensure it accurately represents your project.
Demo
Standard CLI Usage:
Offline Mode Demonstration:
[!TIP]
Offline mode is useful for generating a boilerplate README at no cost. View the offline README.md example here!
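For instance, a minimal offline run might look like the following. This is a sketch that assumes the `--api offline` value listed in the Configuration section; the repository URL is only an example.

```sh
# Generate a boilerplate README with no LLM API calls (no API key required)
readmeai --api offline --repository https://github.com/eli64s/readme-ai
```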
Features
Flexible README Generation
Readme-ai uses a balanced approach to building README files, combining data extraction and generative AI to create comprehensive and informative documentation.
- Data Extraction & Analysis: File parsers and analyzers are used to extract project metadata, dependencies, and other relevant details. This data is used both to populate many sections of the README and to provide context to the LLM API.
- Generative Content: For more abstract or creative sections, readme-ai uses LLM APIs to generate content that is both informative and engaging. This includes sections such as a project slogan, overview, features table, and file summaries.
CLI Customization
Over a dozen CLI options are available to customize the README generation process:
- LLM Options: Run the tool with OpenAI, Ollama, Google Gemini, or in offline mode.
- Offline Mode: Generate a README without making API calls. Readme-ai is still able to populate a significant portion of the README using metadata collected during preprocessing.
- Project Badges: Choose from an array of badge styles, colors, and alignments.
- Project Logo: Select from the default set, upload your own, or let the LLM give it a try!
A few examples of the CLI options in action:
- Default output (no options provided to the CLI)
- `--alignment left --badge-style flat-square --image cloud`
- `--alignment left --badge-style flat --image gradient`
- `--badge-style flat --image custom`
- `--badge-style skills-light --image grey`
- `--badge-style flat-square`
- `--badge-style flat --image black`
See the Configuration section for a complete list of CLI options.
README sections generated by readme-ai include:

- Overview
- Features table
- Codebase documentation: repository structure and file summaries
- Quickstart commands: Install, Usage, and Test guides are supported for many languages.
- Contributing guide
- Additional sections: Project Roadmap, Contributing Guidelines, License, and Acknowledgements are included by default.
- Templates (wip): README template for ML & Data
Examples
| Output File | Input Repository | Input Contents |
|---|---|---|
| readme-python.md | readme-ai | Python |
| readme-google-gemini.md | readme-ai | Python |
| readme-typescript.md | chatgpt-app-react-ts | TypeScript, React |
| readme-postgres.md | postgres-proxy-server | Postgres, Duckdb |
| readme-kotlin.md | file.io-android-client | Kotlin, Android |
| readme-streamlit.md | readme-ai-streamlit | Python, Streamlit |
| readme-rust-c.md | rust-c-app | C, Rust |
| readme-go.md | go-docker-app | Go |
| readme-java.md | java-minimal-todo | Java |
| readme-fastapi-redis.md | async-ml-inference | FastAPI, Redis |
| readme-mlops.md | mlops-course | Python, Jupyter |
| readme-local.md | Local Directory | Flink, Python |
Getting Started
System Requirements:
- Python 3.9+
- Package manager/Container: `pip`, `pipx`, or `docker`
- LLM service: OpenAI, Ollama, Google Gemini, or Offline Mode
Repository URL or Local Path:
Make sure to have a repository URL or local directory path ready for the CLI.
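Both forms are passed with the same flag; a minimal sketch follows (the local path below is a placeholder).

```sh
# Remote repository URL
readmeai --repository https://github.com/eli64s/readme-ai

# Local directory path (placeholder path)
readmeai --repository /path/to/your/project
```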
Choosing an LLM Service:
- OpenAI: Recommended, requires an account setup and API key.
- Ollama: Free and open-source, potentially slower and more resource-intensive.
- Google Gemini: Requires a Google Cloud account and API key.
- Offline Mode: Generates a boilerplate README without making API calls.
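The service is selected with the `--api` flag. A rough sketch of each variant is shown below; the `openai`, `ollama`, and `offline` values appear elsewhere in this README, while the `gemini` value is an assumption, so check `readmeai --help` for the exact accepted names.

```sh
readmeai -r https://github.com/eli64s/readme-ai --api openai    # requires OPENAI_API_KEY
readmeai -r https://github.com/eli64s/readme-ai --api ollama    # local models, no API key
readmeai -r https://github.com/eli64s/readme-ai --api gemini    # value assumed; requires GOOGLE_API_KEY
readmeai -r https://github.com/eli64s/readme-ai --api offline   # no API calls at all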
Installation
Using pip
pip install readmeai
[!TIP]
Use pipx to install and run Python command-line applications without causing dependency conflicts with other packages!
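For example, a typical pipx workflow might look like this (assuming pipx itself is already installed):

```sh
pipx install readmeai
readmeai --help
```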
Using docker
docker pull zeroxeli/readme-ai:latest
Using conda
conda install -c conda-forge readmeai
From source
Clone and Install
Clone repository and change directory.
$ git clone https://github.com/eli64s/readme-ai
$ cd readme-ai
Using bash
$ bash setup/setup.sh
Using poetry
$ poetry install
Similarly, you can use `pipenv` or `pip` to install the dependencies listed in `requirements.txt`.
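As one possible sketch, a plain pip setup inside a virtual environment could look like this (it assumes a requirements.txt file at the repository root, as mentioned above):

```sh
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```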
Usage
Environment Variables
Using OpenAI
Set your OpenAI API key as an environment variable.
# Using Linux or macOS
$ export OPENAI_API_KEY=<your_api_key>

# Using Windows
$ set OPENAI_API_KEY=<your_api_key>
Using Ollama
Set Ollama local host as an environment variable.
$ export OLLAMA_HOST=127.0.0.1
$ ollama pull mistral:latest  # llama2, etc.
$ ollama serve                # run if not using the Ollama desktop app
For more details, check out the Ollama repository.
Using Google Gemini
Set your Google API key as an environment variable.
$ export GOOGLE_API_KEY=<your_api_key>
Run the CLI
Using pip
# Using OpenAI API
readmeai --repository https://github.com/eli64s/readme-ai --api openai

# Using Ollama local model
readmeai --repository https://github.com/eli64s/readme-ai --api ollama --model mistral
Using docker
docker run -it \
  -e OPENAI_API_KEY=$OPENAI_API_KEY \
  -v "$(pwd)":/app zeroxeli/readme-ai:latest \
  -r https://github.com/eli64s/readme-ai
Using streamlit
Try directly in your browser on Streamlit, no installation required! For more details, check out the readme-ai-streamlit repository.
From source
Usage
Using bash
$ conda activate readmeai
$ python3 -m readmeai.cli.main -r https://github.com/eli64s/readme-ai
Using poetry
$ poetry shell
$ poetry run python3 -m readmeai.cli.main -r https://github.com/eli64s/readme-ai
Tests
Using pytest
$ make pytest
Using nox
$ nox -f noxfile.py
[!TIP]
Use nox to test the application against multiple Python environments and dependencies!
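Nox sessions defined in the project's noxfile.py can also be listed and run individually. The flags below are standard nox options, though the session name here is hypothetical:

```sh
nox --list        # show the sessions defined in noxfile.py
nox -s tests      # run a single session by name (session name is hypothetical)
```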
Configuration
Customize the README file using the CLI options below.
| Option | Type | Description | Default Value |
|---|---|---|---|
| `--alignment`, `-a` | String | Align the text in the README.md file's header. | center |
| `--api` | String | LLM API service to use for text generation. | offline |
| `--badge-color` | String | Badge color name or hex code. | 0080ff |
| `--badge-style` | String | Badge icon style type. | see below |
| `--base-url` | String | Base URL for the repository. | v1/chat/completions |
| `--context-window` | Integer | Maximum context window of the LLM API. | 3999 |
| `--emojis`, `-e` | Boolean | Adds emojis to the README.md file's header sections. | False |
| `--image`, `-i` | String | Project logo image displayed in the README file header. | blue |
| `--language` (wip) | String | Language for generating the README.md file. | en |
| `--model`, `-m` | String | LLM API to use for text generation. | gpt-3.5-turbo |
| `--output`, `-o` | String | Output file name for the README file. | readme-ai.md |
| `--rate-limit` | Integer | Maximum number of API requests per minute. | 5 |
| `--repository`, `-r` | String | Repository URL or local directory path. | None |
| `--temperature`, `-t` | Float | Sets the creativity level for content generation. | 0.9 |
| `--template` (wip) | String | README template style. | default |
| `--top-p` | Float | Sets the probability of the top-p sampling method. | 0.9 |
| `--tree-depth` | Integer | Maximum depth of the directory tree structure. | 2 |
| `--help` | | Displays help information about the command and its options. | |

(wip) = feature under development
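As an illustration, several of these options can be combined in a single run. The values below are arbitrary examples rather than recommended settings, and the command assumes an OpenAI key is configured as described in the Usage section.

```sh
readmeai --repository https://github.com/eli64s/readme-ai \
         --api openai \
         --model gpt-3.5-turbo \
         --badge-style flat-square \
         --image cloud \
         --temperature 0.5 \
         --tree-depth 3 \
         --output readme-ai.md
```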
Badge Customization
The `--badge-style` option lets you select the style of the default badge set.
Available styles: default, flat, flat-square, for-the-badge, plastic, skills, skills-light, social.
When providing the `--badge-style` option, readme-ai does two things:
- Formats the default badge set to match the selection (i.e. flat, flat-square, etc.).
- Generates an additional badge set representing your project's dependencies and tech stack (i.e. Python, Docker, etc.).
Example
$ readmeai --badge-style flat-square --repository https://github.com/eli64s/readme-ai
Output
{... project logo ...}
{... project name ...}
{...project slogan...}
{... default badge set ...}

Developed with the software and tools below.

{... project dependency and tech stack badges ...}
{... end of header ...}
Project Logo
Select a project logo using the `--image` option.
Default logo options: blue, gradient, black, cloud, purple, grey.
For custom images, see the following options:
- Use `--image custom` to invoke a prompt to upload a local image file path or URL.
- Use `--image llm` to generate a project logo using an LLM API (OpenAI only).
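For example (a sketch: `--image custom` prompts interactively for the image path or URL, and `--image llm` requires an OpenAI key as noted above):

```sh
# Prompt for a local image file path or URL to use as the project logo
readmeai -r https://github.com/eli64s/readme-ai --image custom

# Generate a logo with the LLM (OpenAI only)
readmeai -r https://github.com/eli64s/readme-ai --image llm --api openai
```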
Roadmap
- [ ] Add new CLI options to enhance README file customization.
  - [X] `--api`: Integrate singular interface for all LLM APIs (OpenAI, Ollama, Gemini, etc.)
  - [ ] `--audit`: Review existing README files and suggest improvements.
  - [ ] `--template`: Select a README template style (i.e. ai, data, web, etc.)
  - [ ] `--language`: Generate README files in any language (i.e. zh-CN, ES, FR, JA, KO, RU)
- [ ] Develop robust documentation generator to build full project docs (i.e. Sphinx, MkDocs)
- [ ] Create community-driven templates for README files and gallery of readme-ai examples.
- [ ] GitHub Actions script to automatically update README file content on repository push.
Changelog
Contributing
To grow the project, we need your help! See the links below to get started.
License

Acknowledgments