Autoshow
End-to-end scripting workflow to automatically generate show notes from audio/video transcripts with Whisper.cpp, Llama.cpp, yt-dlp, and Commander.js.
Outline
- Project Overview
- Key Features
- Setup
- Run Autoshow Node Scripts
- Project Structure
- Contributors
Project Overview
Autoshow automates the processing of audio and video content from various sources, including YouTube videos, playlists, podcast RSS feeds, and local media files. It performs transcription, summarization, and chapter generation using a choice of large language models (LLMs) and transcription services.
The Autoshow workflow includes the following steps (see the sketch after this list):
- The user provides input (video URL, playlist, RSS feed, or local file).
- The system downloads the audio (if necessary).
- Transcription is performed using the selected service.
- A customizable prompt containing instructions for the contents of the show notes is inserted.
- The transcript is processed by the chosen LLM to generate show notes based on the selected prompts.
- Results are saved in markdown format with front matter.
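A minimal sketch of that pipeline in TypeScript, using hypothetical stand-in functions for each step (the names are illustrative placeholders, not Autoshow's actual module exports):

```ts
import { writeFile } from 'node:fs/promises'

// Hypothetical stand-ins for the five process steps; the real
// implementations live in src/process-steps.
const generateMarkdown = async (url: string) => `---\nsource: ${url}\n---\n`
const downloadAudio = async (url: string) => 'content/audio.wav'
const runTranscription = async (audioPath: string) => '...transcript text...'
const selectPrompt = (sections: string[]) => `Generate ${sections.join(', ')} for this transcript:\n`
const runLLM = async (input: string) => '...generated show notes...'

// The workflow in order: metadata -> audio -> transcript -> prompt -> LLM -> markdown file.
async function processVideo(url: string): Promise<void> {
  const frontMatter = await generateMarkdown(url)                // Step 1
  const audioPath = await downloadAudio(url)                     // Step 2
  const transcript = await runTranscription(audioPath)           // Step 3
  const prompt = selectPrompt(['titles', 'summary', 'chapters']) // Step 4
  const showNotes = await runLLM(prompt + transcript)            // Step 5
  await writeFile('content/show-notes.md', frontMatter + showNotes)
}

processVideo('https://www.youtube.com/watch?v=MORMZXEaONk')
```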
Key Features
- Support for multiple input types (YouTube links, RSS feeds, local video and audio files)
- Integration with various:
- LLMs (ChatGPT, Claude, Gemini, Cohere, Mistral, Fireworks, Together, Groq)
- Transcription services (Whisper.cpp, Deepgram, AssemblyAI)
- Local LLM support with Ollama
- Customizable prompts for generating titles, summaries, chapter titles/descriptions, key takeaways, and questions to test comprehension
- Markdown output with metadata and formatted content
- Command-line interface for easy usage
- WIP: Node.js server and React frontend
Setup
scripts/setup.sh checks that a .env file exists, Node dependencies are installed, and the whisper.cpp repository is cloned and built. Run it with the setup script defined in package.json:
npm run setup
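Roughly, the script performs checks like the following, shown here as a Node sketch (the actual scripts/setup.sh is a shell script, and its exact steps and remediation commands may differ):

```ts
import { existsSync } from 'node:fs'

// Approximate checks performed by scripts/setup.sh (a sketch, not the real
// script). The remediation hints are assumptions about typical setup steps.
const checks: Array<[path: string, remedy: string]> = [
  ['.env', 'create a .env file with your API keys'],
  ['node_modules', 'run npm install'],
  ['whisper.cpp', 'clone and build https://github.com/ggerganov/whisper.cpp'],
]

for (const [path, remedy] of checks) {
  console.log(existsSync(path) ? `found ${path}` : `missing ${path}: ${remedy}`)
}
```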
Run Autoshow Node Scripts
Run on a single YouTube video.
npm run as -- --video "https://www.youtube.com/watch?v=MORMZXEaONk"
Run on a YouTube playlist.
npm run as -- --playlist "https://www.youtube.com/playlist?list=PLCVnrVv4KhXPz0SoAVu8Rc1emAdGPbSbr"
Run on a list of arbitrary URLs.
npm run as -- --urls "content/example-urls.md"
Run on a local audio or video file.
npm run as -- --file "content/audio.mp3"
Run on a podcast RSS feed.
npm run as -- --rss "https://ajcwebdev.substack.com/feed"
Use a local LLM with Ollama.
npm run as -- --video "https://www.youtube.com/watch?v=MORMZXEaONk" --ollama
Use third-party LLM providers.
npm run as -- --video "https://www.youtube.com/watch?v=MORMZXEaONk" --chatgpt GPT_4o_MINI
npm run as -- --video "https://www.youtube.com/watch?v=MORMZXEaONk" --claude CLAUDE_3_5_SONNET
npm run as -- --video "https://www.youtube.com/watch?v=MORMZXEaONk" --gemini GEMINI_1_5_PRO
npm run as -- --video "https://www.youtube.com/watch?v=MORMZXEaONk" --cohere COMMAND_R_PLUS
npm run as -- --video "https://www.youtube.com/watch?v=MORMZXEaONk" --mistral MISTRAL_LARGE
npm run as -- --video "https://www.youtube.com/watch?v=MORMZXEaONk" --fireworks
npm run as -- --video "https://www.youtube.com/watch?v=MORMZXEaONk" --together
npm run as -- --video "https://www.youtube.com/watch?v=MORMZXEaONk" --groq
Example commands for all available CLI options can be found in docs/examples.md.
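These flags follow Commander.js's standard option pattern. Here is a minimal, illustrative sketch of how such a CLI can be declared (abbreviated; the project's real definitions live in src/cli/commander.ts):

```ts
import { Command } from 'commander'

// Abbreviated, illustrative option declarations; see src/cli/commander.ts
// for Autoshow's actual CLI definition.
const program = new Command()

program
  .name('autoshow')
  .option('--video <url>', 'process a single YouTube video')
  .option('--playlist <url>', 'process all videos in a YouTube playlist')
  .option('--urls <filePath>', 'process a list of URLs from a file')
  .option('--file <filePath>', 'process a local audio or video file')
  .option('--rss <rssUrl>', 'process a podcast RSS feed')
  .option('--ollama [model]', 'use a local LLM via Ollama')
  .option('--chatgpt [model]', 'use an OpenAI GPT model')

program.parse()
console.log(program.opts()) // e.g. { video: 'https://...' }
```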
Project Structure
- Main Entry Points (src/cli)
  - commander.ts: Defines the command-line interface using Commander
- Process Commands (src/process-commands)
  - file.ts: Handles local audio/video file processing
  - video.ts: Handles single YouTube video processing
  - urls.ts: Processes videos from a list of URLs in a file
  - playlist.ts: Processes all videos in a YouTube playlist
  - channel.ts: Processes all videos from a YouTube channel
  - rss.ts: Processes podcast RSS feeds
- Process Steps (src/process-steps)
  - Step 1 - generate-markdown.ts: Creates the initial markdown file with metadata
  - Step 2 - download-audio.ts: Downloads audio from YouTube videos
  - Step 3 - run-transcription.ts: Manages the transcription process
  - Step 4 - select-prompt.ts: Defines the prompt structure for summarization and chapter generation
  - Step 5 - run-llm.ts: Handles LLM processing for selected prompts
- Transcription Services (src/transcription)
  - whisper.ts: Uses Whisper.cpp for transcription
  - deepgram.ts: Integrates the Deepgram transcription service
  - assembly.ts: Integrates the AssemblyAI transcription service
- Language Models (src/llms)
  - ollama.ts: Integrates Ollama's locally available models
  - chatgpt.ts: Integrates OpenAI's GPT models
  - claude.ts: Integrates Anthropic's Claude models
  - gemini.ts: Integrates Google's Gemini models
  - cohere.ts: Integrates Cohere's language models
  - mistral.ts: Integrates Mistral AI's language models
  - fireworks.ts: Integrates Fireworks' open source models
  - together.ts: Integrates Together's open source models
  - groq.ts: Integrates Groq's open source models
- Utility Files (src/utils)
  - logging.ts: Reusable Chalk functions for logging colors
  - validate-option.ts: Functions for validating CLI options and handling errors
  - format-transcript.ts: Transcript formatting functions
  - globals.ts: Globally defined variables and constants
- Types (src/types)
  - process.ts: Types for commander.ts and files in the process-commands directory
  - llms.ts: Types for the run-llm.ts process step and files in the llms directory
  - transcription.ts: Types for the run-transcription.ts process step and files in the transcription directory
- Server (src/server) (a hypothetical request sketch follows this list)
  - index.ts: Initializes the Fastify server with CORS support and defines API endpoints
  - db.ts: Sets up the SQLite database connection and schema for storing show notes
  - API Routes (src/server/routes)
    - process.ts: Handles different types of media processing requests (video, playlist, RSS, etc.)
    - show-note.ts: Retrieves individual show notes from the database by ID
    - show-notes.ts: Fetches all show notes from the database, ordered by date
  - Server Utilities (src/server/utils)
    - req-to-opts.ts: Maps API request data to processing options for LLM and transcription services
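As a rough sketch of how a client might call the WIP server: the route path, port, and request body shape below are assumptions inferred from the file names in src/server, not a documented API.

```ts
// Hypothetical client call; the route path, port, and body shape are
// inferred from file names in src/server and may not match the real API.
async function requestShowNotes(videoUrl: string) {
  const res = await fetch('http://localhost:3000/process', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ type: 'video', url: videoUrl }),
  })
  return res.json()
}

requestShowNotes('https://www.youtube.com/watch?v=MORMZXEaONk').then(console.log)
```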
Contributors
- ✨ Hello beautiful human! ✨ Jenn Junod, host of Teach Jenn Tech