mini-agi icon indicating copy to clipboard operation
mini-agi copied to clipboard

Keep hitting token limit

Open remriel opened this issue 1 year ago • 4 comments

Is there a way to handle this? The program keeps stopping because it exceeds token limit

PS C:\Users\Remy\mini-agi> python miniagi.py "Scrape the comments of the top posts in /r/Midjourney on Reddit and output to a text file"
Working directory is G:\miniagi
←[36mMiniAGI: Scrape the comments of the top posts in /r/Midjourney on Reddit and output to a text file.
Cmd: web_scrape, Arg: "https://www.reddit.com/r/Midjourney/top/"←[0m
←[35mCritic: The web_scrape command should specify the element to be scraped, such as comments or titles. Also, the output file should be specified in the command.←[0m
←[36mMiniAGI: Scrape the comments of the top posts in /r/Midjourney on Reddit and output to a text file.
Cmd: web_scrape, Arg: "comments\nhttps://www.reddit.com/r/Midjourney/top/\noutput_file: m..."←[0m
←[31mExecution error: <urlopen error unknown url type: comments
https>←[0m
←[36mMiniAGI: Scrape the comments of the top 5 posts in /r/Midjourney on Reddit and output to a text file.
Cmd: web_scrape, Arg: "comments\nhttps://www.reddit.com/r/Midjourney/top/\noutput_file: t..."←[0m
←[31mExecution error: <urlopen error unknown url type: comments
https>←[0m
←[36mMiniAGI: Check if the required Python packages for web scraping are installed.
Cmd: execute_shell, Arg: "pip freeze | grep beautifulsoup4 && pip freeze | grep requests"←[0m
←[35mCritic: Command should be execute_python instead of execute_shell, and use the command "import beautifulsoup4, requests" instead of pip freeze to check if packages are installed.←[0m
←[36mMiniAGI: Check if the required Python packages for web scraping are installed.
Cmd: execute_python, Arg: "# Import required packages\nimport beautifulsoup4\nimport requests..."←[0m
←[35mCritic: The package name should be "bs4" instead of "beautifulsoup4". Fix this error.←[0m
←[36mMiniAGI: Check if the Reddit API credentials are set up.
Cmd: execute_python, Arg: "# Import required package\nimport praw\n# Test if API credentials ..."←[0m
←[36mMiniAGI: Scrape the comments of the top 10 posts in /r/Midjourney on Reddit and output to a text file.
Cmd: web_scrape, Arg: "comments\nhttps://www.reddit.com/r/Midjourney/top/\noutput_file: t..."←[0m
←[35mCritic: The command is missing authentication credentials for accessing the Reddit API. Add the necessary credentials to the command.←[0m
←[36mMiniAGI: Scrape the comments of the top 20 posts in /r/Midjourney on Reddit and output to a text file.
Cmd: web_scrape, Arg: "comments\nhttps://www.reddit.com/r/Midjourney/top/\noutput_file: t..."←[0m
←[35mCritic: The command should use the praw package and its Reddit API wrapper to scrape the comments, rather than web scraping. Also, the authentication information should be stored securely and not included in the command itself.←[0m
←[36mMiniAGI: Scrape the comments of the top 15 posts in /r/Midjourney on Reddit and output to a text file using praw.
Cmd: web_scrape, Arg: "subreddit = reddit.subreddit("Midjourney")\ntop_posts = subreddit..."←[0m
←[31mExecution error: <urlopen error unknown url type: subreddit = reddit.subreddit("midjourney")
top_posts = subreddit.top(limit=15)
comments = []
for post in top_posts>←[0m
←[36mMiniAGI: Scrape the titles of the top 15 posts in /r/Midjourney on Reddit and output to a text file.
Cmd: web_scrape, Arg: "titles\nhttps://www.reddit.com/r/Midjourney/top/\noutput_file: top..."←[0m
←[35mCritic: The command should specify to scrape comments instead of titles. Please change "titles" to "comments" in the command.←[0m
←[36mMiniAGI: Scrape the comments of the top 25 posts in /r/Midjourney on Reddit and output to a text file using praw.
Cmd: web_scrape, Arg: "subreddit = reddit.subreddit("Midjourney")\ntop_posts = subreddit..."←[0m
←[35mCritic: The agent should check if the Reddit API credentials are set up before executing the web_scrape command. Add this check before executing the command.←[0m
←[36mMiniAGI: Check if the Reddit API credentials are set up before executing the web_scrape command.
Cmd: execute_python, Arg: "# Import required package\nimport praw\n# Test if API credentials ..."←[0m
←[36mMiniAGI: Scrape the comments of the top 30 posts in /r/Midjourney on Reddit and output to a text file using praw.
Cmd: web_scrape, Arg: "subreddit = reddit.subreddit("Midjourney")\ntop_posts = subreddit..."←[0m
←[31mExecution error: <urlopen error unknown url type: subreddit = reddit.subreddit("midjourney")
top_posts = subreddit.top(limit=30)
comments = []
for post in top_posts>←[0m
←[36mMiniAGI: Check if the Reddit API credentials are set up before executing the web_scrape command.
Cmd: execute_python, Arg: "# Import required package\nimport praw\n# Test if API credentials ..."←[0m
The agent repeated a previous command. Retrying...
←[36mMiniAGI: Scrape the comments of the top 40 posts in /r/Midjourney on Reddit and output to a text file using praw.
Cmd: web_scrape, Arg: "subreddit = reddit.subreddit("Midjourney")\ntop_posts = subreddit..."←[0m
←[31mExecution error: <urlopen error unknown url type: subreddit = reddit.subreddit("midjourney")
top_posts = subreddit.top(limit=40)
comments = []
for post in top_posts>←[0m
←[36mMiniAGI: Scrape the comments of the top 50 posts in /r/Midjourney on Reddit and output to a text file using praw.
Cmd: web_scrape, Arg: "subreddit = reddit.subreddit("Midjourney")\ntop_posts = subreddit..."←[0m
←[31mExecution error: <urlopen error unknown url type: subreddit = reddit.subreddit("midjourney")
top_posts = subreddit.top(limit=50)
comments = []
for post in top_posts>←[0m
←[36mMiniAGI: Scrape the comments of the top 60 posts in /r/Midjourney on Reddit and output to a text file using praw.
Cmd: web_scrape, Arg: "subreddit = reddit.subreddit("Midjourney")\ntop_posts = subreddit..."←[0m
←[31mExecution error: <urlopen error unknown url type: subreddit = reddit.subreddit("midjourney")
top_posts = subreddit.top(limit=60)
comments = []
for post in top_posts>←[0m
←[36mMiniAGI: Check if the Reddit API credentials are set up before executing the web_scrape command.
Cmd: execute_python, Arg: "# Import required package\nimport praw\n# Test if API credentials ..."←[0m
The agent repeated a previous command. Retrying...
←[36mMiniAGI: Scrape the comments of the top 75 posts in /r/Midjourney on Reddit and output to a text file using praw.
Cmd: web_scrape, Arg: "subreddit = reddit.subreddit("Midjourney")\ntop_posts = subreddit..."←[0m
←[31mExecution error: <urlopen error unknown url type: subreddit = reddit.subreddit("midjourney")
top_posts = subreddit.top(limit=75)
comments = []
for post in top_posts>←[0m
←[36mMiniAGI: Check if the Reddit API credentials are set up before executing the web_scrape command.
Cmd: execute_python, Arg: "# Import required package\nimport praw\n# Test if API credentials ..."←[0m
The agent repeated a previous command. Retrying...
←[36mMiniAGI: Scrape the comments of the top 100 posts in /r/Midjourney on Reddit and output to a text file using praw.
Cmd: web_scrape, Arg: "subreddit = reddit.subreddit("Midjourney")\ntop_posts = subreddit..."←[0m
←[31mExecution error: <urlopen error unknown url type: subreddit = reddit.subreddit("midjourney")
top_posts = subreddit.top(limit=100)
comments = []
for post in top_posts>←[0m
←[36mMiniAGI: Check if the Reddit API credentials are set up before executing the web_scrape command.
Cmd: execute_python, Arg: "# Import required package\nimport praw\n# Test if API credentials ..."←[0m
The agent repeated a previous command. Retrying...
←[36mMiniAGI: Scrape the comments of the top 150 posts in /r/Midjourney on Reddit and output to a text file using praw.
Cmd: web_scrape, Arg: "subreddit = reddit.subreddit("Midjourney")\ntop_posts = subreddit..."←[0m
←[31mExecution error: <urlopen error unknown url type: subreddit = reddit.subreddit("midjourney")
top_posts = subreddit.top(limit=150)
comments = []
for post in top_posts>←[0m
←[36mMiniAGI: Scrape the comments of the top 200 posts in /r/Midjourney on Reddit and output to a text file using praw.
Cmd: web_scrape, Arg: "subreddit = reddit.subreddit("Midjourney")\ntop_posts = subreddit..."←[0m
←[31mExecution error: <urlopen error unknown url type: subreddit = reddit.subreddit("midjourney")
top_posts = subreddit.top(limit=200)
comments = []
for post in top_posts>←[0m
←[36mMiniAGI: The agent should check if the Reddit API credentials are set up before executing the web_scrape command.
Cmd: execute_python, Arg: "# Import required package\nimport praw\n# Test if API credentials ..."←[0m
←[35mCritic: The agent should include a check for the existence of the output file before executing the web_scrape command.←[0m
←[36mMiniAGI: The agent should check if the output file exists before executing the web_scrape command.
Cmd: execute_shell, Arg: "test -f top_200_midjourney_comments.txt || touch top_200_midjour..."←[0m
←[35mCritic: The command can be improved by adding a condition to check if the file already exists before creating it.←[0m
←[36mMiniAGI: The agent should check if the Reddit API credentials are set up before executing the web_scrape command.
Cmd: execute_python, Arg: "# Import required package\nimport praw\n# Test if API credentials ..."←[0m
←[31mExecution error: unterminated string literal (detected at line 9) (<string>, line 9)←[0m
|Error accessing the OpenAI API: This model's maximum context length is 4097 tokens. However, your messages resulted in 4137 tokens. Please reduce the length of the messages.

remriel avatar May 01 '23 04:05 remriel

What model are you working with and what does your .env look like? Try setting MAX_CONTEXT_SIZE, MAX_MEMORY_ITEM_SIZE, and SUMMARIZER_CHUNK_SIZE to lower values.

muellerberndt avatar May 01 '23 04:05 muellerberndt

OPENAI_API_KEY= MODEL="gpt-3.5-turbo" SUMMARIZER_MODEL="gpt-3.5-turbo" ENABLE_CRITIC=true MAX_CRITIQUES=2 PROMPT_USER=false

MAX_CONTEXT_SIZE=3800 MAX_MEMORY_ITEM_SIZE=2000 SUMMARIZER_CHUNK_SIZE=3000

CLEAR_DB_ON_START=true WORK_DIR="G:\miniagi" DEBUG=false

remriel avatar May 01 '23 04:05 remriel

Try the values for gpt-3.5-turbo:

MAX_CONTEXT_SIZE=2000
MAX_MEMORY_ITEM_SIZE=1000
SUMMARIZER_CHUNK_SIZE=1500

muellerberndt avatar May 01 '23 04:05 muellerberndt

Hey @remriel, did it help?

muellerberndt avatar May 17 '23 08:05 muellerberndt

Yep thanks

I've replaced ChatGPT browsing plugin since they limited it to GPT-4 making it unusuable, with the miniagi and it's working great

remriel avatar May 17 '23 22:05 remriel