quantumalchemy

Results 16 comments of quantumalchemy

Anyone find a solution to this? Firefox: Security Error: Content at http://localhost/stl.html may not load data from https://cdn.jsdelivr.net/npm/@stlite/[email protected]/build/static/js/6073.ade8ee62.chunk.js. It only works in Chrome, and only with the file served from an online server or...
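One thing worth trying (an assumption on my part, not verified against stlite specifically): serve stl.html over plain HTTP with Python's built-in server instead of opening it straight from disk, since Firefox applies stricter cross-origin rules to locally opened pages:

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

def make_server(port: int = 8000) -> HTTPServer:
    # Serve the current directory (including stl.html) on localhost so the
    # page is loaded over HTTP rather than file://. Call .serve_forever()
    # on the returned server, then browse to http://localhost:8000/stl.html
    return HTTPServer(("localhost", port), SimpleHTTPRequestHandler)
```

No idea if it fixes the jsdelivr chunk error in every setup, but it rules out the file:// case.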

Hi, the best place to start is the main site / links to the docs: https://memgpt.ai/ This project also has a great Discord: https://discord.gg/9GEQrxmVyE

Actually, just tested with crewAI using gpt-4-turbo (OpenAI) and it worked OK, just a little slow. Use max_iter=4, max_rpm=5 (max API calls per minute).
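For what max_rpm means in practice, here's a rough illustrative sketch (not crewAI's actual implementation, just the idea): block so that at most N calls go out in any rolling 60-second window:

```python
import time
from collections import deque

class RpmLimiter:
    """Allow at most max_rpm calls in any rolling 60-second window."""

    def __init__(self, max_rpm: int):
        self.max_rpm = max_rpm
        self.calls = deque()  # monotonic timestamps of recent calls

    def wait(self) -> float:
        """Block until a call is permitted; return seconds slept."""
        now = time.monotonic()
        # Drop timestamps older than the 60-second window.
        while self.calls and now - self.calls[0] >= 60:
            self.calls.popleft()
        slept = 0.0
        if len(self.calls) >= self.max_rpm:
            slept = 60 - (now - self.calls[0])
            time.sleep(slept)
        self.calls.append(time.monotonic())
        return slept
```

With max_rpm=5, the first five calls in a minute go through immediately and the sixth waits, which is why runs feel slow but stop tripping rate limits.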

Update - tried on Docker with node:10.16.0-alpine on AWS; it downloads Chrome but errors out -> Downloading precompiled headless Chromium binary (stable channel) for AWS Lambda. Completed Headless Chromium download. npm WARN...

Yeah, I'm trying to do this in Docker. Tried on my local network and on an AWS EC2 instance and I'm getting: Downloading precompiled headless Chromium binary (stable channel) for AWS Lambda....

re: [voidxd] (thanks man!) Yep, just to verify: !python ingest.py -- OK (took over 2.5 hours). !python privateGPT.py crapped out after the prompt -- output --> llama.cpp: loading model from models/ggml-model-q4_0.bin...

Yeah, always want to see what you can get away with for free with this bleeding-edge stuff... so we all 'need bigger boats'. To legitimately run on consumer-grade...

Yeah, I ran the default model_path_llama=WizardLM-7B-uncensored.ggmlv3.q8_0.bin. Not getting an answer... will try again. Thanks

The UI... I will try on a GPU tomorrow, thanks

OK, using the CLI (chat) it worked with ggml-gpt4all-j-v1.3-groovy.bin, but not with the llama model WizardLM-7B-uncensored.ggmlv3.q8_0.bin. Tried --> (q4) Wizard-Vicuna-7B-Uncensored.ggmlv3.q4_1.bin in chat only and that worked, but it won't work in the GUI when I created...