James Brown

Results: 154 comments by James Brown

> Web pages are made to be plugged into. Captchas are not something any code would be easily plugged into. Without the capability of solving arbitrary captchas, you are not...

This architecture, designed by me, can handle tasks like using visual text editors (`vim`, `nano`, `gedit`). Is there any non-visual model that can do the same? I expect none.
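
Roughly the loop I have in mind, as a hedged sketch: `pyautogui` handles the screen capture and keyboard/mouse I/O, while `VisionActionModel` is a hypothetical stand-in for the actual model, not code from the Cybergod repository.

```python
# Hedged sketch of an observe -> act loop for a vision-based computer-control agent.
# `VisionActionModel` is a hypothetical placeholder for the real model.
import time
import pyautogui  # real library: screenshots plus keyboard/mouse control


class VisionActionModel:
    """Hypothetical model mapping a screenshot to one low-level GUI action."""

    def predict(self, screenshot):
        # Would return e.g. {"type": "key", "keys": ["i"]}
        # or {"type": "click", "x": 100, "y": 200}.
        raise NotImplementedError


def run_agent(model, steps=100, delay=0.5):
    for _ in range(steps):
        frame = pyautogui.screenshot()      # observe the full screen
        action = model.predict(frame)       # decide the next GUI action
        if action["type"] == "key":
            for key in action["keys"]:
                pyautogui.press(key)        # e.g. drive vim keystroke by keystroke
        elif action["type"] == "click":
            pyautogui.click(action["x"], action["y"])
        time.sleep(delay)                   # give the UI time to redraw
```

A text-only model never sees the redrawn screen, which is exactly the feedback this loop depends on.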

For GPT-like models it is important to do pretraining, i.e. training autoregressively on a large amount of data. Activity data such as computer usage must be collected from human users, and random...
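
A minimal sketch of what autoregressive pretraining on tokenized activity traces looks like in PyTorch; the vocabulary size, model size, and the idea of encoding keystroke/mouse events as token ids are illustrative assumptions, not the actual pipeline.

```python
# Hedged sketch: next-token (autoregressive) pretraining on activity-event tokens.
import torch
import torch.nn as nn

vocab_size, d_model, seq_len = 1024, 256, 128  # illustrative sizes

embed = nn.Embedding(vocab_size, d_model)
model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True),
    num_layers=4,
)
head = nn.Linear(d_model, vocab_size)
optimizer = torch.optim.AdamW(
    list(embed.parameters()) + list(model.parameters()) + list(head.parameters()),
    lr=3e-4,
)


def train_step(tokens):
    """tokens: (batch, seq_len) int64 ids for recorded keyboard/mouse events."""
    inputs, targets = tokens[:, :-1], tokens[:, 1:]          # predict the next event
    causal_mask = nn.Transformer.generate_square_subsequent_mask(inputs.size(1))
    hidden = model(embed(inputs), mask=causal_mask)           # causal self-attention
    loss = nn.functional.cross_entropy(
        head(hidden).reshape(-1, vocab_size), targets.reshape(-1)
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```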

I am not repeating myself. You underestimate the difficulty of solving any CAPTCHA. I mean, for some "unseen" CAPTCHA, there's no way for either a plugin or a preset to solve it....

In the form of new model architectures, if you don't mind. More problems will be found. It is rooted in the model, not in plugins. Closing this one won't help till...

To anyone arguing against my CAPTCHA challenge: I would like to hear "challenge accepted" rather than "do it yourself" or "your approach doesn't work". It works in my mind,...

Logic graph:

![cybergod_logic_graph](https://github.com/Significant-Gravitas/Auto-GPT/assets/103997068/3af2ba55-f249-41af-9515-6c8c6347956c)

Roadmap:

1. Name my model as [Cybergod](https://github.com/james4ever0/agi_computer_control), my dataset as The Frozen Forest.
2. Design logos for my model and my dataset.
3. Create and upload my...

@abhiprojectz Watch [this video](https://vimeo.com/829663780?share=copy) to understand the difference. There's no way for you to do the same without heavy modification.

https://github.com/Significant-Gravitas/Auto-GPT/assets/103997068/8e1cd6fe-c49d-4d2b-835d-0ffc9a5a458e

For anyone interested in [this project](https://github.com/james4ever0/agi_computer_control), please [join](https://discord.gg/eM5vezJvEQ)...

I think I was able to change the code in `LLMEngine` instead of `AsyncLLMEngine`, but since most concurrency issues happen in the APIs and `AsyncLLMEngine` is what the APIs use, this pull...
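
For context, a minimal sketch of how concurrent requests flow through `AsyncLLMEngine` on the API path, which is why a fix in `LLMEngine` alone may not cover the concurrent case. This assumes a vLLM 0.3.x-style API; the model name and prompts are placeholders.

```python
# Hedged sketch: several requests in flight at once through AsyncLLMEngine,
# roughly what the OpenAI-compatible API server does internally.
import asyncio

from vllm import SamplingParams
from vllm.engine.arg_utils import AsyncEngineArgs
from vllm.engine.async_llm_engine import AsyncLLMEngine

engine = AsyncLLMEngine.from_engine_args(AsyncEngineArgs(model="facebook/opt-125m"))


async def complete(prompt: str, request_id: str) -> str:
    params = SamplingParams(max_tokens=32)
    final = None
    # generate() is an async generator that streams partial RequestOutputs.
    async for output in engine.generate(prompt, params, request_id):
        final = output
    return final.outputs[0].text


async def main():
    results = await asyncio.gather(
        *(complete(f"Prompt {i}", request_id=str(i)) for i in range(8))
    )
    print(results)


asyncio.run(main())
```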

At the time of testing, this modification to vLLM version 0.3.2 has no effect on the issue in my environment. My modification is applied to the file `vllm/entrypoints/openai/serving_completion.py`....
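
A quick sketch of one way to hit the OpenAI-compatible endpoint with concurrent requests while checking whether the issue reproduces; the URL, model name, and prompt are placeholders for a local setup, not part of the patch itself.

```python
# Hedged sketch: fire concurrent completion requests at a local vLLM
# OpenAI-compatible server to exercise the concurrent code path.
import concurrent.futures

import requests

URL = "http://localhost:8000/v1/completions"  # default vLLM API server address


def one_request(i):
    resp = requests.post(
        URL,
        json={
            "model": "facebook/opt-125m",  # whichever model the server was started with
            "prompt": f"Request {i}: hello",
            "max_tokens": 32,
        },
        timeout=60,
    )
    return resp.status_code, resp.json()["choices"][0]["text"]


with concurrent.futures.ThreadPoolExecutor(max_workers=16) as pool:
    for status, text in pool.map(one_request, range(64)):
        print(status, text[:40])
```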