agent-studio
An open toolkit for building and benchmarking general virtual agents in the wild
Would love to see examples for using AgentStudio with some of the common models for autonomous web navigation ([Pix2Act](https://github.com/google-deepmind/pix2act), [MindAct](https://osu-nlp-group.github.io/Mind2Web/), [SeeAct](https://osu-nlp-group.github.io/SeeAct/)) and conversational webnav ([WebLINX models](https://huggingface.co/collections/McGill-NLP/weblinx-models-65c57d4afeeb282d1dcf8434) such as [SLLaMA-WL](https://huggingface.co/McGill-NLP/Sheared-LLaMA-2.7B-weblinx)).
Operating systems initially provide only a text-based terminal interface, before a GUI appears. The terminal is less resource-intensive and more lightweight, so it scales more easily than a GUI. Besides, most LLMs are text-only....
tasks: data/grounding/linux/os/tasks.jsonl

```
2024-07-09 14:28:30,722 ERROR run.py:294 -- [Unhandled Error] ValueError('No prompt added')
2024-07-09 14:28:30,723 ERROR run.py:295 -- Traceback (most recent call last):
  File "/home/zsf/codebase/agent-studio/run.py", line 217, in eval_headless
    agent.reset(
  File "/home/zsf/codebase/agent-studio/agent_studio/agent/direct_agent.py",...
```
Where is a template for tasks.jsonl? The config currently points at:

```python
task_config_paths: dict = {
    # "desktop": "data/tasks/filesystem.jsonl",
    "desktop": "data/grounding/linux/os/tasks.jsonl",
}
```
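For context, a `.jsonl` file is JSON Lines: one JSON object per line. The exact task schema agent-studio expects is not shown here, but a minimal loader sketch (field names in the example record are hypothetical, not taken from the repo) looks like this:

```python
import json
from pathlib import Path


def load_tasks(path: str) -> list[dict]:
    """Read a JSON Lines file: one task object per non-empty line."""
    tasks = []
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        if line.strip():  # skip blank lines
            tasks.append(json.loads(line))
    return tasks
```

Each line must be a complete, self-contained JSON object (e.g. `{"task_id": "t1", "instruction": "..."}`); a multi-line pretty-printed JSON array will not parse as JSONL.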