Add more information about the safety of running OpenDevin to the README
OpenDevin is an agent, and I am quite worried about whether it is safe to run. For example, it could steal sensitive information, passwords, etc.
How do you guarantee it is safe to run on a local machine?
Don't send sensitive information to it and you won't have cause to worry. The same goes for ChatGPT, Gemini, or any other model.
You can use a local llama server.
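For example (just a sketch; I'm assuming Ollama here, and the `LLM_BASE_URL`/`LLM_MODEL` variable names are placeholders, so check the docs for what OpenDevin actually reads), you can serve a model on localhost so no prompt ever leaves your machine:

```bash
# Start a local, OpenAI-compatible model server (Ollama).
# Nothing leaves your machine.
ollama serve &
ollama pull codellama

# Sanity-check the local endpoint before pointing the agent at it.
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "codellama", "messages": [{"role": "user", "content": "hello"}]}'

# Hypothetical variable names: check what OpenDevin actually reads.
export LLM_BASE_URL="http://localhost:11434/v1"
export LLM_MODEL="codellama"
```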
I am not worried about ChatGPT or Gemini; those are established companies.
I am more concerned about the OpenDevin code itself.
Well, you can check the code yourself... I am reviewing it to extend it and help with the project, and it seems clean... nothing weird so far :smile:
> I am not worried about ChatGPT or Gemini; those are established companies.
> I am more concerned about the OpenDevin code itself.
Please don't post sensitive information to any model. There is absolutely no good reason to do so.
@code2graph there is most likely a risk, since the model is capable of creating and executing its own code. It really depends on whether you point the model in a direction that is a problem; for example, giving it System32 as its workspace would be a really bad idea. It is most likely safe to run if you don't give it sensitive files, as @kjenney said, and you don't try to make it do something malicious. With LLMs there is always a risk that they hallucinate something and go off the rails, so if your computer holds really important information, you should look into using a VM or something similar to mitigate the risk.
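If you go the container route, here is a minimal sketch of that kind of isolation (the image name is a placeholder for whatever you actually run; the flags themselves are plain Docker):

```bash
# Give the agent only a throwaway folder and capped resources, so at
# worst it trashes that one directory. Image name is a placeholder.
mkdir -p /tmp/agent-workspace

docker run --rm -it \
  -v /tmp/agent-workspace:/workspace \
  --memory 4g \
  --cpus 2 \
  --pids-limit 256 \
  my-agent-image
# -v              : the agent can only read/write this scratch folder
# --memory/--cpus : cap resources if it goes off the rails
# --pids-limit    : blunts a runaway process or fork bomb
```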
We should add more information to the README about the safety measures that are in place in OpenDevin (the Docker sandbox, etc.) and why those are still not perfect.
> I am not worried about ChatGPT or Gemini; those are established companies.
Yes, but you actually don't know what they do.
> I am more concerned about the OpenDevin code itself. For example, it could steal sensitive information, passwords, etc.
All of the code is open source, so you don't need to worry that it will steal sensitive information on purpose; you can check the code logic yourself.
> there is most likely a risk since the model is capable of creating and executing its own code
Agreed, and that is one of the reasons we run it in a Docker sandbox. There is still room for improvement, and we welcome any suggestions.
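One suggestion, purely as a sketch (these are standard Docker hardening flags, not how the sandbox is configured today): drop capabilities and make the sandbox filesystem read-only by default:

```bash
# Hypothetical hardening for the sandbox container. A suggestion only,
# not the current configuration.
docker run --rm -it \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  --read-only \
  --tmpfs /tmp \
  -v "$PWD/workspace:/workspace" \
  sandbox-image
# --cap-drop=ALL        : no Linux capabilities inside the container
# no-new-privileges     : setuid binaries can't escalate privileges
# --read-only + --tmpfs : everything immutable except /tmp and /workspace
```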