Gabriele Venturi
@avelino totally, but it's gonna be harder than expected. Couldn't find any naive approach that fixes all the use cases. @VictorGSoutoXP's idea of running it in an isolated environment makes...
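For reference, a minimal sketch of what running the generated code in an isolated environment could look like, just to ground the discussion (the `run_sandboxed` helper is hypothetical and not part of pandas-ai; the child process could be locked down further with resource limits or no network, depending on the OS):

```python
import subprocess
import sys

def run_sandboxed(code: str, timeout: int = 10) -> str:
    """Run LLM-generated code in a separate Python process.

    Hypothetical helper for illustration only: isolating execution in a
    subprocess keeps a crash or infinite loop from taking down the caller.
    """
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True,
        text=True,
        timeout=timeout,  # kill runaway or malicious loops
    )
    if result.returncode != 0:
        raise RuntimeError(result.stderr)
    return result.stdout
```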
@Lorenzobattistela that's the idea. As of now I'm already kicking out imports, so it's more a matter of finding a list of possibly malicious code patterns and, if detected in the code...
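To make that concrete, here's a rough sketch of the "kick out imports, then scan for dangerous calls" idea using the `ast` module (the `sanitize` helper and the `BLOCKED_CALLS` denylist are assumptions for illustration, not the actual pandas-ai implementation; the real list of patterns is exactly what still needs to be enumerated):

```python
import ast

# Hypothetical denylist: the real set of dangerous calls is what this
# thread is trying to pin down.
BLOCKED_CALLS = {"eval", "exec", "open", "__import__", "compile"}

def sanitize(code: str) -> str:
    """Drop import statements and reject calls to blocked builtins."""
    tree = ast.parse(code)
    # Kick out imports entirely, as described in the comment above.
    tree.body = [
        node for node in tree.body
        if not isinstance(node, (ast.Import, ast.ImportFrom))
    ]
    # Reject any call to a blocked builtin anywhere in the remaining code.
    for node in ast.walk(tree):
        if (
            isinstance(node, ast.Call)
            and isinstance(node.func, ast.Name)
            and node.func.id in BLOCKED_CALLS
        ):
            raise ValueError(f"Blocked call detected: {node.func.id}")
    return ast.unparse(tree)
```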
Yes @VictorGSoutoXP, I love your approach; I think we should try to integrate it this way. Do you want to work on this yourself?
This should fix some of the security problems: https://github.com/gventuri/pandas-ai/commit/e3d7d1dc259918565c0db08d535d8fd28fa7a465. I'll write down some other edge cases that we still need to cover. Feel free to comment on any potential issues...
@VictorGSoutoXP thanks a lot for the contribution and the knowledge sharing. @avelino I'll close the issue as the major concerns have been addressed. We'll keep improving on this, as @TSampley...
Sounds great! I've assigned the issue to you. Let's make sure it doesn't break other LLMs!
As far as I can see, the current implementation is quite solid. I'm closing this, but feel free to reopen if you make any improvements on it, @dSupertramp
@Ink6220 sounds like a very important feature to have. We'll add it ASAP, thanks for suggesting it!
Fixed by https://github.com/gventuri/pandas-ai/commit/48ee6758682a135e2f2ed117d3c1de9712dd2f7a, closing
Hey @prtolem, can you try instantiating PandasAI with verbose=True (pandas_ai = PandasAI(llm, verbose=True)) and attach the log?
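Something along these lines should do it (a minimal sketch; the exact import paths and the OpenAI wrapper constructor may differ depending on your pandas-ai version, and the data and API key here are placeholders):

```python
import pandas as pd
from pandasai import PandasAI
from pandasai.llm.openai import OpenAI  # import path may vary by version

# Placeholder data, just to reproduce the issue with something small.
df = pd.DataFrame({"country": ["Spain", "Italy"], "gdp": [1.4, 2.1]})

llm = OpenAI(api_token="YOUR_API_KEY")  # placeholder key
pandas_ai = PandasAI(llm, verbose=True)  # verbose=True prints the generated code and logs
pandas_ai(df, prompt="Which country has the highest gdp?")
```

Then paste the console output here so we can see what code is being generated.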