SecGPT - LlamaIndex Integration
Description
SecGPT is an LLM-based system that secures the execution of LLM apps via isolation. The key idea behind SecGPT is to isolate the execution of apps and to allow interaction between apps and the system only through well-defined interfaces with user permission. SecGPT can defend against multiple types of attacks, including app compromise, data stealing, inadvertent data exposure, and uncontrolled system alteration. We build SecGPT with LlamaIndex because it supports several LLMs and apps and can easily be extended to include additional ones. We implement SecGPT as a personal assistant chatbot that users communicate with via text messages.
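To illustrate the isolation idea described above, here is a minimal conceptual sketch (not the actual SecGPT implementation; all names are illustrative): apps never call each other directly, and every cross-app request is routed through a hub that first asks the user for permission.

```python
class IsolatedApp:
    """An app that can only be reached through the hub's interface."""

    def __init__(self, name: str):
        self.name = name

    def handle(self, request: str) -> str:
        return f"{self.name} handled: {request}"


class Hub:
    """Central mediator: apps interact only through this interface."""

    def __init__(self, ask_user):
        self.apps = {}
        self.ask_user = ask_user  # callback that grants or denies permission

    def register(self, app: IsolatedApp):
        self.apps[app.name] = app

    def route(self, sender: str, target: str, request: str) -> str:
        # Every cross-app call goes through a user-permission check.
        if not self.ask_user(f"Allow {sender} -> {target}: {request!r}?"):
            raise PermissionError(f"{sender} may not contact {target}")
        return self.apps[target].handle(request)


hub = Hub(ask_user=lambda prompt: True)  # auto-approve for the demo
hub.register(IsolatedApp("calendar"))
print(hub.route("assistant", "calendar", "list events"))
```

In the real system the "apps" are LLM apps running in isolated execution contexts; the sketch only shows the mediation pattern, where a denied permission blocks the interaction entirely.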
New Package?
Did I fill in the `tool.llamahub` section in the `pyproject.toml` and provide a detailed README.md for my new integration or package?
- [ ] Yes
- [x] No
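For reference, the `tool.llamahub` section in a LlamaIndex integration's `pyproject.toml` typically looks something like the following (a sketch; the import path and class/author names here are illustrative, not taken from this PR):

```toml
[tool.llamahub]
contains_example = false
import_path = "llama_index.packs.secgpt"

[tool.llamahub.class_authors]
SecGPTPack = "Yuhao-W"
```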
Version Bump?
Did I bump the version in the `pyproject.toml` file of the package I am updating? (Except for the `llama-index-core` package)
- [ ] Yes
- [x] No
Type of Change
Please delete options that are not relevant.
- [ ] Bug fix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] This change requires a documentation update
How Has This Been Tested?
Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration.
- [ ] Added new unit/integration tests
- [x] Added new notebook (that tests end-to-end)
- [ ] I stared at the code and made sure it makes sense
Suggested Checklist:
- [x] I have performed a self-review of my own code
- [x] I have commented my code, particularly in hard-to-understand areas
- [ ] I have made corresponding changes to the documentation
- [ ] I have added Google Colab support for the newly added notebooks.
- [x] My changes generate no new warnings
- [x] I have added tests that prove my fix is effective or that my feature works
- [ ] New and existing unit tests pass locally with my changes
- [ ] I ran `make format; make lint` to appease the lint gods
@Yuhao-W submitted a PR to your fork/main branch. It brings in the necessary pants build files to pass our checks.
@Yuhao-W looks like lint/fmt checks are failing. Can you please run `make lint` and `make format`, then commit and push?
@nerdai Thanks, Andrei. I just fixed this.
@nerdai Hi, Andrei. I see that some checks failed. Is there anything that needs to be changed?
Hey @Yuhao-W sorry for the troubles. I took a look at the logs and couldn't find anything. Tagging @logan-markewich who is quite good at figuring out this stuff when it seems like all is lost. lol
I think I figured out the errors and updated the package, mainly by including dependency information in the `pyproject.toml` file under our package path. I also set up a unit test environment and ran it locally; the unit tests passed on my end. @nerdai and/or @logan-markewich, I would appreciate it if you could review the changes!
thanks, let's run the checks and see what happens!
Thanks @andrei, it failed again :( this time because the `requirements.txt` was named `requirements.tx`. I have made a new commit. Not sure, but we may still see errors after; I would appreciate a deeper look if it fails. Thank you!
@logan-markewich we're still running into some errors here. Perhaps we need to add a dependency in pants? This is the error we're seeing in the tests:
```
llama-index-packs/llama-index-packs-secgpt/llama_index/packs/secgpt/sandbox.py:6: in <module>
    import tldextract
E   ModuleNotFoundError: No module named 'tldextract'
```
But `tldextract` is indeed included in the `pyproject.toml` as a dep for the project.
(CC @Yuhao-W)
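One way such an error can arise (a sketch under the assumption that the repo uses Pants's standard Python backend, not a confirmed diagnosis of this PR): Pants only sees third-party dependencies that are exposed through a requirements target, and it infers per-file deps from imports in `python_sources()` targets. The pack's BUILD files would then look roughly like:

```python
# llama-index-packs/llama-index-packs-secgpt/BUILD  (sketch)
poetry_requirements(
    name="poetry",  # exposes the deps declared in pyproject.toml to Pants
)

# llama-index-packs/llama-index-packs-secgpt/llama_index/packs/secgpt/BUILD
python_sources()  # Pants infers imports such as `tldextract` from these files
```

If the requirements target is missing, or a stray `requirements.txt` shadows the `pyproject.toml` deps, dependency inference cannot resolve the import and the tests fail with `ModuleNotFoundError`.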
@Yuhao-W got checks to pass 🥳. Needed to remove the `requirements.txt` file, as having deps listed in both `requirements.txt` and `pyproject.toml` was tripping up Pants.
@nerdai Thanks for the feedback on the PR! I will address your comments and get back to you with an update soon.
hey @Yuhao-W: how's this coming along?
Hi @nerdai. Thank you for checking in with me. I'm currently working on it and will be able to push an update over the weekend.
@Yuhao-W Thanks for this contribution! I'm really excited about this :)
I left some comments on your PR. As another blanket comment, I do think your pack would greatly improve if you included some doc/class strings throughout your code (i.e., quick descriptions of funcs/classes and their params/args).
Hi @nerdai. Thanks for your suggestions. I have addressed all your comments and included doc/class strings for all classes and functions.