rengine
feat: GPT4All, Open-source large language models that run locally on your CPU and nearly any GPU
Is there an existing feature or issue for this?
- [X] I have searched the existing issues
Expected feature
Hi.
Can you add support for GPT4All?
It is great for local testing (3GB - 8GB models) in any corporation that doesn't want to send data to the internet.
"GPT4All is an ecosystem to run powerful and customized large language models that work locally on consumer grade CPUs and any GPU. Note that your CPU needs to support [AVX or AVX2 instructions]"
https://gpt4all.io/ https://github.com/nomic-ai/gpt4all
Alternative solutions
No response
Anything else?
No response
👋 Hi @zinwelzl, Issues are only for reporting a bug or feature request. Please read the documentation before raising an issue: https://rengine.wiki. For very limited support, questions, and discussions, please join the reNgine Discord channel: https://discord.gg/azv6fzhNCE. Please include all the requested and relevant information when opening a bug report; improper reports will be closed without any response.
Woah I was unaware of this @zinwelzl :heart:
I will keep this in priority.
Also, if you are aware: are there any models that we can train our reNgine data on? Does GPT4All or anything else let me do that?
@zinwelzl
I think it can; if not this model, there are others. I'll let you know what I find.
Is it possible to train with new material? https://github.com/nomic-ai/gpt4all/issues/198
@yogeshojha, I am excited to offer my contributions to this issue. I have a couple of ideas that I believe could significantly enhance the project's functionality, particularly in the gpt.py script.
Proposed Enhancements:
- Fallback to Local Model: In scenarios where an OpenAI key is not defined or available, I propose implementing a fallback mechanism in gpt.py that automatically switches to a local model. This approach ensures continuity of service while catering to users without access to OpenAI's API.
- Integration of GPT4All: I plan to utilize the GPT4All Python bindings as the local model. My focus will be on seamlessly integrating this without disrupting the current usage patterns of the GPT API. The goal is to maintain backward compatibility and ease of use.
- Testing with Mistral Instruct Model: For the GPTVulnerabilityReportGenerator, I have conducted tests using the Mistral Instruct model. These tests are to ensure that our implementations are robust and can handle various language modeling tasks efficiently.
Approach and Compatibility:
- My approach will prioritize maintaining the existing functionality of gpt.py while introducing these enhancements.
- I aim to ensure that the integration is smooth and does not introduce breaking changes for current users.
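A minimal sketch of the fallback idea, assuming the GPT4All Python bindings and the pre-1.0 `openai` SDK; the function names, the Mistral GGUF filename, and the `OPENAI_API_KEY` check are illustrative assumptions, not reNgine's actual code:

```python
import os

def use_local_model() -> bool:
    """True when no OpenAI key is configured, i.e. fall back to GPT4All."""
    return not os.environ.get("OPENAI_API_KEY")

def generate_report(prompt: str) -> str:
    # Hypothetical gpt.py entry point: prefer OpenAI when a key exists.
    if not use_local_model():
        import openai  # imported lazily; only needed on the OpenAI path
        openai.api_key = os.environ["OPENAI_API_KEY"]
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content
    # Fallback: GPT4All Python bindings running fully locally on the CPU.
    from gpt4all import GPT4All
    model = GPT4All("mistral-7b-instruct-v0.1.Q4_0.gguf")
    with model.chat_session():
        return model.generate(prompt, max_tokens=1024)
```

Because the dispatch is isolated in `use_local_model()`, existing OpenAI users would see no behavior change.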
Request for Feedback:
- I am eager to hear any feedback or suggestions regarding this proposal.
- If there are specific guidelines or preferences for how these features should be integrated, I would appreciate your insights.
I am looking forward to contributing to this important feature, and I believe that these enhancements will make the tool more versatile and accessible to a broader range of users.
- I have also tested the Mistral Instruct model for GPTVulnerabilityReportGenerator:
Question:
You are a highly skilled penetration tester who has recently completed a penetration testing.
You will be given with a
- Vulnerability title
- Vulnerable URL
- and some description about the vulnerability.
Your job is to write a detailed technical penetration testing report based on the given Vulnerability details.
The purpose of this report is to provide an in-depth analysis of the vulnerabilities discovered during the penetration testing engagement.
The penetration testing report must contain all separated by \n\n
- Vulnerability description
Include a detailed vulnerability description, include any known CVE IDs, any known existing vulnerabilities.
- Impact
Include what this vulnerability can impact for web applications.
- Remediation
Include steps to remediate this vulnerability. Separate each new remediations by - and a new line \n
- References
Include any references URL about this vulnerability, any existing CVE ID, or news articles etc. Separate each new references by - and a new line \n. Only include http urls
Do not write 'Penetration Testing Report:' on the title.
CVE-2021-44228
www.index.gov/
Response:
Vulnerability Title: Remote Code Execution (RCE) Vulnerability in Apache Tomcat Server
Vulnerable URL: www.index.gov.tr
Description: The vulnerability was discovered during a penetration testing engagement on the website of the Ministry of Customs and Trade of Turkey, which is hosted on an Apache Tomcat server. The RCE vulnerability allows attackers to execute arbitrary code on the server by exploiting a flaw in the JVM (Java Virtual Machine) used by Tomcat.
Impact: This vulnerability can be exploited by attackers to gain full control of the web application and potentially compromise sensitive data or perform malicious actions such as DDoS attacks, data theft, etc. The impact of this vulnerability on a web application can be severe, as it allows an attacker to execute arbitrary code on the server, which can lead to unauthorized access, data theft, and other forms of exploitation.
Remediation: To remediate this vulnerability, the following steps should be taken:
1. Update Apache Tomcat to the latest version that includes a fix for CVE-2021-44228.
2. Configure Tomcat to use a sandboxing mechanism such as Java Web Application Firewall (JWAF) or Java Security Manager to restrict the execution of arbitrary code on the server.
3. Implement input validation and sanitization techniques to prevent malicious inputs from being executed by the web application.
4. Conduct regular vulnerability assessments and penetration testing to identify and remediate any new vulnerabilities that may arise in the future.
References:
* CVE-2021-44228 - <https://cve.mitre.org/cve/2021-44228>
* Apache Tomcat Security Best Practices - <https://tomcat.apache.org/security-best-practices.html>
Sounds good! Could you file a PR or contribute in some way regarding the code and GPT4All?
PS: your post also looks/sounds like it has been written by ChatGPT. :p
@AnonymousWP I will do so ASAP. Currently I am thinking about switching from GPT4All to Ollama, which runs in Docker; reNgine (inside gpt.py) can call it with an HTTP request. The problem I am facing is that we need to come up with a solution that supports many other LLMs, not only GPT4All.
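A rough sketch of what that HTTP call from gpt.py could look like, assuming an Ollama container listening on its default port 11434 with a model such as `mistral` already pulled; the function names are placeholders:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

def build_ollama_payload(prompt: str, model: str = "mistral") -> dict:
    # Request body for /api/generate; stream=False asks Ollama to return
    # one complete JSON object instead of a stream of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def ollama_generate(prompt: str, model: str = "mistral") -> str:
    data = json.dumps(build_ollama_payload(prompt, model)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Since the model name is just a string in the payload, the same code path covers any model Ollama serves, which addresses the "not only gpt4all" concern.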
@ErdemOzgen if you need to discuss with me, I am available for a quick meet
Or gpt4free. :) It's free (including GPT-4) and offers a Dockerised installation.
@yogeshojha I have just sent you an email to schedule the interview. @AnonymousWP I will definitely check it out.
Added Ollama integration, pending thorough testing. Preliminary work can be viewed at rengine.
Hi, thanks for this. Could you create a branch in your repository for this modification, and open a pull request in this repository from that branch? https://docs.github.com/en/get-started/quickstart/contributing-to-projects#creating-a-branch-to-work-on
Done, you can check PR #1096.
How about adding a feature where, instead of using the OpenAI API or GPT4All, there would be an option to change the OpenAI base API URL? This would be helpful for users who are hosting their own model with e.g. https://github.com/oobabooga/text-generation-webui and for users living in countries where OpenAI is restricted.
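A hedged sketch of such an override: the `OPENAI_API_BASE` environment variable and the default below are assumptions for illustration, not an existing reNgine setting.

```python
import os

DEFAULT_OPENAI_BASE = "https://api.openai.com/v1"

def resolve_openai_base() -> str:
    # Self-hosters point this at their own OpenAI-compatible server,
    # e.g. http://localhost:5000/v1 for text-generation-webui's API mode.
    return os.environ.get("OPENAI_API_BASE", DEFAULT_OPENAI_BASE)
```

With the pre-1.0 `openai` SDK this could be applied as `openai.api_base = resolve_openai_base()`, so any OpenAI-compatible endpoint works without further code changes.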
Added PR #1114.
Thanks, but your PR is on the master branch of your repository.
Next time when you submit a PR, it is best to create a branch, named for example add-llama2-llm, that starts from your master branch.
@psyray It was my bad. If you want, shall I make the adjustments and create the PR again?
If you can, it's better. Because if your PR takes some time to be completed and other PRs are merged in the meantime, you will need to rebase your branch onto master to get the latest master changes and resolve the conflicts in your branch. If you're working on master itself, you cannot do that.
So:
- first, save your work locally so you don't lose your changes,
- sync your GitHub branch with this repository (you will lose your changes),
- create your branch from your master branch,
- reimplement your changes.
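The steps above can be sketched as shell commands. This runs in a throwaway local repo so it is self-contained; in practice the "sync" step means fetching from the upstream reNgine repository instead, and the branch name comes from the earlier suggestion:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git config user.email dev@example.com && git config user.name dev
echo base > gpt.py
git add gpt.py && git commit -qm "synced master"  # stand-in for a synced master
echo "llm changes" >> gpt.py                      # 1. work in progress...
git stash -q                                      # ...saved locally
git checkout -q -b add-llama2-llm                 # 3. branch off master
git stash pop -q                                  # 4. reimplement (restore) changes
git branch --show-current                         # -> add-llama2-llm
```

Stashing before the sync is what keeps the "you will lose your changes" step harmless.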
@psyray I have just added it. Could you check it, please?
It's good 👍