rengine
Feature - Alternative to AXIOM
Found this a while back; maybe a good alternative to the Axiom framework:
https://github.com/tr3ss/ShadowClone
ShadowClone allows you to distribute your long-running tasks dynamically across thousands of serverless functions and gives you the results within seconds, where it would otherwise have taken hours to complete. You can make full use of the free tiers provided by cloud providers and supercharge your mundane CLI tools with shadow clone jutsu (Naruto style)!
| Features | Axiom/Fleex | ShadowClone |
|---|---|---|
| Instances | 10-100s* | 1000s |
| Cost | Per instance/per minute | Mostly Free** |
| Startup Time | 4-5 minutes | 2-3 seconds |
| Max Execution Time | Unlimited | 15 minutes |
| Idle Cost | $++ | Free |
| On Demand Scalability | No | ∞ |
👋 Hi @d4op, Issues are only for reporting a bug or requesting a feature. Please read the documentation before raising an issue: https://rengine.wiki. For very limited support, questions, and discussions, please join the reNgine Discord channel: https://discord.gg/azv6fzhNCE. Please include all the requested and relevant information when opening a bug report; improper reports will be closed without any response.
That's an outdated fork. https://github.com/fyoorer/ShadowClone is up-to-date and the original one.
How is this feature going with Axiom / ShadowClone?
Not begun yet. Maybe someone could work on it. I don't plan to do it in the near future.
@psyray @AnonymousWP I was wondering if rengine is able to scale if you configure two linux instances with the same remote postgres. Do you see any reason why this would not work? Of course you need to sync changes to filesystem - like custom wordlists and screenshots - but apart from this limitation? I am happy to do some tests if you see no essential reasons against this idea.
Cheers
reNgine uses Celery, which is a distributed task queue, so I think you should look at that first: https://docs.celeryq.dev/en/stable/
You can run only the Celery container on many Linux hosts and distribute work from there. You only need to share the DB.
I don't know exactly how you could do this with reNgine; this needs testing.
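The suggestion above can be sketched as a worker-only node: a hypothetical `docker-compose.override.yml` that starts just the `celery` service and points the broker, result backend, and database at the base server. The service and variable names follow reNgine's compose file; the IP and the assumption that `POSTGRES_HOST` is picked up from the environment are illustrative, not verified:

```yaml
# docker-compose.override.yml on a worker node (sketch, untested)
services:
  celery:
    environment:
      - CELERY_BROKER=redis://10.0.0.3:6379/0    # shared Redis on the base server
      - CELERY_BACKEND=redis://10.0.0.3:6379/0
      - POSTGRES_HOST=10.0.0.3                   # shared database (assumption)
```

`docker compose up celery` would then start only the worker; any node subscribed to the same broker can consume tasks queued by the base server's web container.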
I have created a test lab and ran some pretty promising tests. I want to share my findings with you.
What I have set up:
base server (10.0.0.3):
- installed postgres
- installed redis
- installed an NFS share (exposing `/srv/shared`)

2 rengine worker nodes:
- modified the database section of `.env` to point to the shared Postgres on the base server
- modified `docker-compose.yml`: replaced `redis://redis` with `redis://<baseserver-IP>` and switched the `volumes` to NFS (except for `postgres_data`)
- modified `web/celery-entrypoint.sh` and adapted all Celery commands at the bottom, e.g. `-n initiate_scan_worker` to `-n initiate_scan_worker_1`; this solved log errors like `DuplicateNodenameWarning: Received multiple replies from node names: celery@remove_duplicate_endpoints_worker`
- mounted the NFS server 10.0.0.3 under `/srv/shared`
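For reference, the base-server side of such a lab can be sketched as three config changes. The paths, subnet, and file locations are assumptions for a typical Debian/Ubuntu setup, not taken from the report above:

```
# /etc/exports -- expose the shared directory to the worker subnet
/srv/shared 10.0.0.0/24(rw,sync,no_subtree_check)

# postgresql.conf -- accept connections from the workers, not just localhost
listen_addresses = '*'

# redis.conf -- listen on the LAN interface instead of localhost only
bind 0.0.0.0
protected-mode no
```

After editing, run `exportfs -ra` and add a matching `pg_hba.conf` entry for the worker IPs. Since Redis and Postgres are now network-exposed, firewall their ports so only the worker subnet can reach them.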
issues:
- not able to log in to both web UIs at the same time, even with different users; the log shows `django.security.SuspiciousSession | Session data corrupted`. Logging in to only one of the two worker nodes works fine.
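The `SuspiciousSession` error is most likely because each install generated its own Django `SECRET_KEY`: sessions written to the shared database are signed with one node's key and rejected as corrupted by the other node. If that is the cause, copying the same `SECRET_KEY` to both nodes should allow simultaneous logins. A minimal stdlib sketch of the underlying mechanism (an HMAC analogy, not Django's actual signer; the key values are hypothetical):

```python
import hashlib
import hmac

def sign(payload: bytes, secret_key: bytes) -> str:
    """Sign a session payload, the way Django conceptually does with SECRET_KEY."""
    return hmac.new(secret_key, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str, secret_key: bytes) -> bool:
    """A node only trusts sessions signed with its own key."""
    return hmac.compare_digest(sign(payload, secret_key), signature)

# Hypothetical keys: each install generated its own SECRET_KEY.
node_a_key = b"secret-generated-on-node-a"
node_b_key = b"secret-generated-on-node-b"

session = b"sessionid=42;user=admin"
signature = sign(session, node_a_key)  # written to the shared Postgres by node A

print(verify(session, signature, node_a_key))  # node A accepts its own session
print(verify(session, signature, node_b_key))  # node B rejects it as corrupted
```

With identical keys on both nodes, the second check would pass and both UIs should accept the same stored sessions.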
I tested the scan engine "Full scan" and everything seems to work fine:
- screenshots are displayed in webUI of both worker nodes
- tasks are distributed to both nodes
appendix:
changes to `.env`:

```
POSTGRES_DB=rengine
POSTGRES_USER=rengine
POSTGRES_PASSWORD=xxxxxxxxxx
POSTGRES_PORT=5432
POSTGRES_HOST=10.0.0.3
```
changes to `docker-compose.yml`:

```yaml
version: '3.8'
services:
  celery:
    environment:
      - CELERY_BROKER=redis://10.0.0.3:6379/0
      - CELERY_BACKEND=redis://10.0.0.3:6379/0
  celery-beat:
    environment:
      - CELERY_BROKER=redis://10.0.0.3:6379/0
      - CELERY_BACKEND=redis://10.0.0.3:6379/0
  web:
    environment:
      - CELERY_BROKER=redis://10.0.0.3:6379/0
      - CELERY_BACKEND=redis://10.0.0.3:6379/0
volumes:
  tool_config:
    driver: local
    driver_opts:
      type: nfs
      o: addr=10.0.0.3,rw
      device: ":/srv/shared/tool_config"
  postgres_data:
  gf_patterns:
    driver: local
    driver_opts:
      type: nfs
      o: addr=10.0.0.3,rw
      device: ":/srv/shared/gf_patterns"
  nuclei_templates:
    driver: local
    driver_opts:
      type: nfs
      o: addr=10.0.0.3,rw
      device: ":/srv/shared/nuclei_templates"
  github_repos:
    driver: local
    driver_opts:
      type: nfs
      o: addr=10.0.0.3,rw
      device: ":/srv/shared/github_repos"
  wordlist:
    driver: local
    driver_opts:
      type: nfs
      o: addr=10.0.0.3,rw
      device: ":/srv/shared/wordlist"
  scan_results:
    driver: local
    driver_opts:
      type: nfs
      o: addr=10.0.0.3,rw
      device: ":/srv/shared/scan_results"
  static_volume:
    driver: local
    driver_opts:
      type: nfs
      o: addr=10.0.0.3,rw
      device: ":/srv/shared/static_volume"
```
changes to `web/celery-entrypoint.sh` (`git diff web/celery-entrypoint.sh`):

```diff
diff --git a/web/celery-entrypoint.sh b/web/celery-entrypoint.sh
index f1d49ff1..b4f71c9e 100755
--- a/web/celery-entrypoint.sh
+++ b/web/celery-entrypoint.sh
@@ -164,24 +164,24 @@ echo 'alias httpx="/go/bin/httpx"' >> ~/.bashrc
 echo "Starting Workers..."
 echo "Starting Main Scan Worker with Concurrency: $MAX_CONCURRENCY,$MIN_CONCURRENCY"
-watchmedo auto-restart --recursive --pattern="*.py" --directory="/usr/src/app/reNgine/" -- celery -A reNgine.tasks worker --pool=gevent --concurrency=30 --loglevel=info -Q initiate_scan_queue -n initiate_scan_worker &
-watchmedo auto-restart --recursive --pattern="*.py" --directory="/usr/src/app/reNgine/" -- celery -A reNgine.tasks worker --pool=gevent --concurrency=30 --loglevel=info -Q subscan_queue -n subscan_worker &
....
+watchmedo auto-restart --recursive --pattern="*.py" --directory="/usr/src/app/reNgine/" -- celery -A reNgine.tasks worker --pool=gevent --concurrency=30 --loglevel=info -Q initiate_scan_queue -n initiate_scan_worker_1 &
+watchmedo auto-restart --recursive --pattern="*.py" --directory="/usr/src/app/reNgine/" -- celery -A reNgine.tasks worker --pool=gevent --concurrency=30 --loglevel=info -Q subscan_queue -n subscan_worker_1 &
....
```
🤩 Great!!! But I don't understand: why do you have 2 web UIs?