scrapoxy
Not working on AWS or DO. Dead project, it seems.
Hi everyone. Just wanted to let you know that I tested it on AWS and DO. I tried all the available AMIs in the proper region and nothing clicked; I always got a "No instance available" error. This project doesn't seem to be alive anymore. Just a warning for the next person who comes along, so they don't waste many hours as I did.
Good luck!
Hi Ariel,
I managed to get it to work. I changed some settings in the security group in AWS. Now it works like a charm.
Hey, that's awesome! Could you please share the settings? I also tried multiple security group settings.
Try opening all inbound and outbound ports in the security group, and make sure your conf.json has a SecurityGroups entry:
"instance": {
"InstanceType": "t2.micro",
"ImageId": "ami-YOURSWILLDIFFERIFYOUHAVECOPIEDIT",
"SecurityGroups": [
"forward-proxy"
]
}
I am on eu-central, copied the image from eu-west.
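For reference, opening the proxy port from the command line might look roughly like this. This is only a sketch with the AWS CLI, assuming the group is called forward-proxy as above and lives in the default VPC (a non-default VPC needs --group-id instead of --group-name):

# Create the group if it doesn't exist yet (skip if it does).
aws ec2 create-security-group --group-name forward-proxy --description "Scrapoxy forward proxy"

# Allow inbound traffic to the proxy port (3128). 0.0.0.0/0 opens it to everyone;
# see the warning further down the thread about restricting this to your own IP.
aws ec2 authorize-security-group-ingress --group-name forward-proxy --protocol tcp --port 3128 --cidr 0.0.0.0/0

# Outbound traffic is allowed by default in a newly created security group.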
Hi! Thanks for answering
Unfortunately it didn't work for me. I've attached my conf.json:
{
  "commander": {
    "password": "scrap"
  },
  "instance": {
    "port": 3128,
    "scaling": {
      "min": 2,
      "max": 3
    }
  },
  "providers": [
    {
      "type": "awsec2",
      "accessKeyId": "ACCESS",
      "secretAccessKey": "SECRET",
      "region": "eu-west-1",
      "instance": {
        "InstanceType": "t2.nano",
        "ImageId": "ami-06220275",
        "SecurityGroups": [
          "forward-proxy"
        ]
      }
    }
  ]
}
I tried all 3 AMIs and their corresponding instance types. I'm on eu-west-1. I opened all inbound and outbound ports, and the name of the security group is correct. I've waited more than 3 minutes and I've tweaked the number of instances. I've tried with Docker too.
I still get these errors:
2020-03-15T14:34:57.460Z - debug: [Manager] checkInstances
2020-03-15T14:34:59.004Z - debug: [Manager] adjustInstances: required:2 / actual:2
2020-03-15T14:34:59.949Z - debug: [Instance/i-0e80574709e612602] checkAlive: false / -
2020-03-15T14:34:59.949Z - debug: [Pinger] ping: hostname=34.240.1.169 / port=3128
2020-03-15T14:34:59.957Z - debug: [Instance/i-01564e6cbf9cb1f01] checkAlive: false / -
2020-03-15T14:34:59.957Z - debug: [Pinger] ping: hostname=18.202.233.145 / port=3128
2020-03-15T14:35:04.953Z - debug: [Instance/i-0e80574709e612602] changeAlive: false => false
2020-03-15T14:35:04.961Z - debug: [Instance/i-01564e6cbf9cb1f01] changeAlive: false => false
2020-03-15T14:35:07.462Z - debug: [Manager] checkInstances
2020-03-15T14:35:09.895Z - debug: [Manager] adjustInstances: required:2 / actual:2
2020-03-15T14:35:17.462Z - debug: [Manager] checkInstances
2020-03-15T14:35:19.831Z - debug: [Manager] adjustInstances: required:2 / actual:2
2020-03-15T14:35:19.949Z - debug: [Instance/i-0e80574709e612602] checkAlive: false / -
2020-03-15T14:35:19.949Z - debug: [Pinger] ping: hostname=34.240.1.169 / port=3128
2020-03-15T14:35:19.957Z - debug: [Instance/i-01564e6cbf9cb1f01] checkAlive: false / -
2020-03-15T14:35:19.957Z - debug: [Pinger] ping: hostname=18.202.233.145 / port=3128
2020-03-15T14:35:24.952Z - debug: [Instance/i-0e80574709e612602] changeAlive: false => false
2020-03-15T14:35:24.959Z - debug: [Instance/i-01564e6cbf9cb1f01] changeAlive: false => false
2020-03-15T14:35:27.463Z - debug: [Manager] checkInstances
2020-03-15T14:35:30.018Z - debug: [Manager] adjustInstances: required:2 / actual:2
2020-03-15T14:35:37.465Z - debug: [Manager] checkInstances
2020-03-15T14:35:39.767Z - debug: [Manager] adjustInstances: required:2 / actual:2
2020-03-15T14:35:39.949Z - debug: [Instance/i-0e80574709e612602] checkAlive: false / -
2020-03-15T14:35:39.949Z - debug: [Pinger] ping: hostname=34.240.1.169 / port=3128
2020-03-15T14:35:39.957Z - debug: [Instance/i-01564e6cbf9cb1f01] checkAlive: false / -
2020-03-15T14:35:39.957Z - debug: [Pinger] ping: hostname=18.202.233.145 / port=3128
2020-03-15T14:35:44.953Z - debug: [Instance/i-0e80574709e612602] changeAlive: false => false
2020-03-15T14:35:44.958Z - debug: [Instance/i-01564e6cbf9cb1f01] changeAlive: false => false
2020-03-15T14:35:47.466Z - debug: [Manager] checkInstances
2020-03-15T14:35:49.936Z - debug: [Manager] adjustInstances: required:2 / actual:2
2020-03-15T14:35:57.466Z - debug: [Manager] checkInstances
2020-03-15T14:35:59.236Z - debug: [Manager] adjustInstances: required:2 / actual:2
2020-03-15T14:35:59.949Z - debug: [Instance/i-0e80574709e612602] checkAlive: false / -
2020-03-15T14:35:59.950Z - debug: [Pinger] ping: hostname=34.240.1.169 / port=3128
2020-03-15T14:35:59.957Z - debug: [Instance/i-01564e6cbf9cb1f01] checkAlive: false / -
2020-03-15T14:35:59.957Z - debug: [Pinger] ping: hostname=18.202.233.145 / port=3128
2020-03-15T14:36:04.953Z - debug: [Instance/i-0e80574709e612602] changeAlive: false => false
2020-03-15T14:36:04.961Z - debug: [Instance/i-01564e6cbf9cb1f01] changeAlive: false => false
2020-03-15T14:36:07.468Z - debug: [Manager] checkInstances
2020-03-15T14:36:09.196Z - debug: [Manager] adjustInstances: required:2 / actual:2
Error when testing (tried with 127.0.0.1 too)
~$ curl --proxy http://localhost:8888 http://api.ipify.org
~$ [Master] Error: No running instance found
Anything else you might've changed? Thank you!
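One way to narrow this down is to probe the instance's proxy port directly from your own machine; if the port is unreachable, the security group (or the image itself) is still the problem. A rough sketch, using the instance IP from the logs above:

# Is port 3128 reachable at all? A timeout or "connection refused" means it's blocked or the proxy isn't running.
nc -vz 34.240.1.169 3128

# If the port is open, ask the instance's proxy for your apparent IP directly:
curl --proxy http://34.240.1.169:3128 http://api.ipify.org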
Perhaps the t2.nano instance has insufficient memory to boot the proxy server? Try using a t2.micro instance and see if that works. Please let me know if it doesn't!
I tried with other sizes too; some of the AMIs come in micro, and that's the only one in nano. It didn't work either :(
Are you using docker or node to run Scrapoxy locally?
I've tried with both too. Neither of them works. Are you using a particular version of the script?
Just the latest version from NPM...
Are you running the commands on a server or on your local machine?
On my local machine. Are you using it from an EC2 instance or something?
Works on a local machine and on a fresh Ubuntu 18.04 image hosted at DigitalOcean.
I still think your security settings are wrong and that's why the requests aren't coming through. Can you take two screenshots, one of your inbound rules and one of your outbound rules?
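If screenshots are awkward, the same information can be dumped as text with the AWS CLI, assuming the group is still called forward-proxy:

# Print the inbound (IpPermissions) and outbound (IpPermissionsEgress) rules of the group.
aws ec2 describe-security-groups --filters Name=group-name,Values=forward-proxy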
Same OS here.
Here is my security group:
Try this... I'm using the same image as you with these settings and it works.
Hey, thanks a lot. Didn't work either. I may try cloning the image later.
Tried running scrapoxy on a different machine?
Did that too. I don't know, really. I may try in the future copying the image and moving it. That'll be when I need it again. I super appreciate the help and ideas here. Stay safe.
Just tried a new setup in eu-central-1, and after copying the recommended AMI it seems to work fine. I'm running it from a local Docker machine currently.
A question for the experts: when I shut down the Docker instance, the servers terminate. Is there any way to keep them alive and make them reconnect on the next run?
I'm asking because I'm envisioning the Scrapoxy pod going down for some reason and bringing down the whole architecture. Any way to prevent that scenario?
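For the "pod goes down" part of that question, one option is to let Docker restart the container automatically; Scrapoxy then re-creates the instances it needs on startup. A minimal sketch only: the image name and config mount path below are assumptions, so check the Scrapoxy docs for the exact values used by your version.

# Restart the container automatically unless it was stopped on purpose,
# so a crash of the Scrapoxy process doesn't take the whole setup down.
# Image name and config mount path are assumptions; adjust to your setup.
docker run -d --restart unless-stopped \
  -p 8888:8888 -p 8889:8889 \
  -v "$(pwd)/conf.json:/cfg/conf.json" \
  fabienvauchelles/scrapoxy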
I was facing the following issue and solved it by opening all inbound and outbound ports, as mentioned by @drmanhatin. Not sure if this is helpful:
2020-06-10T06:51:05.038Z - error: [Master] Error: request error from target (GET http://api.ipify.org/ on instance i-<instance-id>@<ip>:3128): message=socket hang up, stack=Error: socket hang up
    at createHangUpError (_http_client.js:323:15)
    at Socket.socketOnEnd (_http_client.js:426:23)
    at Socket.emit (events.js:203:15)
    at Socket.EventEmitter.emit (domain.js:448:20)
    at endReadableNT (_stream_readable.js:1145:12)
    at process._tickCallback (internal/process/next_tick.js:63:19), code=ECONNRESET
Don't keep them open like this, or only allow your servers to live for a VERY short time. One of my proxies was abused by hackers after I accidentally left it running overnight.
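If the ports were opened to the world earlier in the thread, one way to lock things back down is to replace the wide-open rule with one scoped to your own IP. A sketch with the AWS CLI (group name and port taken from this thread; swap in your real public IP):

# Remove the rule that lets the whole internet reach the proxy port...
aws ec2 revoke-security-group-ingress --group-name forward-proxy --protocol tcp --port 3128 --cidr 0.0.0.0/0

# ...and allow only your own machine instead.
aws ec2 authorize-security-group-ingress --group-name forward-proxy --protocol tcp --port 3128 --cidr YOUR.PUBLIC.IP.HERE/32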
corrected in Scrapoxy 4.0.0
Hey there! Exciting news! Scrapoxy 4 is ready to rock. Check it out at Scrapoxy.io (explore the "get started" guide, deployment documentation, and more). I can't wait to hear your feedback on this new version! Send me your coolest screenshots with as many proxies as possible! Join the Discord community if you have any questions or just want to chat. You can also open a GitHub issue for any bug or feature request. See you soon!
Fabien