gospider
Gospider - Fast web spider written in Go
It seems that proxies are not honored: looking at Wireshark traffic, I see some requests not going through any proxy. I think this is related to https://github.com/gocolly/colly/issues/392. We probably...
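While that upstream colly issue is open, one possible workaround is to pin the proxy at the transport level rather than relying on colly's per-request proxy switcher. A minimal sketch (not gospider's actual code; the proxy address `127.0.0.1:8080` and the target URL are placeholders):

```go
package main

import (
	"log"
	"net/http"
	"net/url"

	"github.com/gocolly/colly/v2"
)

func main() {
	// Hypothetical local proxy; replace with your own.
	proxyURL, err := url.Parse("http://127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}

	c := colly.NewCollector()
	// Setting Proxy on the underlying transport forces every request
	// through the proxy, instead of depending on colly's proxy switcher.
	c.WithTransport(&http.Transport{
		Proxy: http.ProxyURL(proxyURL),
	})

	if err := c.Visit("https://example.com/"); err != nil {
		log.Fatal(err)
	}
}
```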
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.0.0-20210614182718-04defd469f4e to 0.7.0. See the full diff in the compare view. Dependabot will resolve any conflicts with this PR as long as you don't alter...
Add a rate limit option to control the spidering speed, e.g. `-rl`, `--rate-limit`. You can use https://github.com/projectdiscovery/ratelimit for this.
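For reference, a minimal sketch of how that library could gate requests; the limiter setup follows the ratelimit README, while the rate of 10 requests per second and the crawl loop are illustrative assumptions:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/projectdiscovery/ratelimit"
)

func main() {
	// Allow at most 10 requests per second (a hypothetical -rl value).
	limiter := ratelimit.New(context.Background(), 10, time.Second)

	urls := []string{"https://example.com/a", "https://example.com/b"}
	for _, u := range urls {
		limiter.Take() // blocks until the limiter grants a slot
		fmt.Println("crawling", u)
	}
}
```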
When passing gospider a list of active hosts, there are hosts where it enters an infinite loop: it keeps running for over a day, apparently fetching the same thing,...
Solves #71. Documentation query updated; the problem is solved.
Command: `gospider -q -s "https://google.com/"` Error: `unknown shorthand flag: 'q' in -q`. When I use this command in Kali Linux (`gospider -q -s "https://google.com/"`), it gives me the...
```
goroutine 8912103 [select]:
net/http.(*persistConn).writeLoop(0xc01713e900)
	C:/Program Files/Go/src/net/http/transport.go:2444 +0xf0
created by net/http.(*Transport).dialConn in goroutine 8912067
	C:/Program Files/Go/src/net/http/transport.go:1800 +0x1585
goroutine 8911685 [select]:
net/http.(*persistConn).writeLoop(0xc019db7560)
	C:/Program Files/Go/src/net/http/transport.go:2444 +0xf0
created by net/http.(*Transport).dialConn in goroutine 8911605
	C:/Program...
```
gospider "https://domain.com" --json when using the --json flag I am getting a mix of of json output with non-json output. there is a URL entry at the top in JSON,...
I want to crawl roughly 2000+ URLs, but I ended up writing a script to collect the URLs that were not yet crawled, because gospider often gets killed:
```
import os

domains_ok = []
for filename in os.listdir('gs_output'):
    domain_ok = filename.replace('_', '.')
    domains_ok.append(domain_ok)

domains_not_ok = set()
with open('sub_alive.txt', 'r') as f:
    urls = f.read().split()
    for url in...
```
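For comparison, a minimal sketch of the same recovery step in Go. The names `gs_output` and `sub_alive.txt` come from the snippet above; the scheme-stripping and the exact file-name-to-domain mapping are assumptions about its intent:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Domains that already have gospider output (file names use '_' for '.').
	crawled := map[string]bool{}
	entries, err := os.ReadDir("gs_output")
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		crawled[strings.ReplaceAll(e.Name(), "_", ".")] = true
	}

	// Print every alive URL that has no output directory yet.
	data, err := os.ReadFile("sub_alive.txt")
	if err != nil {
		panic(err)
	}
	for _, url := range strings.Fields(string(data)) {
		// Assumption: output files are named after the bare host.
		host := strings.TrimPrefix(strings.TrimPrefix(url, "https://"), "http://")
		if !crawled[host] {
			fmt.Println(url)
		}
	}
}
```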
Are the other sources free? If not, which key should I add? This is the basic function for other sources, but I can't find it.