e621dl
Program crashes immediately after opening
https://puu.sh/FoqAk/a00cc0a471.png
Keeps giving that error. Using the newest release.
Issue addressed in #38.
The exe is not the latest release. You need to run from source, unfortunately, as Wulfre can't make new exe versions anymore.
The python version doesn't work either. After installing Python and clicking on the .py file, it opens a command prompt and instantly closes before any text appears.
I went through all of #38 and installed the various things, but it still just keeps giving an error about some "yaml" thing.
Same thing happens to me. Closes instantly.
For Windows users, to diagnose the problem, you really should run the .py from a batch file with these commands:

@echo off
cd H:\E621\e621dl-3.1.1
py -3 e621dl.py
pause

You can change the "cd H:\E621\e621dl-3.1.1" to wherever your e621dl folder is located. (I'm just too lazy to rename my folder to the actual version number.)

This will let you see what the error is so you can get further support. "Closes instantly" tells us nothing, other than that there was an error and you didn't run a bat file with a pause so you could read what happened.
Well he posted a screenshot of what he had, and this is what I have.
You have an error at line 43, column 1 of your config.yaml file. Open it with Notepad++ for ease of use and find the error. It looks like you typed a literal "\t" instead of hitting the Tab key.
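As a quick diagnostic for errors like the "line 43, column 1" one above: YAML forbids tab characters for indentation, so a stray tab is a common cause. Here is a small stdlib-only sketch (the function name is mine, not part of e621dl) that reports the position of any tab the same way the parser does:

```python
# Scan YAML text for tab characters, which YAML forbids as indentation.
# Returns 1-based (line, column) pairs, matching the parser's error style.
def find_tabs(text):
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        col = line.find("\t")
        if col != -1:
            hits.append((lineno, col + 1))
    return hits

# Example config fragment with a tab where two spaces belong.
sample = "blacklist:\n\t- some_tag\nsearches:\n  Latias:\n    days: 1\n"
print(find_tabs(sample))  # -> [(2, 1)]
```

Running this over your config.yaml tells you exactly which line to retype with spaces.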
C:\Python38\python.exe: can't find '__main__' module in 'e621dl'
New error.
I ran the program itself without using PowerShell, and it creates the downloads folder but doesn't get any files. It doesn't give an error either, I don't think. Or at least it doesn't show one for a brief second like it used to. I'll try running that bat thing.
When using command prompt it gives this error:
py -3 e621dl.py
[i] Running e621dl version 5.0.0.
[i] Getting config...
[i] Getting posts for search 'Latias'.
Traceback (most recent call last):
File "e621dl.py", line 53, in <module>
That's my mistake. I didn't put in a catch for the user not having something in the blacklist.
I thought the blacklist was gotten from putting in your account info?
Putting in your account info should filter the posts before they even get into the python script. They're two separate implementations of the blacklist. Nothing gets copied from your account into the script. I left the local blacklist as an option for people without accounts.
@darkspyro92 just pushed a fix that should work when the blacklist is empty. Thanks for finding that for me, and sorry for the trouble that you had.
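For anyone curious what a fix like that can look like: a minimal sketch under assumed names (filter_blacklisted and the config/post shapes are mine, not the actual e621dl code), where a missing or empty blacklist key falls back to an empty set instead of crashing:

```python
# Hedged sketch: tolerate a missing or empty "blacklist" config key.
def filter_blacklisted(posts, config):
    # config.get(...) may return None (key present but empty) or the key
    # may be absent entirely; "or []" normalizes both to an empty list.
    blacklist = set(config.get("blacklist") or [])
    return [post for post in posts if blacklist.isdisjoint(post["tags"])]

posts = [
    {"id": 1, "tags": ["latias", "pokemon"]},
    {"id": 2, "tags": ["latias", "friendship_is_magic"]},
]

kept_all = filter_blacklisted(posts, {})  # empty config: nothing filtered
kept_some = filter_blacklisted(posts, {"blacklist": ["friendship_is_magic"]})
print([p["id"] for p in kept_all], [p["id"] for p in kept_some])
```

The key point is that an absent blacklist should behave like an empty one, not like an error.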
Just like old times, eh? When I used to stress test your program with just about every form of unreasonable thing I could think of. xD
Got another error after a while.
Traceback (most recent call last):
File "C:\Python38\lib\site-packages\urllib3\connectionpool.py", line 426, in _make_request
six.raise_from(e, None)
File "<string>", line 3, in raise_from
File "C:\Python38\lib\site-packages\urllib3\connectionpool.py", line 421, in _make_request
httplib_response = conn.getresponse()
File "C:\Python38\lib\http\client.py", line 1322, in getresponse
response.begin()
File "C:\Python38\lib\http\client.py", line 303, in begin
version, status, reason = self._read_status()
File "C:\Python38\lib\http\client.py", line 264, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "C:\Python38\lib\socket.py", line 669, in readinto
return self._sock.recv_into(b)
File "C:\Python38\lib\ssl.py", line 1241, in recv_into
return self.read(nbytes, buffer)
File "C:\Python38\lib\ssl.py", line 1099, in read
return self._sslobj.read(len, buffer)
socket.timeout: The read operation timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Python38\lib\site-packages\urllib3\connectionpool.py", line 670, in urlopen
httplib_response = self._make_request(
File "C:\Python38\lib\site-packages\urllib3\connectionpool.py", line 428, in _make_request
self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
File "C:\Python38\lib\site-packages\urllib3\connectionpool.py", line 335, in _raise_timeout
raise ReadTimeoutError(
urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='e621.net', port=443): Read timed out. (read timeout=5.0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Python38\lib\site-packages\httpx\_utils.py", line 364, in as_network_error
yield
File "C:\Python38\lib\site-packages\httpx\_dispatch\urllib3.py", line 98, in send
conn = self.pool.urlopen(
File "C:\Python38\lib\site-packages\urllib3\poolmanager.py", line 336, in urlopen
response = conn.urlopen(method, u.request_uri, **kw)
File "C:\Python38\lib\site-packages\urllib3\connectionpool.py", line 724, in urlopen
retries = retries.increment(
File "C:\Python38\lib\site-packages\urllib3\util\retry.py", line 439, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='e621.net', port=443): Max retries exceeded with url: /posts.json?limit=320&tags=rape+date%3A%3E%3D0001-01-01+id%3A%3C2147905 (Caused by ReadTimeoutError("HTTPSConnectionPool(host='e621.net', port=443): Read timed out. (read timeout=5.0)"))
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\umbre\Pictures\Private\e621dl-master\e621dl.py", line 42, in <module>
posts = remote.get_posts(client, ' '.join(search['tags']), search['start_date'], last_id)
File "C:\Users\umbre\Pictures\Private\e621dl-master\e621dl\remote.py", line 5, in get_posts
response = client.get(
File "C:\Python38\lib\site-packages\httpx\_client.py", line 706, in get
return self.request(
File "C:\Python38\lib\site-packages\httpx\_client.py", line 570, in request
return self.send(
File "C:\Python38\lib\site-packages\httpx\_client.py", line 590, in send
response = self.send_handling_redirects(
File "C:\Python38\lib\site-packages\httpx\_client.py", line 617, in send_handling_redirects
response = self.send_handling_auth(
File "C:\Python38\lib\site-packages\httpx\_client.py", line 654, in send_handling_auth
response = self.send_single_request(request, timeout)
File "C:\Python38\lib\site-packages\httpx\_client.py", line 678, in send_single_request
response = dispatcher.send(request, timeout=timeout)
File "C:\Python38\lib\site-packages\httpx\_dispatch\urllib3.py", line 98, in send
conn = self.pool.urlopen(
File "C:\Python38\lib\contextlib.py", line 131, in __exit__
self.gen.throw(type, value, traceback)
File "C:\Python38\lib\site-packages\httpx\_utils.py", line 368, in as_network_error
raise NetworkError(exc) from exc
httpx._exceptions.NetworkError: HTTPSConnectionPool(host='e621.net', port=443): Max retries exceeded with url: /posts.json?limit=320&tags=rape+date%3A%3E%3D0001-01-01+id%3A%3C2147905 (Caused by ReadTimeoutError("HTTPSConnectionPool(host='e621.net', port=443): Read timed out. (read timeout=5.0)"))
That's because the connection wasn't established within 5 seconds and I didn't include a retry. I'll see about adding functionality to retry the connection if it fails the first time, and to give an easier-to-read error.
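A retry wrapper along those lines can be sketched generically (names and messages here are mine, not e621dl's): call the request function, and on a network error wait briefly and try again, failing with a readable message only after the last attempt:

```python
import time

# Hedged sketch of retry-with-delay around a flaky call.
def with_retries(fn, attempts=3, delay=1.0, exceptions=(Exception,)):
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except exceptions as exc:
            if attempt == attempts:
                raise SystemExit(f"[!] Giving up after {attempts} attempts: {exc}")
            print(f"[i] Attempt {attempt} failed ({exc}); retrying in {delay}s...")
            time.sleep(delay)

# Example stand-in for the network call: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("read timed out")
    return "ok"

result = with_retries(flaky, attempts=5, delay=0.01)
print(result)  # -> ok
```

Raising the client's read timeout above the 5 seconds shown in the traceback would also make the retry path trigger less often in the first place.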
The blacklist fix seems to have broken blacklists completely. Neither the account BL nor the config BL is filtering my search. Here's my test config:
auth:
  username: *********
  api_key: **************************
# Note that if you included your auth above, then your account blacklist will already be applied.
blacklist:
  -friendship_is_magic
search_defaults:
  days: 62
  min_score: -1
  min_fav_count: 0
  allowed_ratings:
    - q
    - e
searches:
  Equine:
    days: 1
    tags:
      - Equine
I double-checked my login and API key, and the filter works on the website.
It works for me. Not sure what's going on with yours. Try putting a space between the hyphen and the tag in your blacklist.
Then again, my search didn't say there were any blacklisted tags. When I get home from work I'll try a test as well.
Ok, that fixed the config BL, but my account BL still does nothing.
I'll have to do testing. As far as I know once you provide the API key then searches are automatically filtered with your blacklist, as in the posts that conflict with your blacklist wouldn't even get returned by the API in the first place. Something might have changed with the site update.
Okay, I read the documentation incorrectly. The blacklist is client-side, even on the official website. Using your API key does not automatically apply the blacklist.
Is there an internal blacklist of things we can't search? Or is it limited to so many searches? I tried to search for two artists and it skipped right over them. It said it was getting posts for the names I put in the tags, but didn't download anything.
that can happen when there are no posts within the given date range.
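For context on how that date range behaves: a hedged sketch (the function name is mine, not e621dl's actual code) of turning a `days` setting into the `date:>=` search term visible in the traceback URL above, with oversized values clamping to 0001-01-01:

```python
from datetime import date, timedelta

# Hedged sketch: convert a "days" config value into an e621 date search
# term of the form "date:>=YYYY-MM-DD".
def start_date_tag(days):
    try:
        start = date.today() - timedelta(days=days)
    except OverflowError:
        # Absurdly large values overflow timedelta; clamp to the earliest
        # representable date, which matches the "date:>=0001-01-01" seen
        # in the request URL above.
        start = date.min
    return f"date:>={start.isoformat()}"

print(start_date_tag(99999999999999))  # -> date:>=0001-01-01
```

So a huge `days` value effectively means "no lower date bound", and an empty result points at the tags rather than the range.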
I gave a date range of 9999999999999, which worked before on the other searches. Or rather, I did change something, so maybe I'm doing something wrong. I used to have my default set to that so I didn't have to add the date range to all the searches. Now the default is set to 2 and I set days on the new individual searches.
search_defaults:
  days: 2
  min_score: 0
  min_fav_count: 0
  allowed_ratings:
    - e
Youjomodoki:
  days: 99999999999999
  tags:
    -youjomodoki
I just ran a blank config with this and it worked fine.
oh wait
I see the problem. You need a space in the part that says

Youjomodoki:
  days: 99999999999999
  tags:
    -youjomodoki

should be

Youjomodoki:
  days: 99999999999999
  tags:
    - youjomodoki

It's not finding the tag because of that missing space.
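The reason the space matters: in YAML, "- item" starts a sequence entry, while "-item" is just a plain string scalar, so the config parses without any error but tags never becomes a list:

```yaml
# Missing space: "tags" is the single string "-youjomodoki"
tags:
  -youjomodoki
---
# With the space: "tags" is the one-element list ["youjomodoki"]
tags:
  - youjomodoki
```

That also explains the earlier blacklist symptom: no parse error, just a filter that silently matches nothing.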
You know, I'm usually good about putting those spaces. xD