gallery-dl
[Request] "ci-en" Request to add a site
https://ci-en.net/ It's a kind of creator-sponsorship site that hosts images as well as other media.
Would like to see support for this site too.
I too would like to see this. After the changes made on Pixiv last winter, some authors have moved to Ci-En.
I want to update this issue to say I'd love to see support for ci-en.net and ci-en.dlsite.com added to gallery-dl as well.
+1. I don't think there are any downloaders for ci-en out there, and it's a pain in the butt to download posts.
Just FYI, the `<head>` element of every article page contains a `<script type="application/ld+json">` which holds the entire article's contents in JSON format. Most of the relevant downloadable elements will be `file-player-*` classes located within the `articleBody` key.
Example article.
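To illustrate the observation above, here is a minimal sketch of pulling that JSON blob out of a page with only the standard library. The sample HTML below is a toy stand-in whose structure is assumed from this comment, not verified against the live site; a real extractor would use an HTML parser rather than a regex.

```python
import json
import re

def extract_article_json(html):
    """Pull the <script type="application/ld+json"> payload out of an
    article page and return it as a dict (None if not found)."""
    match = re.search(
        r'<script type="application/ld\+json">(.*?)</script>',
        html, re.DOTALL)
    if match is None:
        return None
    return json.loads(match.group(1))

# Toy page standing in for a real ci-en article (assumed structure).
sample_html = """<html><head>
<script type="application/ld+json">
{"@type": "Article",
 "articleBody": "<div class='file-player-image' data-src='https://example.com/a.jpg'></div>"}
</script>
</head><body></body></html>"""

data = extract_article_json(sample_html)
print(data["@type"])                          # Article
print("file-player" in data["articleBody"])   # True
```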
ci-en_dw.zip My Python is not so good, but I really needed to save the posts of one of the authors; perhaps my code will make it easier to understand how to extract content from ci-en.
I don't have a `jsessionid` in my cookies, hmm... oh well, I changed the cookies handling to be able to import a `cookies.txt` file, and I appear to have backed up paid content successfully! Thank you!
There's no `jsessionid` in Firefox, only in Chrome.
Firefox also has `jsessionid`, but under a different hostname: `login.dlsite.com` vs `ci-en.net`.
Ah, okay, I do see it now. The value was omitted from the `cookies.txt` file I used with your script, and I still managed to successfully download paywalled content. Are you sure it's required?
I will experiment with manipulating my `cookies.txt` to find the minimum required cookies.
I can confirm on my end that running the download script with just `ci_en_session` manages to download exclusive works.
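If only that one cookie is needed, attaching it to a request is simple with the standard library. A minimal sketch; the URL and cookie value are placeholders, and the User-Agent override is an assumption (some sites reject urllib's default):

```python
import urllib.request

def build_request(url, session_value):
    """Build a request that sends only the ci_en_session cookie,
    which per this thread is enough for subscriber-only posts."""
    return urllib.request.Request(url, headers={
        "Cookie": f"ci_en_session={session_value}",
        "User-Agent": "Mozilla/5.0",  # assumed necessary, not verified
    })

req = build_request("https://ci-en.net/creator/0000", "PLACEHOLDER")
```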
I won't be able to verify support for much longer for paid posts. Visa and Mastercard are temporarily (most likely indefinitely) suspended from use as a payment method on the site.
As someone new to Python, how would I run this script? I have installed Python 3, managed to add my `ci_en_session`, and changed the download directory.
But when I click on the script, it just shows cmd for a second and then nothing happens.
I think you're right. During testing in Postman I couldn't get the list of authors I'm subscribed to without this cookie, but now everything works.
Just copy the code into any IDE (e.g. PyCharm) and run it, or open a terminal in the script's folder and run `python ci-en_dw.py` so the window stays open and you can see any errors.
With https://dl-pay.com/finish you can buy points for DLsite or Ci-en and then buy NSFW works with MC/VISA/PayPal...
I just did it and it worked just fine.
I've gotten it working, thanks.
It gets stuck and stops when encountering a zip download.
```
Traceback (most recent call last):
  File "D:\Downloads\ci-en_dw\ci-en_dw.py", line 213, in <module>
    download_files(get_post_items(manual_creator_id, posts['post_id']), manual_creator_id, posts['post_id'])
  File "D:\Downloads\ci-en_dw\ci-en_dw.py", line 72, in download_files
    print(item['link'], filepath)
          ~~~~^^^^^^^^
KeyError: 'link'
```
ci_en_session
I didn't have problems downloading archives. If you want, you can try to debug the code manually or with ChatGPT, or send me your `ci_en_session` so that I can debug the code and fix the error. You can contact me via the Session messenger; my id: 054db7c0429dba41aa914895603be91abcc37cc9534fa9128da188a37846f87738
!!! Please note: if you send me your `ci_en_session`, I will have access to the posts of the authors you are subscribed to, so do this only if you trust me.
I forgot about GPT's coding abilities. It recommended this and it works like a charm:
```python
def download_files(post_data, creator_id, post_id):
    base_folder = os.path.join(donwbload_folder, creator_id, post_id)
    pictures_folder = os.path.join(base_folder, 'pictures')
    videos_folder = os.path.join(base_folder, 'videos')
    other_files_folder = os.path.join(base_folder, 'other_files')
    description_folder = os.path.join(base_folder, 'description')

    # Save the raw post metadata alongside the downloads
    os.makedirs(description_folder, exist_ok=True)
    with open(os.path.join(description_folder, 'description.json'), 'w', encoding='utf-8') as f:
        json.dump(post_data, f, ensure_ascii=False)

    for item in post_data['image_url']:
        filename, ext = os.path.splitext(os.path.basename(urllib.parse.urlparse(item['link']).path))
        filename_decoded = urllib.parse.unquote(filename)
        filename_with_num = f"{item['num']}_{filename_decoded}{ext}"
        filepath = os.path.join(pictures_folder, filename_with_num)
        os.makedirs(pictures_folder, exist_ok=True)
        print(item['link'], filepath)
        download_file(item['link'], filepath)

    for item in post_data['video_url']:
        filename, ext = os.path.splitext(os.path.basename(urllib.parse.urlparse(item['link']).path))
        filename_decoded = urllib.parse.unquote(filename)
        filename_with_num = f"{item['num']}_{filename_decoded}{ext}"
        filepath = os.path.join(videos_folder, filename_with_num)
        os.makedirs(videos_folder, exist_ok=True)
        print(item['link'], filepath)
        download_file(item['link'], filepath)

    # Guarding both keys skips entries without a direct link,
    # which is what caused the KeyError above
    if 'download_links' in post_data:
        for item in post_data['download_links']:
            if 'download_link' in item:
                filename, ext = os.path.splitext(os.path.basename(urllib.parse.urlparse(item['download_link']).path))
                filename_decoded = urllib.parse.unquote(filename)
                filename_with_num = f"{item['num']}_{filename_decoded}{ext}"
                filepath = os.path.join(other_files_folder, filename_with_num)
                os.makedirs(other_files_folder, exist_ok=True)
                print(item['download_link'], filepath)
                download_file(item['download_link'], filepath)
```
I've tried using this, but it seems to just create a description.json and not download any of the images. I tested it on a free post and it downloaded the images fine, but when I attempted to download from a creator I subscribe to, it did not work.
Unfortunately, without examining the problematic post, I cannot identify the reason. You can contact me using the contacts above so that I can try to debug the code, or you can use ChatGPT for debugging.
https://ci-en.dlsite.com/creator/22374/article/1119644 It seems that the manga in the post cannot be downloaded.
What did you expect? It's still not supported.