
Support for the new Anubis authenticator

Open edgar1016 opened this issue 5 months ago • 22 comments

I tried using the program today and received the typical "check_cookie: Blocked by Cloudflare captcha, please set your cookie and useragent" error. I updated the cookie and found I was still getting it.

I cleared the site data and logged back in to generate a new cookie. When opening the site I saw a new screen for a split second, one I hadn't seen before, so I looked at the cookies and found that the site is now using this new Anubis authenticator alongside Cloudflare. According to its repo, Anubis is meant as a "nuclear" option to stop AI scraper bots. I'm not sure how to go about getting around this.

edgar1016 avatar Aug 06 '25 01:08 edgar1016

Anubis was added maybe a couple of weeks ago, and people were encountering problems with it in issue #409. I found that adding the Anubis cookies was sufficient previously, so unless something has changed I don't think this problem is Anubis. That said, I am also experiencing Blocked by Cloudflare issues.

Vsolon avatar Aug 06 '25 03:08 Vsolon

Anubis was added maybe a couple of weeks ago, and people were encountering problems with it in issue #409. I found that adding the Anubis cookies was sufficient previously, so unless something has changed I don't think this problem is Anubis. That said, I am also experiencing Blocked by Cloudflare issues.

The way I've been adding my cookies is through the GUI, so I never noticed it until now when I got the error; it would also re-add them automatically if they had been added before. Hopefully this can be sorted out. I know a lot of sites have been implementing heavier anti-scraping measures since the AI boom, with all these companies hoarding data.

edgar1016 avatar Aug 06 '25 06:08 edgar1016

OK, well, I'm glad this isn't just me. I copied the entire cookie set out of the header in Chrome and used that, and it still shows blocked, so something has changed. This was working a couple of days ago and stopped working in the middle of a download; my cookie data had not changed either. I reset all the cookies and re-imported them along with the useragent, and I'm still blocked by Cloudflare even though Chrome can still access the site fine.

billsargent avatar Aug 06 '25 12:08 billsargent

@billsargent @edgar1016 If you'd like a workaround until this gets patched, you can try editing nhentai/utils.py:


from curl_cffi import requests

def request(method, url, **kwargs):
    # curl_cffi impersonates a real browser's TLS fingerprint,
    # which is what gets past the Cloudflare check
    session = requests.Session(impersonate="chrome110")
    session.headers.update(get_headers())

    # Fall back to the configured proxy when the caller didn't pass one
    if not kwargs.get('proxies', None):
        kwargs['proxies'] = {
            'https': constant.CONFIG['proxy'],
            'http': constant.CONFIG['proxy'],
        }

    return getattr(session, method)(url, verify=False, **kwargs)
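The proxy handling above only fills in kwargs['proxies'] when the caller didn't supply one; that fallback can be illustrated in isolation (CONFIG here is a hypothetical stand-in for nhentai's constant.CONFIG, not the real module):

```python
# Hypothetical stand-in for constant.CONFIG, for illustration only
CONFIG = {'proxy': 'http://127.0.0.1:8080'}

def with_proxy_fallback(**kwargs):
    # Only inject the configured proxy when the caller didn't supply one;
    # an explicit proxies= argument always wins
    if not kwargs.get('proxies', None):
        kwargs['proxies'] = {
            'https': CONFIG['proxy'],
            'http': CONFIG['proxy'],
        }
    return kwargs

print(with_proxy_fallback())
print(with_proxy_fallback(proxies={'http': 'http://other:3128'}))
```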

Just pip install curl_cffi

Then you can do pip install --no-cache-dir . to build it, and it should run fine.

I haven't tested this thoroughly, but it works for now while we wait for Ricter or someone else to make a proper fix.

It works for me, at least, so that's all I care about lol.

Vsolon avatar Aug 06 '25 15:08 Vsolon

Just pip install curl_cffi

Then you can do pip install --no-cache-dir . to build it, and it should run fine.

I haven't tested this thoroughly, but it works for now while we wait for Ricter or someone else to make a proper fix.

It works for me, at least, so that's all I care about lol.

Does this work even if I have installed nhentai with pip as well and not from the git repo?

billsargent avatar Aug 06 '25 16:08 billsargent

Just pip install curl_cffi, then pip install --no-cache-dir . to build it, and it should run fine. I haven't tested this thoroughly, but it works for now while we wait for Ricter or someone else to make a proper fix. It works for me, at least, so that's all I care about lol.

Does this work even if I have installed nhentai with pip as well and not from the git repo?

I'd run it in a venv if you don't wanna uninstall it.

Vsolon avatar Aug 06 '25 16:08 Vsolon

I am experiencing the same issue, starting around the time this issue was posted. Anubis was already implemented even before that, yes? It may have been configured differently back then, which would explain the tool no longer working.

benedrill avatar Aug 06 '25 16:08 benedrill

Does this work even if I have installed nhentai with pip as well and not from the git repo?

I'd run it in a venv if you don't wanna uninstall it.

I just rebuilt it all. This worked for me: I patched utils.py and it's running fine now. Thanks for the hotfix :)

billsargent avatar Aug 06 '25 17:08 billsargent

diff --git a/pyproject.toml b/pyproject.toml
index original..modified 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -10,7 +10,8 @@ include = ["nhentai/viewer/**"]


 [tool.poetry.dependencies]
-python = "^3.8"
+python = "^3.9"
+curl-cffi = "^0.13.0"
 requests = "^2.32.3"
 soupsieve = "^2.6"
 beautifulsoup4 = "^4.12.3"
diff --git a/nhentai/utils.py b/nhentai/utils.py
index a627537..44ad764 100644
--- a/nhentai/utils.py
+++ b/nhentai/utils.py
@@ -11,6 +11,7 @@ import requests
 import sqlite3
 import urllib.parse
 from typing import Tuple
+from curl_cffi import requests as cffi_requests

 from nhentai import constant
 from nhentai.constant import PATH_SEPARATOR
@@ -36,7 +37,7 @@ def get_headers():
     return headers

 def request(method, url, **kwargs):
-    session = requests.Session()
+    session = cffi_requests.Session(impersonate="chrome110")
     session.headers.update(get_headers())

     if not kwargs.get('proxies', None):


A diff patch if anyone needs it.

billsargent avatar Aug 07 '25 18:08 billsargent

A diff patch if anyone needs it.

I'm not sure how to interpret this in combination with the "patch" mentioned earlier, which I assume this was derived from, in a way that works for me. I tried pasting multiple parts of it into my utils.py, none of which worked. I'm using the tool locally on Windows 10 (not Linux, no venv) to download my favorites from time to time, if that makes a difference. Could you give a full excerpt of the part you changed that worked, at least for you, in the hope that it's independent of system setup? A screenshot/snip of the part you changed, or just the text, would be enough for me; a full file would work too, if you don't mind sharing it somehow.

DeadlyShadow71 avatar Aug 09 '25 08:08 DeadlyShadow71

I'm not sure how to interpret this in combination with the "patch" mentioned earlier, which I assume this was derived from, in a way that works for me. I tried pasting multiple parts of it into my utils.py, none of which worked. I'm using the tool locally on Windows 10 (not Linux, no venv) to download my favorites from time to time, if that makes a difference. Could you give a full excerpt of the part you changed that worked, at least for you, in the hope that it's independent of system setup? A screenshot/snip of the part you changed, or just the text, would be enough for me; a full file would work too, if you don't mind sharing it somehow.

Restore your utils.py back to normal. At the top, where the imports are, add

from curl_cffi import requests

Then find the section starting with

def request(method, url, **kwargs):

and modify the whole function to look like this:

def request(method, url, **kwargs):
    session = requests.Session(impersonate="chrome110")
    session.headers.update(get_headers())

    if not kwargs.get('proxies', None):
        kwargs['proxies'] = {
            'https': constant.CONFIG['proxy'],
            'http': constant.CONFIG['proxy'],
        }

    return getattr(session, method)(url, verify=False, **kwargs)
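The getattr(session, method)(url, ...) line at the end dispatches to session.get, session.post, and so on by name. A minimal, self-contained sketch of that pattern, with a hypothetical DummySession standing in for the curl_cffi session:

```python
class DummySession:
    """Hypothetical stand-in for a curl_cffi Session, for illustration only."""
    def get(self, url, **kwargs):
        return f"GET {url}"
    def post(self, url, **kwargs):
        return f"POST {url}"

def request(method, url, **kwargs):
    session = DummySession()
    # getattr looks the method up by name, so 'get' -> session.get
    return getattr(session, method)(url, **kwargs)

print(request('get', 'https://example.com'))   # GET https://example.com
print(request('post', 'https://example.com'))  # POST https://example.com
```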

billsargent avatar Aug 09 '25 08:08 billsargent

Restore your utils.py back to normal. At the top, where the imports are, add

from curl_cffi import requests

Then find the section starting with

def request(method, url, **kwargs):

and modify the whole function to look like this:

def request(method, url, **kwargs):
    session = requests.Session(impersonate="chrome110")
    session.headers.update(get_headers())

    if not kwargs.get('proxies', None):
        kwargs['proxies'] = {
            'https': constant.CONFIG['proxy'],
            'http': constant.CONFIG['proxy'],
        }

    return getattr(session, method)(url, verify=False, **kwargs)

Those are about the same steps I took before, and it didn't work. I get this message when trying to run it with the edited utils.py:

Image

DeadlyShadow71 avatar Aug 09 '25 08:08 DeadlyShadow71

Did you install curl_cffi?

billsargent avatar Aug 09 '25 09:08 billsargent

Did you install curl_cffi?

I did that when I tested it initially; it turned out I already had it installed.

DeadlyShadow71 avatar Aug 09 '25 09:08 DeadlyShadow71

Did you install curl_cffi?

I did that when I tested it initially; it turned out I already had it installed.

You're not running the latest nhentai? Did you get it from this repo?

This is my version:

banner: nHentai ver 0.6.0-beta: あなたも変態。 いいね?

billsargent avatar Aug 09 '25 09:08 billsargent

Did you install curl_cffi?

I did that when I tested it initially; it turned out I already had it installed.

You're not running the latest nhentai? Did you get it from this repo?

This is my version:

banner: nHentai ver 0.6.0-beta: あなたも変態。 いいね?

Installing it via pip gives me ver 0.5.25. I'm not sure how to force the version you seem to have using pip, since this repo shows 0.5.24 as the latest release but the files show 0.5.25. I can see that some files say 0.6.0, but that's not what gets installed with pip, at least on my end. I could try reinstalling the package with pip to see if that works?

DeadlyShadow71 avatar Aug 09 '25 10:08 DeadlyShadow71

Did you install curl_cffi?

I did that when I tested it initially; it turned out I already had it installed.

You're not running the latest nhentai? Did you get it from this repo? This is my version: banner: nHentai ver 0.6.0-beta: あなたも変態。 いいね?

Installing it via pip gives me ver 0.5.25. I'm not sure how to force the version you seem to have using pip, since this repo shows 0.5.24 as the latest release but the files show 0.5.25. I can see that some files say 0.6.0, but that's not what gets installed with pip, at least on my end. I could try reinstalling the package with pip to see if that works?

I used git, cloned the repo, and got 0.6.0-beta, then built it from that.

Use pip to remove the one you have, git clone this repo, modify utils.py, then run

pip install --no-cache-dir .

from inside the root folder of nhentai, and it should install on your system.

billsargent avatar Aug 09 '25 10:08 billsargent

Did you install curl_cffi?

I did that when I tested it initially; it turned out I already had it installed.

You're not running the latest nhentai? Did you get it from this repo? This is my version: banner: nHentai ver 0.6.0-beta: あなたも変態。 いいね?

Installing it via pip gives me ver 0.5.25. I'm not sure how to force the version you seem to have using pip, since this repo shows 0.5.24 as the latest release but the files show 0.5.25. I can see that some files say 0.6.0, but that's not what gets installed with pip, at least on my end. I could try reinstalling the package with pip to see if that works?

I used git, cloned the repo, and got 0.6.0-beta, then built it from that.

Use pip to remove the one you have, git clone this repo, modify utils.py, then run

pip install --no-cache-dir .

from inside the root folder of nhentai, and it should install on your system.

Perfect, it works now. Although I ran pip install --no-cache-dir . before I modified utils.py, it still works, and I now have 0.6.0-beta too. Thanks for the help. I assume that, apart from implementing either this fix or another one, this issue can be closed now?

DeadlyShadow71 avatar Aug 09 '25 10:08 DeadlyShadow71

Sadly, the changes nhentai has made have destroyed the ability to scrape while also using a VPN: Cloudflare gets triggered every hour or so, meaning you need a new token. If you don't use a VPN (or use a VPN endpoint that isn't well known), it doesn't seem to trigger at all.

Weirdly, if you take the cookie that was getting Cloudflare-blocked while using a VPN and use it without the VPN, it just works; you don't even need a new cookie. Very sad.

HerptyDerpoty avatar Aug 20 '25 15:08 HerptyDerpoty

A diff patch if anyone needs it.

This helped me, thank you <3

drafthard65 avatar Aug 30 '25 18:08 drafthard65

I'm not sure how to interpret this in combination with the "patch" mentioned earlier, which I assume this was derived from, in a way that works for me. I tried pasting multiple parts of it into my utils.py, none of which worked. I'm using the tool locally on Windows 10 (not Linux, no venv) to download my favorites from time to time, if that makes a difference. Could you give a full excerpt of the part you changed that worked, at least for you, in the hope that it's independent of system setup? A screenshot/snip of the part you changed, or just the text, would be enough for me; a full file would work too, if you don't mind sharing it somehow.

Restore your utils.py back to normal. At the top, where the imports are, add from curl_cffi import requests. Then find the section starting with def request(method, url, **kwargs): and modify the whole function to look like this:

def request(method, url, **kwargs):
    session = requests.Session(impersonate="chrome110")
    session.headers.update(get_headers())

    if not kwargs.get('proxies', None):
        kwargs['proxies'] = {
            'https': constant.CONFIG['proxy'],
            'http': constant.CONFIG['proxy'],
        }

    return getattr(session, method)(url, verify=False, **kwargs)

Those are about the same steps I took before, and it didn't work. I get this message when trying to run it with the edited utils.py:

Image

Add this:

def get_headers():
    """Return the request headers"""
    return {
        'Referer': constant.LOGIN_URL,
        'User-Agent': constant.CONFIG['useragent'],
        'Cookie': constant.CONFIG['cookie'],
    }
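The error in the screenshot presumably came from get_headers being missing. A self-contained sketch of what it does, with hypothetical LOGIN_URL and CONFIG values standing in for nhentai's constant module:

```python
# Hypothetical stand-ins for nhentai's constant module, for illustration only
LOGIN_URL = 'https://example.com/login'
CONFIG = {'useragent': 'Mozilla/5.0', 'cookie': 'cf_clearance=abc'}

def get_headers():
    """Build the headers sent with every request: referer,
    the browser user agent, and the saved Cloudflare/Anubis cookie."""
    return {
        'Referer': LOGIN_URL,
        'User-Agent': CONFIG['useragent'],
        'Cookie': CONFIG['cookie'],
    }

print(get_headers())
```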

cancan23333 avatar Sep 25 '25 09:09 cancan23333

Sadly, I'm still running into "blocked by cloudflare" after changing the code mentioned above. I installed curl_cffi and am running v0.6.0. Anyone got any ideas?

ArexGit avatar Oct 14 '25 08:10 ArexGit