BilibiliRankListSpider
Bump scrapy from 1.5.0 to 1.8.2
Bumps scrapy from 1.5.0 to 1.8.2.
Release notes
Sourced from scrapy's releases.
1.8.2
Security bug fixes
When a Request object with cookies defined gets a redirect response causing a new Request object to be scheduled, the cookies defined in the original Request object are no longer copied into the new Request object.
If you manually set the Cookie header on a Request object and the domain name of the redirect URL is not an exact match for the domain of the URL of the original Request object, your Cookie header is now dropped from the new Request object.
The old behavior could be exploited by an attacker to gain access to your cookies. Please see the cjvr-mfj7-j4j8 security advisory for more information.
Note: It is still possible to enable the sharing of cookies between different domains with a shared domain suffix (e.g. example.com and any subdomain) by defining the shared domain suffix (e.g. example.com) as the cookie domain when defining your cookies. See the documentation of the Request class for more information.
When the domain of a cookie, either received in the Set-Cookie header of a response or defined in a Request object, is set to a public suffix (https://publicsuffix.org/), the cookie is now ignored unless the cookie domain is the same as the request domain.
The old behavior could be exploited by an attacker to inject cookies from a controlled domain into your cookiejar that could be sent to other domains not controlled by the attacker. Please see the mfjm-vh54-3f96 security advisory for more information.
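As an illustration of the note above, here is a minimal sketch of opting in to shared-suffix cookie sharing by setting the cookie domain explicitly when defining cookies on a Request (the spider name, URL, and cookie values are placeholders; the list-of-dicts cookies form is Scrapy's documented API):

```python
import scrapy


class SharedCookieSpider(scrapy.Spider):
    name = "shared-cookie-example"  # hypothetical spider for illustration

    def start_requests(self):
        # Defining the cookie with domain "example.com" (the shared suffix)
        # is what opts this cookie in to being sent to any subdomain.
        # Without an explicit domain, Scrapy 1.8.2 no longer carries
        # cookies across cross-domain redirects.
        yield scrapy.Request(
            "https://www.example.com/",
            cookies=[
                {
                    "name": "session",
                    "value": "placeholder-value",
                    "domain": "example.com",  # shared domain suffix
                    "path": "/",
                }
            ],
            callback=self.parse,
        )

    def parse(self, response):
        self.logger.info("Fetched %s", response.url)
```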
1.8.1
Security bug fix:
If you use HttpAuthMiddleware (i.e. the http_user and http_pass spider attributes) for HTTP authentication, any request exposes your credentials to the request target.
To prevent unintended exposure of authentication credentials to unintended domains, you must now additionally set a new spider attribute, http_auth_domain, and point it to the specific domain to which the authentication credentials must be sent.
If the http_auth_domain spider attribute is not set, the domain of the first request will be considered the HTTP authentication target, and authentication credentials will only be sent in requests targeting that domain.
If you need to send the same HTTP authentication credentials to multiple domains, you can use w3lib.http.basic_auth_header instead to set the value of the Authorization header of your requests.
If you really want your spider to send the same HTTP authentication credentials to any domain, set the http_auth_domain spider attribute to None.
Finally, if you are a user of scrapy-splash, know that this version of Scrapy breaks compatibility with scrapy-splash 0.7.2 and earlier. You will need to upgrade scrapy-splash to a newer version for it to continue to work.
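For illustration, a minimal sketch combining both approaches described above. The http_auth_domain attribute and w3lib.http.basic_auth_header are the real APIs named in the release notes; the spider name, credentials, and domains are placeholders:

```python
import scrapy
from w3lib.http import basic_auth_header


class AuthSpider(scrapy.Spider):
    name = "auth-example"  # hypothetical spider for illustration
    start_urls = ["https://api.example.com/private"]

    # Handled by HttpAuthMiddleware; http_auth_domain (new in 1.8.1)
    # restricts which domain receives these credentials.
    http_user = "user"
    http_pass = "pass"
    http_auth_domain = "api.example.com"

    def parse(self, response):
        # To send the same credentials to another, unrelated domain,
        # set the Authorization header explicitly instead of relying
        # on HttpAuthMiddleware.
        yield scrapy.Request(
            "https://other.example.org/private",
            headers={"Authorization": basic_auth_header("user", "pass")},
            callback=self.parse_other,
        )

    def parse_other(self, response):
        self.logger.info("Fetched %s", response.url)
```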
1.7.4
Revert the fix for #3804 (#3819), which has a few undesired side effects (#3897, #3976).
1.7.3
Enforce lxml 4.3.5 or lower for Python 3.4 (#3912, #3918)
1.7.2
Fix Python 2 support (#3889, #3893, #3896)
1.7.0
Highlights:
- Improvements for crawls targeting multiple domains
- A cleaner way to pass arguments to callbacks
- A new class for JSON requests (sketched below)
- Improvements for rule-based spiders
- New features for feed exports
... (truncated)
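Two of these highlights, the JSON request class (JsonRequest) and the cleaner callback arguments (cb_kwargs), are sketched below; the spider name, URL, and payload are placeholders:

```python
import scrapy
from scrapy.http import JsonRequest


class JsonApiSpider(scrapy.Spider):
    name = "json-example"  # hypothetical spider for illustration

    def start_requests(self):
        # JsonRequest (new in 1.7) serializes `data` to JSON and sets
        # the Content-Type: application/json header automatically.
        yield JsonRequest(
            "https://api.example.com/search",
            data={"query": "scrapy", "page": 1},
            callback=self.parse,
            # cb_kwargs (also new in 1.7) passes extra arguments to the
            # callback without going through request.meta.
            cb_kwargs={"page": 1},
        )

    def parse(self, response, page):
        self.logger.info("Page %d returned %d bytes", page, len(response.body))
```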
Changelog
Sourced from scrapy's changelog.
Scrapy 1.8.2 (2022-03-01)
Security bug fixes:
When a Request object with cookies defined gets a redirect response causing a new Request object to be scheduled, the cookies defined in the original Request object are no longer copied into the new Request object.
If you manually set the Cookie header on a Request object and the domain name of the redirect URL is not an exact match for the domain of the URL of the original Request object, your Cookie header is now dropped from the new Request object.
The old behavior could be exploited by an attacker to gain access to your cookies. Please see the cjvr-mfj7-j4j8 security advisory for more information: https://github.com/scrapy/scrapy/security/advisories/GHSA-cjvr-mfj7-j4j8
Note: It is still possible to enable the sharing of cookies between different domains with a shared domain suffix (e.g. example.com and any subdomain) by defining the shared domain suffix (e.g. example.com) as the cookie domain when defining your cookies. See the documentation of the Request class for more information.
When the domain of a cookie, either received in the Set-Cookie header of a response or defined in a Request object, is set to a public suffix (https://publicsuffix.org/), the cookie is now ignored unless the cookie domain is the same as the request domain.
The old behavior could be exploited by an attacker to inject cookies into your requests to some other domains. Please see the mfjm-vh54-3f96 security advisory for more information: https://github.com/scrapy/scrapy/security/advisories/GHSA-mfjm-vh54-3f96
Scrapy 1.8.1 (2021-10-05)
Security bug fix:
If you use
... (truncated)
Commits
- ae41acb Bump version: 1.8.1 → 1.8.2
- d2589c7 test_unbounded_response: to_unicode → custom six code
- efd72b0 tests: unicode → to_unicode
- c5c2e2c tests: fix cast (str → unicode)
- ed2348d CI: Install mock in Python 3.5 with pinned dependencies
- 919f52f CI: Install mock in Python 3.5
- 610ce9b Fix typo
- 6c55f76 Stop mixing keyword arguments and ** in function calls
- c4b22a5 Remove further Python 3 syntax
- c0e745e Remove Python 3 syntax
- Additional commits viewable in compare view
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- @dependabot rebase will rebase this PR
- @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
- @dependabot merge will merge this PR after your CI passes on it
- @dependabot squash and merge will squash and merge this PR after your CI passes on it
- @dependabot cancel merge will cancel a previously requested merge and block automerging
- @dependabot reopen will reopen this PR if it is closed
- @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
- @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
- @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
- @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the Security Alerts page.