
Bump scrapy from 2.1.0 to 2.5.1 in /crawlerx_server #133

Closed

Conversation


@dependabot dependabot bot commented on behalf of github Oct 6, 2021

Bumps scrapy from 2.1.0 to 2.5.1.

Release notes

Sourced from scrapy's releases.

2.5.1

Security bug fix:

If you use HttpAuthMiddleware (i.e. the http_user and http_pass spider attributes) for HTTP authentication, any request exposes your credentials to the request target.

To prevent unintended exposure of authentication credentials to unintended domains, you must now also set a new spider attribute, http_auth_domain, and point it to the specific domain to which the authentication credentials must be sent.

If the http_auth_domain spider attribute is not set, the domain of the first request will be considered the HTTP authentication target, and authentication credentials will only be sent in requests targeting that domain.

If you need to send the same HTTP authentication credentials to multiple domains, you can use w3lib.http.basic_auth_header instead to set the value of the Authorization header of your requests.

If you really want your spider to send the same HTTP authentication credentials to any domain, set the http_auth_domain spider attribute to None.
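The multi-domain alternative can be sketched in plain Python. The function below is a stdlib re-implementation of what w3lib.http.basic_auth_header produces, shown here for illustration only; the credentials are placeholders.

```python
import base64

def basic_auth_header(username, password, encoding="ISO-8859-1"):
    """Stdlib sketch of w3lib.http.basic_auth_header (RFC 7617 Basic scheme)."""
    credentials = f"{username}:{password}".encode(encoding)
    return b"Basic " + base64.b64encode(credentials)

# Set the Authorization header per request when the same credentials
# must go to several domains (placeholder credentials):
headers = {"Authorization": basic_auth_header("user", "pass")}
```

For a single domain, you would instead rely on the http_user, http_pass, and http_auth_domain spider attributes described above and let HttpAuthMiddleware add the header for you.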

Finally, if you are a user of scrapy-splash, know that this version of Scrapy breaks compatibility with scrapy-splash 0.7.2 and earlier. You will need to upgrade scrapy-splash to a newer version for it to continue working.

2.5.0

  • Official Python 3.9 support
  • Experimental HTTP/2 support
  • New get_retry_request() function to retry requests from spider callbacks
  • New headers_received signal that allows stopping downloads early
  • New Response.protocol attribute

See the full changelog
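The idea behind get_retry_request() can be sketched framework-free. This is not Scrapy's implementation: the Request dataclass is a stand-in for scrapy.Request, and only the retry_times meta key mirrors Scrapy's convention.

```python
from dataclasses import dataclass, field, replace

@dataclass(frozen=True)
class Request:  # illustrative stand-in for scrapy.Request
    url: str
    meta: dict = field(default_factory=dict)

def get_retry_request(request, max_retry_times=2):
    """Return a retry copy of the request, or None once retries are exhausted."""
    retries = request.meta.get("retry_times", 0) + 1
    if retries > max_retry_times:
        return None  # caller should give up on this request
    return replace(request, meta={**request.meta, "retry_times": retries})
```

In a real Scrapy 2.5 spider callback you would call the actual helper and, if it returns a request, yield it.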

2.4.1

  • Fixed feed exports overwrite support

  • Fixed the asyncio event loop handling, which could make code hang

  • Fixed the IPv6-capable DNS resolver CachingHostnameResolver for download handlers that call reactor.resolve

  • Fixed the output of the genspider command showing placeholders instead of the import path of the generated spider module (issue 4874)

2.4.0

Highlights:

  • Python 3.5 support has been dropped.

  • The file_path method of media pipelines can now access the source item.

    This allows you to set a download file path based on item data.

  • The new item_export_kwargs key of the FEEDS setting lets you define keyword parameters to pass to item exporter classes.

  • You can now choose whether feed exports overwrite or append to the output file.

    For example, when using the crawl or runspider commands, you can use the -O option instead of -o to overwrite the output file.

  • Zstd-compressed responses are now supported if zstandard is installed.

... (truncated)
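The last two 2.4.0 items can be combined in a hypothetical settings.py fragment. The file name "output.json" is a placeholder; export_empty_fields is a standard BaseItemExporter option, used here purely as an example of item_export_kwargs.

```python
# Hypothetical Scrapy settings fragment ("output.json" is illustrative).
FEEDS = {
    "output.json": {
        "format": "json",
        "overwrite": True,  # 2.4.0+: overwrite the file instead of appending
        "item_export_kwargs": {"export_empty_fields": True},
    },
}
```

On the command line, the equivalent overwrite behaviour is the -O option of the crawl and runspider commands, as noted above.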

Changelog

Sourced from scrapy's changelog.

Scrapy 2.5.1 (2021-10-05)

  • Security bug fix:

    If you use scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware (i.e. the http_user and http_pass spider attributes) for HTTP authentication, any request exposes your credentials to the request target.

    To prevent unintended exposure of authentication credentials to unintended domains, you must now also set a new spider attribute, http_auth_domain, and point it to the specific domain to which the authentication credentials must be sent.

    If the http_auth_domain spider attribute is not set, the domain of the first request will be considered the HTTP authentication target, and authentication credentials will only be sent in requests targeting that domain.

    If you need to send the same HTTP authentication credentials to multiple domains, you can use w3lib.http.basic_auth_header instead to set the value of the Authorization header of your requests.

    If you really want your spider to send the same HTTP authentication credentials to any domain, set the http_auth_domain spider attribute to None.

    Finally, if you are a user of scrapy-splash_, know that this version of Scrapy breaks compatibility with scrapy-splash 0.7.2 and earlier. You will need to upgrade scrapy-splash to a newer version for it to continue working.

.. _scrapy-splash: https://github.com/scrapy-plugins/scrapy-splash

.. _release-2.5.0:

Scrapy 2.5.0 (2021-04-06)

Highlights:

  • Official Python 3.9 support

  • Experimental HTTP/2 support

  • New scrapy.downloadermiddlewares.retry.get_retry_request function to retry requests from spider callbacks

... (truncated)

Commits
  • 61130c8 Bump version: 2.5.0 → 2.5.1
  • 98d2173 Pin the libxml2 version in CI as a newer one breaks lxml (#5208)
  • 47fb908 [CI] fail-fast: false (#5200)
  • 6d7179b tests: freeze pylint==2.7.4
  • d06dcb8 tests: force queuelib < 1.6.0
  • d99b1a1 Cover 2.5.1 in the release notes
  • c9485a5 Small documentation fixes.
  • a172844 Add http_auth_domain to HttpAuthMiddleware.
  • 5fd75f8 docs: require sphinx-rtd-theme>=0.5.2 and the latest pip to prevent installin...
  • e63188c Bump version: 2.4.1 → 2.5.0
  • Additional commits viewable in compare view

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
  • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
  • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
  • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

You can disable automated security fix PRs for this repo from the Security Alerts page.

Bumps [scrapy](https://github.com/scrapy/scrapy) from 2.1.0 to 2.5.1.
- [Release notes](https://github.com/scrapy/scrapy/releases)
- [Changelog](https://github.com/scrapy/scrapy/blob/master/docs/news.rst)
- [Commits](scrapy/scrapy@2.1.0...2.5.1)

---
updated-dependencies:
- dependency-name: scrapy
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <[email protected]>
@dependabot dependabot bot added the dependencies and python labels Oct 6, 2021

dependabot bot commented on behalf of github Mar 2, 2022

Superseded by #145.

@dependabot dependabot bot closed this Mar 2, 2022
@dependabot dependabot bot deleted the dependabot/pip/crawlerx_server/scrapy-2.5.1 branch March 2, 2022 14:55