scrapy-mysql-pipeline
scrapy mysql pipeline
Your pipeline is very helpful! Wondering if there is a simple way to route items to separate database tables?
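One hedged way to get per-table routing, assuming the pipeline exposes a `process_item` hook like any Scrapy pipeline: map each item class to a table name and look the table up per item. The class names, the `TABLE_BY_ITEM` mapping, and the default table are illustrative, not part of scrapy-mysql-pipeline itself.

```python
# Sketch: route items to different MySQL tables by item type.
# Plain dict subclasses stand in for Scrapy items to keep the sketch
# self-contained; in a real project these would be scrapy.Item classes.
class AuthorItem(dict):
    pass

class QuoteItem(dict):
    pass

# Illustrative mapping from item class to destination table.
TABLE_BY_ITEM = {
    AuthorItem: 'authors',
    QuoteItem: 'quotes',
}

class RoutingMySQLPipeline:
    def process_item(self, item, spider):
        table = TABLE_BY_ITEM.get(type(item), 'items')  # fall back to a default table
        # ... INSERT the item into `table` here (omitted) ...
        return item
```

The lookup keeps routing logic out of the spiders; a spider just yields the right item class and the pipeline picks the table.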
Introduce [itemadapter](https://github.com/scrapy/itemadapter) into the pipeline to make it compatible with new item types such as [`pydantic` objects](https://github.com/scrapy/itemadapter#pydantic-objects).
`process_item` should `return` the item, not `yield` it; otherwise `None` is written to the log. See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
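A minimal sketch of the fix. Using `yield` turns `process_item` into a generator function, so Scrapy receives a generator object instead of the item; the method has to be a regular method that returns the item so later pipelines can see it. The class name and the omitted persistence step are illustrative.

```python
class MySQLPipeline:
    def process_item(self, item, spider):
        # ... persist `item` to MySQL here (omitted) ...
        return item  # return, don't yield, so subsequent pipelines receive the item
```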
My project uses conda for package management, and it is [not advised](https://www.anaconda.com/blog/using-pip-in-a-conda-environment) to mix conda and pip together in the same environment. Therefore it would be useful to deploy a...
If the setting `MYSQL_TABLE` is not defined, the table name is now generated from the spider name. Alternatively, `MYSQL_TABLE` can optionally contain a `'%s'` printf wildcard, which is replaced by the...
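A hedged sketch of the table-name resolution described above, assuming the `'%s'` wildcard is expanded with the spider name; `resolve_table` is an illustrative helper, not the pipeline's actual internals.

```python
def resolve_table(mysql_table, spider_name):
    """Pick the destination table from the MYSQL_TABLE setting (sketch)."""
    if mysql_table is None:
        return spider_name                 # no setting: fall back to the spider name
    if '%s' in mysql_table:
        return mysql_table % spider_name   # expand the printf wildcard
    return mysql_table                     # fixed table name
```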
If I missed this anywhere my apologies, but it seems SSL connections aren't supported. Any chance this can be added/configured? Thanks!
`self.db.runInteraction(self._process_item, item)` sometimes won't process an item, and sometimes processes it twice.
Item is not being returned from `process_item` properly, so subsequent pipelines cannot process it.
http://www.dougalmatthews.com/2016/Sep/01/automate-publishing-to-pypi-with-pbr-and-travis/