Support "html5" type to use html5lib parser

Open redapple opened this issue 8 years ago • 43 comments

Every now and then we get a bug report about some HTML source not being parsed as a browser would.

There was the idea in Scrapy of adding an "html5" type to switch to an HTML5-compliant parser. One such parser is html5lib, which can be used with lxml.
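
For reference, a minimal sketch of what "html5lib used with lxml" looks like (both libraries must be installed; this is an illustration, not the proposed parsel API):

import html5lib

# html5lib follows the WHATWG HTML5 parsing algorithm; with the "lxml"
# treebuilder it produces an lxml document tree that parsel could wrap.
# namespaceHTMLElements=False keeps element names free of the XHTML namespace.
document = html5lib.parse(
    "<p>unclosed <b>markup",
    treebuilder="lxml",
    namespaceHTMLElements=False,
)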

redapple avatar May 09 '17 10:05 redapple

I'll work on this :muscle:

joaquingx avatar Jan 11 '19 20:01 joaquingx

Is there any update on this? In my project, I can scrape items with BeautifulSoup that fail with Scrapy; it happens on about 1 in 20 pages. I hate to waste page data if I don't have to :) Or, is there an elegant workaround?

Example Code

import requests
from bs4 import BeautifulSoup
from parsel import Selector

url = 'https://www.homeadvisor.com/rated.CoventryAdditions.60530954.html'
r = requests.get(url)

# Author fails
sel = Selector(text=r.text)
print('Page Title is {}'.format(sel.xpath("//title//text()").get()))  # Success
print(sel.xpath('//span[contains(@itemprop,"author")]//text()').get())  # no match (None)
print(sel.css('span[itemprop="author"]').get())  # no match (None)

# Author works
soup = BeautifulSoup(r.content, 'lxml')  # also works with html5lib
print('title is: {}'.format(soup.title.text))  # Success
for author in soup.find_all("span", {"itemprop": "author"}):
    print(author.text)  # Success

grahamanderson avatar May 07 '19 18:05 grahamanderson

@grahamanderson You can try and review the pull request at https://github.com/scrapy/parsel/pull/133

Alternatively, you can use the following workaround in a downloader middleware or in the callbacks of your spider:

from bs4 import BeautifulSoup

# …

response = response.replace(body=str(BeautifulSoup(response.body, "html5lib")))

Gallaecio avatar May 08 '19 07:05 Gallaecio

Thank you @Gallaecio! I used the scrapy-beautifulsoup code as middleware. Strangely, I did not have to resort to using html5lib; BeautifulSoup's lxml parser seems a bit more robust than Scrapy/Parsel's lxml parser.

from bs4 import BeautifulSoup


class BeautifulSoupMiddleware(object):
    def __init__(self, crawler):
        super(BeautifulSoupMiddleware, self).__init__()
        self.parser = crawler.settings.get('BEAUTIFULSOUP_PARSER', "html.parser")

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler)

    def process_response(self, request, response, spider):
        """Pipe response.body through BeautifulSoup before the spider sees it."""
        return response.replace(body=str(BeautifulSoup(response.body, self.parser)))
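
To enable it, something along these lines would go in the project settings (the module path and the priority value are placeholders):

# settings.py -- module path and priority are placeholders.
DOWNLOADER_MIDDLEWARES = {
    "myproject.middlewares.BeautifulSoupMiddleware": 543,
}
BEAUTIFULSOUP_PARSER = "lxml"  # or "html5lib" / "html.parser"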

grahamanderson avatar May 08 '19 17:05 grahamanderson

From @whalebot-helmsman:

There is an HTML5 parser implementation for lxml (https://lxml.de/api/lxml.html.html5parser-pysrc.html)

Gallaecio avatar Feb 04 '20 14:02 Gallaecio

Hi all, I am rather late to start for GSoC 2020, but I found this issue interesting and I have good knowledge of web development with Python and JavaScript. Can someone help me with how to get started?

aryamanpuri avatar Mar 16 '20 13:03 aryamanpuri

Start by reading http://gsoc2020.scrapinghub.com/participate and the links at the top to Python and Google resources. Mind that student applications have just opened and will close in a couple of weeks.

Gallaecio avatar Mar 16 '20 19:03 Gallaecio

So should I start contributing to the project, or start making a good proposal?

aryamanpuri avatar Mar 17 '20 06:03 aryamanpuri

You can start with whichever you prefer, but you need to do both before the deadline; proposals from students that have not submitted any patch will not be considered.

If you start with your proposal, and you can manage to isolate a small part of it that you can implement in a week or less, you could implement that part as your contribution, which would speak highly of your ability to complete the rest of the project.

Gallaecio avatar Mar 17 '20 09:03 Gallaecio

Parsel can extract data from HTML and XML, but because of some quirks in HTML, such as the use of # in tag attributes and the different way browsers build and display the tag tree, an html5lib parser is needed. Do I have that right? Anything more that can help me?

aryamanpuri avatar Mar 17 '20 09:03 aryamanpuri

Make sure you have a look at the issues linked from this thread.

Another benefit of supporting a parser like html5lib, for example, is that the HTML tree that it builds in memory is closer to what you see in a web browser when you use the Inspect feature.
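
A quick sketch of that difference (exact serializations may vary across library versions): browsers, and html5lib, insert the implicit <tbody> that lxml's default parser omits.

import html5lib
from lxml import etree, html

snippet = "<table><tr><td>cell"

# lxml's libxml2-based parser typically does not add the implicit <tbody>.
print(etree.tostring(html.fromstring(snippet)))

# html5lib builds the tree a browser would, inserting <tbody> per the spec.
doc = html5lib.parse(snippet, treebuilder="lxml", namespaceHTMLElements=False)
print(etree.tostring(doc))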

Gallaecio avatar Mar 17 '20 10:03 Gallaecio

There is an HTML5 parser implementation for lxml (https://lxml.de/api/lxml.html.html5parser-pysrc.html)

In my tests it looked quite slow (e.g. 130 ms to parse an HTML document that took lxml.html only 9 ms), while html5-parser looks fast (only 7 ms for the same document) and returns an lxml tree as well: https://html5-parser.readthedocs.io/en/latest/

EDIT: although there is a problem: html5-parser returns lxml.etree._Element, while lxml.html returns lxml.html.HtmlElement, and the two have slightly different APIs.
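
For context, both parsers compared above are one call each and return an lxml tree (a sketch based on their documented interfaces; timings are machine-dependent):

from html5_parser import parse
from lxml.html.html5parser import document_fromstring

snippet = "<p>Hello <b>world"

slow_tree = document_fromstring(snippet)  # html5lib-based: compliant but slow
fast_root = parse(snippet)                # gumbo-based: compliant and fast,
                                          # returns an lxml.etree._Element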

lopuhin avatar Sep 22 '20 13:09 lopuhin

Can I work on it for GSoC?

99bcsagar avatar Mar 10 '21 17:03 99bcsagar

Can I work on it for GSoC?

That would be great. Please have a look at https://gsoc2021.zyte.com/participate for details.

Gallaecio avatar Mar 10 '21 20:03 Gallaecio

Sir, is it a continuation of previous contributions, or should I do it completely new?

99bcsagar avatar Mar 12 '21 11:03 99bcsagar

There has been a previous attempt with feedback, https://github.com/scrapy/parsel/pull/133, which could serve as a starting point or inform an alternative approach. Other than that, this would need to be done from scratch, yes.

Gallaecio avatar Mar 12 '21 11:03 Gallaecio

Hello, I am new here. Should I work on this project? There are not many new issues listed here.

ashishsun avatar Mar 17 '21 03:03 ashishsun

Hello, I am new here. Should I work on this project? There are not many new issues listed here.

Do you mean as a Google Summer of Code student candidate?

Gallaecio avatar Mar 17 '21 09:03 Gallaecio

Hello, my name is Garry Putranto Arimurti, a GSoC candidate. I am interested in contributing to this project and I would like to learn more about the issue so I can work on it. Is there any specific issue I can work on and improve here? Thanks!

garput2 avatar Apr 11 '21 10:04 garput2

@garput2 It’s hard to provide feedback without specific questions, but I guess https://github.com/scrapy/parsel/pull/153 is a somewhat related pull request that gives a view of what would probably be a good first step towards supporting an HTML5 parser.

On the other hand, to participate in GSoC with us you need a pre-application pull request, in addition to presenting a proposal. Since today is the last day to present a proposal, your timing is a little tight.

Gallaecio avatar Apr 13 '21 08:04 Gallaecio

Create a Selector for html5:

from lxml.html.html5parser import document_fromstring
from parsel import Selector

def selector_from_html5(response):
    # Parse the body with lxml's html5lib-based parser and wrap the
    # resulting tree in a parsel Selector.
    root = document_fromstring(response.text)
    return Selector(root=root, type='html')

tonal avatar Aug 11 '21 04:08 tonal

I think the recent work done by @whalebot-helmsman on https://github.com/kovidgoyal/html5-parser/ is relevant here: it is now possible to use a fast and compliant HTML5 parser (using a variant of the gumbo parser) and get an lxml.html tree as a result with treebuilder='lxml_html'
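
In code, that would look something like the sketch below (assumes a html5-parser version that ships the lxml_html treebuilder):

from html5_parser import parse

root = parse("<p>Hello <b>world", treebuilder="lxml_html")
# root should be an lxml.html.HtmlElement, so lxml.html conveniences such as
# .text_content() and .make_links_absolute() are available, which plain
# lxml.etree._Element lacks.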

lopuhin avatar Aug 11 '21 10:08 lopuhin

Yes, it is possible. There is one thing that hinders widespread adoption of html5-parser: you need to install lxml from sources.

whalebot-helmsman avatar Aug 12 '21 12:08 whalebot-helmsman

  response.replace(body=str(BeautifulSoup(response.body, self.parser)))

You can get a charset error using this if the original page was not UTF-8 encoded, because the response has been set to a different encoding. So you must first change the encoding.

In addition, there may be a problem with character escaping. For example, if the character < occurs in the text of the HTML, it must be escaped as &lt;. Otherwise, lxml will delete it along with the nearby text, treating it as a malformed HTML tag. html5lib escapes such characters, but it is slow.

r = response.replace(encoding='utf-8', body=str(BeautifulSoup(response.body, 'html5lib')))

"html.parser" is faster, but from_encoding must also be specified (to example 'cp1251').

r = response.replace(encoding='utf-8', body=str(BeautifulSoup(response.body, 'html.parser', from_encoding='cp1251')))

vladiscripts avatar Oct 07 '21 14:10 vladiscripts

Yes, it is possible. There is one thing that hinders widespread adoption of html5-parser: you need to install lxml from sources.

Another option is selectolax. The only issue would be a possible (I don't know if it is an actual issue) legal problem: rushter/selectolax#18.

averms avatar Oct 09 '21 00:10 averms

I believe there is no legal issue.

That said, Parsel heavily relies on lxml, whereas https://github.com/rushter/selectolax seems to go a different route, offering much better performance according to them. So I think integrating selectolax into Parsel while keeping the Parsel API and behavior would be rather hard, compared to something like https://github.com/scrapy/parsel/issues/83#issuecomment-896705459.

On the other hand, if the upstream benchmark results are to be trusted (~7 times faster than lxml), in the long term it may be worth looking into replacing, or at least allowing users to replace, the Parsel lxml backend with one based on selectolax. But that should probably be logged as a separate issue. Maybe a good idea for a Google Summer of Code project.

Gallaecio avatar Oct 09 '21 06:10 Gallaecio

Seems like selectolax does not offer support for XPath selectors and supports only CSS selectors. If the lxml backend were to be replaced with selectolax, should XPath selectors be supported by converting XPath to CSS? This could be done by adding conversion support in cssselect; I found a quick workaround using the library cssify.
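
For context, a minimal selectolax sketch (CSS selectors only, per its documented API):

from selectolax.parser import HTMLParser

tree = HTMLParser("<p>Hello <b>world</b></p>")
node = tree.css_first("p > b")  # CSS selection is supported
print(node.text())              # prints "world"; there is no XPath equivalent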

deepakdinesh1123 avatar Feb 23 '22 10:02 deepakdinesh1123

should XPath selectors be supported by converting XPath to CSS?

I would not go that route because while all CSS Selectors expressions can be expressed as XPath 1.0, it does not work the other way around. I think supporting CSS Selectors expressions only would be OK in this case.

Gallaecio avatar Feb 23 '22 18:02 Gallaecio

I think supporting CSS Selectors expressions only would be OK in this case.

So, should the existing backend be preserved to support XPath alongside a new parser for CSS, or should another parser that supports XPath be added?

deepakdinesh1123 avatar Feb 27 '22 13:02 deepakdinesh1123

I am just thinking out loud here, I have no strong opinions, but my guess is that, from the user's perspective, you would choose a parser (or pass an instance of one) when creating a Selector, and for this alternative parser, calls to xpath and related methods would raise NotImplementedError.
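
Something like this rough sketch, perhaps (all names are hypothetical, not actual parsel or selectolax integration code):

from selectolax.parser import HTMLParser

class SelectolaxSelector:
    """Hypothetical CSS-only selector backed by selectolax."""

    def __init__(self, text):
        self._tree = HTMLParser(text)

    def css(self, query):
        # Return the raw HTML of each match, loosely mirroring .getall().
        return [node.html for node in self._tree.css(query)]

    def xpath(self, query, **kwargs):
        raise NotImplementedError("the selectolax backend supports CSS only")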

Gallaecio avatar Feb 28 '22 15:02 Gallaecio