Parse HTML via XPath

Question:

In .Net, I found this great library, HtmlAgilityPack that allows you to easily parse non-well-formed HTML using XPath. I’ve used this for a couple years in my .Net sites, but I’ve had to settle for more painful libraries for my Python, Ruby and other projects. Is anyone aware of similar libraries for other languages?

Asked By: Tristan Havelick


Answers:

BeautifulSoup is a good Python library for dealing with messy HTML in clean ways.
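Note that BeautifulSoup exposes its own search API rather than XPath. A minimal sketch of that API, assuming the modern bs4 package (which post-dates this thread) is installed:

```python
from bs4 import BeautifulSoup

# Deliberately malformed input: the <b> and <p> tags are never closed
soup = BeautifulSoup("<p class='greet'>Hello <b>world", "html.parser")

# BeautifulSoup repairs the tree and offers find/select instead of XPath
p = soup.find("p", class_="greet")
print(p.get_text())  # Hello world
```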

Answered By: Ned Batchelder

It seems the question could be more precisely stated as “How to convert HTML to XML so that XPath expressions can be evaluated against it”.

Here are two good tools:

  1. TagSoup, an open-source Java library developed by John Cowan. It is
    a SAX-compliant parser that, instead of parsing well-formed or valid XML, parses HTML as it is found in the wild: poor, nasty and brutish, though quite often far from short. TagSoup is designed for people who have to process this stuff using some semblance of a rational application design. By providing a SAX interface, it allows standard XML tools to be applied to even the worst HTML. TagSoup also includes a command-line processor that reads HTML files and can generate either clean HTML or well-formed XML that is a close approximation to XHTML.
    Taggle is a commercial C++ port of TagSoup.

  2. SgmlReader is a tool developed by Microsoft’s Chris Lovett.
    SgmlReader is an XmlReader API over any SGML document (including built-in support for HTML). A command-line utility is also provided which outputs the well-formed XML result.
    Download the zip file including the standalone executable and the full source code: SgmlReader.zip

Answered By: Dimitre Novatchev

For Ruby, I highly recommend Hpricot that Jb Evain pointed out. If you’re looking for a faster libxml-based competitor, Nokogiri (see http://tenderlovemaking.com/2008/10/30/nokogiri-is-released/) is pretty good too (it supports both XPath and CSS searches like Hpricot but is faster). There’s a basic wiki and some benchmarks.

Answered By: Chu Yeow

There is a free C library for XML called libxml2, which includes XPath support and which I have used with great success; you can tell it to load a document as HTML. This has worked for me even on some less-than-perfect HTML documents.

For the most part, XPath is most useful when the inbound HTML is properly coded and can be read ‘like an XML document’. You may want to consider a utility built specifically for cleaning up HTML documents, such as HTML Tidy: http://tidy.sourceforge.net/

As far as XPath tools go, you will likely find that most implementations are actually based on pre-existing C or C++ libraries such as libxml2.
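As an illustration of such a binding, lxml (a Python wrapper over libxml2) can load tag soup in HTML mode and then evaluate XPath over the recovered tree; a minimal sketch with made-up markup:

```python
from lxml import html

# libxml2's HTML parser recovers from the missing </li> close tags
doc = html.fromstring("<ul><li>one<li>two</ul>")

# Once parsed, the repaired tree can be queried like any XML document
items = doc.xpath("//li/text()")
print(items)  # ['one', 'two']
```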

Answered By: Klathzazt

In Python, ElementTidy parses tag soup and produces an element tree, which allows querying using XPath:

>>> from elementtidy.TidyHTMLTreeBuilder import TidyHTMLTreeBuilder as TB
>>> tb = TB()
>>> tb.feed("<p>Hello world")
>>> e = tb.close()
>>> e.find(".//{http://www.w3.org/1999/xhtml}p")
<Element {http://www.w3.org/1999/xhtml}p at 264eb8>
Answered By: Aaron Maenpaa

I’m surprised there isn’t a single mention of lxml. It’s blazingly fast and will work in any environment that allows CPython libraries.

Here’s how you can parse HTML via XPath using lxml.

>>> from lxml import etree
>>> doc = '<foo><bar></bar></foo>'
>>> tree = etree.HTML(doc)

>>> r = tree.xpath('//foo/bar')  # etree.HTML wraps the fragment in html/body
>>> len(r)
1
>>> r[0].tag
'bar'

>>> r = tree.xpath('.//bar')
>>> r[0].tag
'bar'
Answered By: Jagtesh Chadha

The most stable results I’ve had have been using lxml.html’s soupparser. You’ll need to install python-lxml and python-beautifulsoup, then you can do the following:

from lxml.html.soupparser import fromstring
tree = fromstring('<mal form="ed"><html/>here!')
matches = tree.xpath("//mal[@form='ed']")
Answered By: Gareth Davidson