How to parse malformed HTML in Python, using standard libraries

Question:

There are so many HTML and XML libraries built into Python that it’s hard to believe there’s no support for real-world HTML parsing.

I’ve found plenty of great third-party libraries for this task, but this question is about the Python standard library.

Requirements:

  • Use only Python standard library components (any 2.x version)
  • DOM support
  • Handle HTML entities (&nbsp;)
  • Handle partial documents (like: Hello, <i>World</i>!)

Bonus points:

  • XPATH support
  • Handle unclosed/malformed tags. (<big>does anyone here know <html ???
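For concreteness, here is a sketch of how the stdlib XML parser behaves on exactly these inputs (Python 3 shown; the same `xml.etree.ElementTree` API exists on 2.x, though `ParseError` arrived in 2.7):

```python
from xml.etree.ElementTree import fromstring, ParseError

# A partial document parses once it is wrapped in a root element...
tree = fromstring("<root>Hello, <i>World</i>!</root>")
print(tree.find("i").text)  # World

# ...but named entities and unclosed tags are fatal to a strict XML parser.
for bad in ["<root>&nbsp;</root>", "<root><big>unclosed</root>"]:
    try:
        fromstring(bad)
    except ParseError as err:
        print("rejected:", err)
```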

Here’s my 90% solution, as requested. It works for the limited set of HTML I’ve tried, but as everyone can plainly see, it isn’t exactly robust. Since it took only fifteen minutes of staring at the docs and one line of code, I thought I’d consult the Stack Overflow community for a similar but better solution…

from xml.etree.ElementTree import fromstring
# Wrap the fragment in a root element and replace the one entity
# ElementTree would reject with its numeric character reference.
DOM = fromstring("<html>%s</html>" % html.replace('&nbsp;', '&#160;'))
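That one-entity replacement can be generalized with the stdlib entity table; a minimal sketch (Python 3 names shown: `html.entities` is `htmlentitydefs` on 2.x) that still assumes well-nested tags:

```python
import re
from html.entities import name2codepoint  # htmlentitydefs.name2codepoint on 2.x
from xml.etree.ElementTree import fromstring

def parse_fragment(source):
    """Rewrite every named entity as a numeric character reference,
    then parse as XML. Unclosed tags still raise ParseError."""
    fixed = re.sub(
        r"&(\w+);",
        lambda m: "&#%d;" % name2codepoint[m.group(1)]
                  if m.group(1) in name2codepoint else m.group(0),
        source)
    return fromstring("<html>%s</html>" % fixed)

tree = parse_fragment("Hello,&nbsp;<i>World</i>!")
print(tree.find("i").text)  # World
```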
Asked By: bukzor


Answers:

This doesn’t fit your standard-library-only requirement, but BeautifulSoup is nice.

Answered By: PW.

Take the source code of BeautifulSoup and copy it into your script 😉 I’m only sort of kidding… anything you could write that would do the job would more or less be duplicating the functionality that already exists in libraries like that.

If that’s really not going to work, I have to ask, why is it so important that you only use standard library components?

Answered By: David Z

I cannot think of any popular language with a good, robust, heuristic HTML parsing library in its standard library. Python certainly does not have one, as I think you know.

Why the requirement of a stdlib module? Most of the time when I hear people make that requirement, they are being silly. For most major tasks, you will need a third party module or to spend a whole lot of work re-implementing one. Introducing a dependency is a good thing, since that’s work you didn’t have to do.

So what you want is lxml.html. Ship lxml with your code if that’s an issue, at which point it becomes functionally equivalent to writing it yourself except in difficulty, bugginess, and maintainability.

Answered By: Mike Graham

Your choices are to change your requirements or to duplicate all of the work done by the developers of third party modules.

Beautiful Soup consists of a single Python file with about 2,000 lines of code. If that is too big a dependency, go ahead and write your own; it won’t work as well, and it probably won’t be a whole lot smaller.

Answered By: mikerobi

Parsing HTML reliably is a relatively modern development (weird though that may seem). As a result there is definitely nothing in the standard library. HTMLParser may appear to be a way to handle HTML, but it’s not — it fails on lots of very common HTML, and though you can work around those failures there will always be another case you haven’t thought of (if you actually succeed at handling every failure you’ll have basically recreated BeautifulSoup).
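To make that limitation concrete, here is a minimal sketch of driving `HTMLParser` by hand (shown with the Python 3 `html.parser` module name; on 2.x the class lives in the `HTMLParser` module). It tolerates the question’s malformed input, but you get event callbacks, not a DOM:

```python
from html.parser import HTMLParser  # module is named HTMLParser on Python 2.x

class TextExtractor(HTMLParser):
    """Record start tags and text as events arrive. Nothing checks
    that tags are ever closed, and no tree is built for you."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_starttag(self, tag, attrs):
        self.parts.append("<%s>" % tag)

    def handle_data(self, data):
        self.parts.append(data)

parser = TextExtractor()
parser.feed("<big>does anyone here know <html")  # malformed input from the question
parser.close()
print("".join(parser.parts))
```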

There are really only 3 reasonable ways to parse HTML (as it is found on the web): lxml.html, BeautifulSoup, and html5lib. lxml is the fastest by far, but can be a bit tricky to install (and impossible in an environment like App Engine). html5lib is based on how HTML 5 specifies parsing; though similar in practice to the other two, it is perhaps more “correct” in how it parses broken HTML (they all parse pretty-good HTML the same). They all do a respectable job at parsing broken HTML. BeautifulSoup can be convenient though I find its API unnecessarily quirky.

Answered By: Ian Bicking

As already stated, there is currently no satisfying solution using the standard library alone. I faced the same problem as you when I tried to run one of my programs on an outdated hosting environment with only Python 2.6 and no way to install my own extensions. Solution:

Grab this file (ElementSoup.py) and the latest stable BeautifulSoup release of the 3.x series (3.2.1 as of now). From that tarball, pick only BeautifulSoup.py; it’s the only file you really need to ship with your code. With these two files on your path, all it takes to get a casual etree object from an HTML string, as you would from lxml, is this:

from StringIO import StringIO  # io.StringIO on Python 3
import ElementSoup  # effbot's wrapper around BeautifulSoup

# Returns an ElementTree-style element, much as lxml.html would
tree = ElementSoup.parse(StringIO(input_str))

lxml itself and html5lib both require you to compile some C code before they will run. It is considerably more effort to get them working, and if your environment is restricted, or your intended audience is unwilling to do that, avoid them.

Answered By: Michael