Python HTML parsing that actually works

轻奢々 2021-01-31 21:09

I'm trying to parse some HTML in Python. There were some methods that actually worked before... but nowadays there's nothing I can actually use without workarounds.

5 Answers
  • 2021-01-31 21:22

    I've used pyparsing for a number of HTML page scraping projects. It is a sort of middle ground between BeautifulSoup and the full HTML parsers on one end, and the too-low-level approach of regular expressions on the other (that way lies madness).

    With pyparsing, you can often get good HTML scraping results by identifying the specific subset of the page or data that you are trying to extract. This approach avoids the issues of trying to parse everything on the page, since some problematic HTML outside of your region of interest could throw off a comprehensive HTML parser.

    While this sounds like just a glorified regex approach, pyparsing offers builtins for working with HTML- or XML-tagged text. Pyparsing avoids many of the pitfalls that frustrate the regex-based solutions:

    • accepts whitespace without littering '\s*' all over your expression
    • handles unexpected attributes within tags
    • handles attributes in any order
    • handles upper/lower case in tags
    • handles attribute names with namespaces
    • handles attribute values in double quotes, single quotes, or no quotes
    • handles empty tags (those of the form <blah />)
    • returns parsed tag data with object-attribute access to tag attributes

    Here's a simple example from the pyparsing wiki that gets <a href=xxx> tags from a web page:

    from pyparsing import makeHTMLTags, SkipTo
    from urllib.request import urlopen
    
    # read HTML from a web page
    with urlopen("http://www.yahoo.com") as page:
        htmlText = page.read().decode("utf-8", errors="replace")
    
    # define pyparsing expression to search for within HTML
    anchorStart, anchorEnd = makeHTMLTags("a")
    anchor = anchorStart + SkipTo(anchorEnd).setResultsName("body") + anchorEnd
    
    for tokens, start, end in anchor.scanString(htmlText):
        print(tokens.body, '->', tokens.href)
    

    This will pull out the <a> tags, even if there are other portions of the page containing problematic HTML. There are other HTML examples at the pyparsing wiki:

    • http://pyparsing.wikispaces.com/file/view/makeHTMLTagExample.py
    • http://pyparsing.wikispaces.com/file/view/getNTPserversNew.py
    • http://pyparsing.wikispaces.com/file/view/htmlStripper.py
    • http://pyparsing.wikispaces.com/file/view/withAttribute.py

    Pyparsing is not a totally foolproof solution to this problem, but because it exposes the parsing process to you, you can better control which pieces of the HTML you are specifically interested in, process them, and skip the rest.
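
    When you only care about tags carrying a particular attribute value, pyparsing's withAttribute helper narrows the match further (this is the same helper the withAttribute.py link above refers to). A minimal sketch, reusing the htmlText variable from the example above; the <div type="grid"> tag and its attribute value are made up for illustration:

    from pyparsing import makeHTMLTags, withAttribute
    
    # match only <div type="grid"> start tags, skipping every other <div>
    divStart, divEnd = makeHTMLTags("div")
    divStart.setParseAction(withAttribute(type="grid"))
    
    for tokens, start, end in divStart.scanString(htmlText):
        print(tokens.type)  # every hit is guaranteed to have type="grid"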

  • 2021-01-31 21:28

    I think the problem is that most HTML is ill-formed. XHTML tried to fix that, but it never really caught on enough - especially as most browsers do "intelligent workarounds" for ill-formed code.

    Even a few years ago I tried to parse HTML for a primitive spider-type app, and found the problems too difficult. I suspect writing your own might be on the cards, although we can't be the only people with this problem!
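
    The "intelligent workarounds" browsers apply can be imitated in Python with a forgiving parser rather than writing your own. A minimal sketch, assuming BeautifulSoup 4 is installed (the broken markup here is invented for illustration):

    from bs4 import BeautifulSoup
    
    # deliberately ill-formed: unclosed <p> tags, a stray </i>, no closing </html>
    broken = "<html><body><p>one<p>two</i><b>three"
    soup = BeautifulSoup(broken, "html.parser")
    print(soup.get_text())  # the parser repairs the tree instead of rejecting it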

  • 2021-01-31 21:29

    html5lib cannot parse half of what's "out there"

    That sounds extremely implausible. html5lib uses exactly the same algorithm that's also implemented in recent versions of Firefox, Safari and Chrome. If that algorithm broke half the web, I think we would have heard. If you have particular problems with it, do file bugs.
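
    For reference, html5lib's entry point is a single parse call. A minimal sketch (the ill-formed fragment is invented; parse and its treebuilder argument are html5lib's documented API):

    import html5lib
    
    # html5lib follows the WHATWG parsing algorithm, so this fragment is
    # recovered the same way Firefox, Safari, or Chrome would recover it
    tree = html5lib.parse("<p>unclosed paragraph <b>bold", treebuilder="etree")
    print(tree)  # an xml.etree.ElementTree element rooted at <html>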

  • 2021-01-31 21:31

    If you are scraping content, an excellent way to get around irritating details is the sitescraper package. It uses machine learning to determine which content to retrieve for you.

    From the homepage:

    >>> from sitescraper import sitescraper
    >>> ss = sitescraper()
    >>> url = 'http://www.amazon.com/s/ref=nb_ss_gw?url=search-alias%3Daps&field-keywords=python&x=0&y=0'
    >>> data = ["Amazon.com: python",
    ...         ["Learning Python, 3rd Edition",
    ...          "Programming in Python 3: A Complete Introduction to the Python Language (Developer's Library)",
    ...          "Python in a Nutshell, Second Edition (In a Nutshell (O'Reilly))"]]
    >>> ss.add(url, data)
    >>> # we can add multiple example cases, but this is a simple example so 1 will do (I generally use 3)
    >>> # ss.add(url2, data2)
    >>> ss.scrape('http://www.amazon.com/s/ref=nb_ss_gw?url=search-alias%3Daps&field-keywords=linux&x=0&y=0')
    ["Amazon.com: linux", ["A Practical Guide to Linux(R) Commands, Editors, and Shell Programming",
    "Linux Pocket Guide",
    "Linux in a Nutshell (In a Nutshell (O'Reilly))",
    'Practical Guide to Ubuntu Linux (Versions 8.10 and 8.04), A (2nd Edition)',
    'Linux Bible, 2008 Edition: Boot up to Ubuntu, Fedora, KNOPPIX, Debian, openSUSE, and 11 Other Distributions']]
    
  • 2021-01-31 21:47

    Make sure that you use the html module when you parse HTML with lxml:

    >>> from lxml import html
    >>> doc = """<html>
    ... <head>
    ...   <title> Meh
    ... </head>
    ... <body>
    ... Look at this interesting use of <p>
    ... rather than using <br /> tags as line breaks <p>
    ... </body>"""
    >>> html.document_fromstring(doc)
    <Element html at ...>
    

    All the errors and exceptions will melt away, and you'll be left with an amazingly fast parser that often deals with HTML soup better than BeautifulSoup.
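
    Once parsed, the repaired document supports the usual lxml element API. A small follow-up sketch reusing the doc string above (the output comments describe what the recovery typically yields, not guaranteed literal output):

    tree = html.document_fromstring(doc)
    print(tree.findtext('.//title'))          # the unclosed <title> was closed for you
    print([el.tag for el in tree.iter('p')])  # both bare <p> tags became real elements
    print(tree.text_content())                # all text with the markup stripped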
