Parse HTML using the HTML Agility Pack

If you haven’t yet heard of or explored the HTML Agility Pack, you should. I have been using this library for quite some time to extract links and tags, and it works very well. As described on the site:

HTML Agility Pack (HAP) is an agile HTML parser that builds a read/write DOM and supports plain XPATH or XSLT (you actually don't HAVE to understand XPATH nor XSLT to use it, don't worry...). It is a .NET code library that allows you to parse "out of the web" HTML files. The parser is very tolerant with "real world" malformed HTML. The object model is very similar to what System.Xml proposes, but for HTML documents (or streams).
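To give you a feel for the API, here is a minimal sketch (assuming you have added a reference to the HtmlAgilityPack assembly; the markup is just an illustration) that loads a piece of deliberately malformed HTML and reads a node back out of the DOM:

using System;
using HtmlAgilityPack;

class Program
{
    static void Main()
    {
        // Deliberately malformed markup: unclosed <b> and <p> tags
        string html = "<html><body><p>Hello <b>world<p>Second paragraph";

        // The parser tolerates the broken markup and still builds a DOM
        var doc = new HtmlDocument();
        doc.LoadHtml(html);

        // Navigation feels much like System.Xml: XPath against DocumentNode
        HtmlNode firstParagraph = doc.DocumentNode.SelectSingleNode("//p");
        Console.WriteLine(firstParagraph.InnerText);
    }
}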

Sample applications of the HAP are:

  • Page fixing or generation. You can fix a page the way you want, modify the DOM, add nodes, copy nodes, well... you name it.
  • Web scanners. You can easily get to img/src or a/hrefs with a bunch of XPATH queries (see the sketch after this list).
  • Web scrapers. You can easily scrape any existing web page into an RSS feed for example, with just an XSLT file serving as the binding. An example of this is provided.
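As a quick illustration of the "web scanner" case, here is a rough sketch that pulls out every a/href and img/src with XPath queries. The URL is only a placeholder, and the code again assumes the HtmlAgilityPack assembly is referenced:

using System;
using HtmlAgilityPack;

class LinkScanner
{
    static void Main()
    {
        // HtmlWeb downloads and parses the page in one step
        var web = new HtmlWeb();
        HtmlDocument doc = web.Load("http://example.com/");

        // One XPath query gets every anchor that has an href attribute
        var anchors = doc.DocumentNode.SelectNodes("//a[@href]");
        if (anchors != null)   // SelectNodes returns null when nothing matches
        {
            foreach (HtmlNode a in anchors)
                Console.WriteLine(a.GetAttributeValue("href", string.Empty));
        }

        // img/src values can be collected the same way
        var images = doc.DocumentNode.SelectNodes("//img[@src]");
        if (images != null)
        {
            foreach (HtmlNode img in images)
                Console.WriteLine(img.GetAttributeValue("src", string.Empty));
        }
    }
}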

Download it here

About The Author

Suprotim Agarwal
Suprotim Agarwal, Developer Technologies MVP (Microsoft Most Valuable Professional), is the founder of and a contributor to DevCurry, DotNetCurry and SQLServerCurry. He is the Chief Editor of a developer magazine called DNC Magazine. He has also authored two books: 51 Recipes using jQuery with ASP.NET Controls and The Absolutely Awesome jQuery CookBook.

Follow him on Twitter @suprotimagarwal.

6 comments:

Anonymous said...

HAP is not good. I always use Regex where I need HAP.

Suprotim Agarwal said...

Regex is powerful, no doubt about that, but Regex is not a solution for parsing HTML.

Unknown said...

You should take a look at the Data Extracting SDK (http://extracting.codeplex.com/).

Anonymous said...

This comment has been removed by the author.

Josh said...

I tried using HAP in one of my projects, but performance slowed down dramatically; with regex it was a piece of cake...

For the curious: I am trying to crawl approx. 1 million domains to extract selected information from links found on the homepage and other pages linked from the homepage.

Suprotim Agarwal said...

Can you share the Regex that you used?