If you haven’t yet heard of or explored the HTML Agility Pack, then you must do so. I have been using this library for quite some time to extract links and tags, and it works very well. As described on the project site:
HTML Agility Pack (HAP) is an agile HTML parser that builds a read/write DOM and supports plain XPath or XSLT (you don't actually have to understand XPath or XSLT to use it, don't worry...). It is a .NET code library that allows you to parse "out of the web" HTML files. The parser is very tolerant of "real world" malformed HTML. The object model is very similar to what System.Xml proposes, but for HTML documents (or streams).
Sample applications of the HAP are:
- Page fixing or generation. You can fix a page the way you want, modify the DOM, add nodes, copy nodes, well... you name it.
- Web scanners. You can easily get to img/src or a/hrefs with a bunch of XPath queries.
- Web scrapers. You can easily scrape any existing web page into an RSS feed, for example, with just an XSLT file serving as the binding. An example of this is provided.
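As a minimal sketch of the "web scanner" use case above, here is how you might pull every link target out of a page with HAP. This assumes the HtmlAgilityPack NuGet package is referenced in your project; the URL is just a placeholder.

```csharp
using System;
using HtmlAgilityPack;

class LinkExtractor
{
    static void Main()
    {
        // HtmlWeb downloads the page and parses it into a DOM in one step.
        var web = new HtmlWeb();
        HtmlDocument doc = web.Load("https://example.com/");

        // Note: SelectNodes returns null (not an empty collection)
        // when the XPath query matches nothing, so guard against that.
        var links = doc.DocumentNode.SelectNodes("//a[@href]");
        if (links != null)
        {
            foreach (HtmlNode link in links)
                Console.WriteLine(link.GetAttributeValue("href", ""));
        }
    }
}
```

The same pattern works for images: just swap the query to `//img[@src]` and read the `src` attribute instead.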
Download it here
Will you give this article a +1? Thanks in advance!