Natural language processing solutions, like Athena, require a good supply of high-quality text.

As well as loading in ad-hoc documents, I’ve given Athena free rein to browse the Internet as required. Its two main sources of information are Wikipedia and BBC News.

Wikipedia is great for providing domain knowledge and key facts, whilst the BBC News site is an excellent source of up-to-the-minute current affairs.

Anybody who has attempted to extract plain text from real-world HTML will know that what should be a simple task can quickly snowball into a mammoth project.

There have been many debates on sites like Stack Overflow about how best to do this. Most people start their journey by using regular expressions (regex) – but this is really only viable with simple, well-formed HTML. Madness soon follows…

In the real world, HTML is not always well-formed, and in practice you will also want to ignore such things as adverts, menus and page navigation. To overcome this, you may consider creating a hybrid regex/imperative code parser. Suddenly, this is getting serious…
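To see why the regex route struggles, here is a minimal sketch of the naive tag-stripping approach (the sample HTML is made up for illustration):

```csharp
using System;
using System.Text.RegularExpressions;

class RegexStrip
{
    static void Main()
    {
        // Fine on simple, well-formed markup...
        string simple = "<p>Hello, <b>world</b>!</p>";
        Console.WriteLine(Regex.Replace(simple, "<[^>]+>", "")); // Hello, world!

        // ...but an unescaped '<' in the text is swallowed along with
        // everything up to the next '>', silently losing content.
        string messy = "<p>2 < 3 is <b>true</b></p>";
        Console.WriteLine(Regex.Replace(messy, "<[^>]+>", "")); // 2 true
    }
}
```

And that is before you even start trying to distinguish article text from navigation and adverts.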

Luckily, if you’re using C#, you already have the perfect solution in your toolbox – the WebBrowser control in Windows Forms. This control already knows how to render web pages into text and is incredibly tolerant of badly formed HTML.

Using the HtmlDocument property of the WebBrowser control, you can easily navigate the document to find exactly the clean text portions you’re looking for. And, of course, just because this control sits in the System.Windows.Forms namespace doesn’t mean you can’t use it in other types of application – just be sure to add the relevant assembly reference. One complication is that the WebBrowser control needs to run in its own STA thread, but this is easy to work around.
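For example, assuming a classic .NET Framework console project built from the command line (the file name here is hypothetical), the reference can be added via the compiler switch:

```shell
# Compile a console app against the Windows Forms assembly.
csc /target:exe /reference:System.Windows.Forms.dll Program.cs
```

In Visual Studio, the equivalent is adding System.Windows.Forms under the project’s References.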

In the simple example below, I have created a console application that lets you type a search phrase at the command line. The phrase is sent to Google, a link to the BBC News website is extracted from the results, and the relevant article is returned as clean, plain text.

Sites like BBC News are very well structured, thanks to their content management system. Therefore, by reading the CSS class name associated with HTML tags, you can easily isolate the information you require.

using System;
using System.Text;
using System.Threading;
using System.Windows.Forms;

class Program
{
    private string _plainText;

    static void Main(string[] args)
    {
        new Program();
    }

    private Program()
    {
        while (true)
        {
            Console.Write("> ");
            string phrase = Console.ReadLine();
            if (phrase.Length > 0)
            {
                // The WebBrowser control must run on an STA thread.
                Thread thread = new Thread(new ParameterizedThreadStart(GetPlainText));
                thread.SetApartmentState(ApartmentState.STA);
                thread.Start(phrase);
                thread.Join();
                Console.WriteLine();
                Console.WriteLine(_plainText);
                Console.WriteLine();
            }
        }
    }

    private void GetPlainText(object phrase)
    {
        // Search Google for the phrase, restricted to BBC News.
        string uri = "";
        WebBrowser searchBrowser = new WebBrowser();
        searchBrowser.ScriptErrorsSuppressed = true;
        searchBrowser.Url = new Uri(string.Format(
            "https://www.google.com/search?q=site:bbc.co.uk/news+{0}",
            Uri.EscapeDataString((string)phrase)));
        while (searchBrowser.ReadyState != WebBrowserReadyState.Complete) Application.DoEvents();
        // Take the first result link that points at a BBC News article.
        foreach (HtmlElement a in searchBrowser.Document.GetElementsByTagName("A"))
        {
            uri = a.GetAttribute("href");
            if (uri.StartsWith("http://www.bbc.co.uk/news")) break;
        }
        searchBrowser.Dispose();

        // Fetch the article itself.
        StringBuilder sb = new StringBuilder();
        WebBrowser webBrowser = new WebBrowser();
        webBrowser.ScriptErrorsSuppressed = true;
        webBrowser.Url = new Uri(uri);
        while (webBrowser.ReadyState != WebBrowserReadyState.Complete) Application.DoEvents();
        // Pick out the main heading.
        foreach (HtmlElement h1 in webBrowser.Document.GetElementsByTagName("H1"))
            sb.Append(h1.InnerText + ". ");
        // Select only the article text, ignoring everything else.
        // Note: HtmlElement exposes the CLASS attribute as "className".
        foreach (HtmlElement div in webBrowser.Document.GetElementsByTagName("DIV"))
            if (div.GetAttribute("className") == "story-body")
                foreach (HtmlElement p in div.GetElementsByTagName("P"))
                {
                    string className = p.GetAttribute("className");
                    if (className == "introduction" || className == "")
                        sb.Append(p.InnerText + " ");
                }
        webBrowser.Dispose();
        _plainText = sb.ToString();
    }
}

This is what the result looks like after searching for British Airways…

Happy screen scraping!