Scraping Websites with Ruby
Web scraping means extracting or harvesting data from websites using programs that fetch the data automatically, often at pre-determined intervals. It is quite similar to a search engine bot crawling a website, the only difference being that here we are looking for specific data. We may scrape websites to fetch data into specific formats, to have data automatically available when required, or perhaps to automate a specific process. For example, I once wrote a small script to scrape the Indian Railways website to check the status of my PNR and notify me via email whenever the status changed.
In this article we'll look at scraping web pages with the Ruby language, using the Mechanize gem. Mechanize makes the underlying tasks of following links, submitting forms, etc. very easy, so that you can concentrate on the logic of data extraction.
Installing Mechanize is very easy; just issue the following command in a terminal as root.
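The original command was not preserved here; assuming RubyGems is available, the standard way to install the gem is:

```shell
# Install the Mechanize gem (run as root, or prefix with sudo)
gem install mechanize
```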
Say we'd like to search distrowatch.com for all available Linux distributions and save the data. First, let's see how we can submit forms with Mechanize.
We can also click on links programmatically and set a custom user agent; the latter is useful because some websites do not allow programmatic access. Let's see how.
You can use Nokogiri to parse HTML and traverse the DOM to extract data; I have written about it in HTML Parsing in Ruby. Enjoy scraping the web.