Further thoughts on hindering screen-scraping

We previously listed some means of trying to stop screen-scraping, but since it is an ongoing topic for us, it bears revisiting.  Any site can be scraped, but some require such an investment of time and resources as to make scraping prohibitively expensive.  Some of the common methods for raising that cost are:

Turing tests

The most common implementation of the Turing Test is the old CAPTCHA, which tries to ensure that a human reads the text in an image and feeds it into a form.

We have found a large number of sites that implement a very weak CAPTCHA that takes only a few minutes to get around. On the other hand, there are some very good implementations of the Turing Test that we would opt not to deal with given the choice, though sophisticated OCR can sometimes overcome even those, and many bulletin-board spammers have clever tricks for getting past them.
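To see just how weak a weak CAPTCHA can be, consider a minimal sketch in Python that simply runs an off-the-shelf OCR engine over the image. The URL here is hypothetical, the pytesseract and Pillow libraries (and the underlying Tesseract engine) are assumed to be installed, and a real CAPTCHA would usually need more de-noising than this:

    # Sketch: a naive OCR attack on a weak CAPTCHA (the URL is made up).
    import io
    import urllib.request

    from PIL import Image
    import pytesseract

    CAPTCHA_URL = "http://www.example.com/captcha.png"  # hypothetical

    with urllib.request.urlopen(CAPTCHA_URL) as response:
        image = Image.open(io.BytesIO(response.read()))

    # Threshold to black-and-white to strip light background noise.
    image = image.convert("L").point(lambda p: 255 if p > 128 else 0)

    guess = pytesseract.image_to_string(image).strip()
    print("OCR guess:", guess)  # this is what gets fed back into the form

A good implementation defeats exactly this approach: heavily distorted glyphs and background noise push the OCR guess below any usable accuracy.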

Data as images

Sometimes you know which parts of your data are valuable. In that case it becomes reasonable to replace that text with an image. As with the Turing Test, there is OCR software that can read it, and there’s no reason we can’t save the image and have someone read it later.
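For illustration, here is a rough Python sketch of what the server side of this might look like, using the Pillow imaging library; the value, filename, and dimensions are all made up:

    # Sketch: render a sensitive value as a PNG so it never appears
    # as text in the HTML (names and sizes are illustrative).
    from PIL import Image, ImageDraw

    def text_as_image(value: str, path: str) -> None:
        img = Image.new("RGB", (120, 24), "white")
        draw = ImageDraw.Draw(img)
        draw.text((4, 4), value, fill="black")  # default bitmap font
        img.save(path, "PNG")

    text_as_image("$1,234.56", "price_1.png")
    # The page then emits: <img src="price_1.png" alt="">
    # That empty alt text is precisely the accessibility problem noted below.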

Oftentimes, however, listing data as an image without a text alternative violates the Americans with Disabilities Act (ADA), so this measure can be overcome with a couple of phone calls to the company’s legal department.

Code obfuscation

Using something like a JavaScript function to display data on the page even though it appears nowhere in the HTML source is a good trick. Other examples include scattering prolific, extraneous comments through the page, or serving a page that orders its markup unpredictably (one example I recall used CSS to make the display the same no matter how the code was arranged).
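As a toy example of the first trick, here is a Python sketch of a server-side helper that base64-encodes a value and emits JavaScript to decode it in the browser, so the value never appears literally in the HTML source (all names here are illustrative):

    # Sketch: hide a value from naive HTML scrapes by decoding it
    # client-side. A scraper must execute the script to see it.
    import base64

    def obfuscated_cell(value: str, element_id: str) -> str:
        encoded = base64.b64encode(value.encode("utf-8")).decode("ascii")
        return (
            f'<span id="{element_id}"></span>\n'
            f"<script>document.getElementById('{element_id}')"
            f".textContent = atob('{encoded}');</script>"
        )

    print(obfuscated_cell("555-0123", "phone1"))

A scraper that greps the raw HTML for the phone number finds nothing; it has to actually run the JavaScript, which raises the cost considerably.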

CSS Sprites

Recently we’ve encountered some instances where a page has one master image containing numbers and letters, and uses CSS to display only the characters desired.  This is in effect a combination of the previous two methods: first we have to get that master image and read which characters are there, then we’d need to read the site’s CSS and determine which character each tag points to.
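To make those two steps concrete, here is a Python sketch of the scrape side, assuming we have already eyeballed the master image once and recorded which character sits at each offset; the class names, offsets, and markup are all invented:

    # Sketch: decode a CSS-sprite-obscured number. OFFSET_TO_CHAR is
    # built by reading the master image by hand, one time.
    import re

    OFFSET_TO_CHAR = {"0px": "0", "-10px": "1", "-20px": "2", "-30px": "3"}

    STYLESHEET = """
    .c17 { background-position: -20px 0; }
    .c42 { background-position: -30px 0; }
    """

    HTML_FRAGMENT = '<span class="c17"></span><span class="c42"></span>'

    # Step 1: map each CSS class to its x-offset.
    class_to_offset = dict(
        re.findall(r"\.(\w+)\s*\{\s*background-position:\s*(-?\d+px)", STYLESHEET)
    )

    # Step 2: walk the spans in document order and look up each character.
    classes = re.findall(r'<span class="(\w+)"></span>', HTML_FRAGMENT)
    print("".join(OFFSET_TO_CHAR[class_to_offset[c]] for c in classes))  # "23"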

While this is very clever, I suspect this too would run afoul of the ADA, though I’ve not tested that yet.

Limit search results

Most of the data we want to get at is behind some sort of form. Some are easy, and submitting a blank form will yield all of the results. Some need an asterisk or a percent sign in the field as a wildcard. The hardest ones are those that will give you only so many results per query. Sometimes we just make a loop that submits each letter of the alphabet to the form, but if that’s too general, we must make a loop to submit every combination of two or three letters; at three letters alone, that’s 17,576 page requests.
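Here is a sketch of that brute-force loop in Python; the form URL and parameter name are made up, and a real scrape would also need to parse each page of results:

    # Sketch: submit every two- and three-letter prefix to a capped
    # search form. 26**3 alone is the 17,576 requests mentioned above.
    import itertools
    import string
    import urllib.parse
    import urllib.request

    SEARCH_URL = "http://www.example.com/search"  # hypothetical

    def search(prefix: str) -> bytes:
        data = urllib.parse.urlencode({"lastname": prefix}).encode("ascii")
        with urllib.request.urlopen(SEARCH_URL, data) as response:
            return response.read()

    for length in (2, 3):
        for combo in itertools.product(string.ascii_lowercase, repeat=length):
            page = search("".join(combo))
            # ... hand `page` off to the extraction step ...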

IP Filtering

On occasion, a diligent webmaster will notice a large number of page requests coming from a particular IP address, and block requests from it.  There are a number of methods for passing requests through alternate addresses, however, so this method isn’t generally very effective.
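A sketch of one such method in Python: rotate each request through a pool of proxies so no single address racks up a suspicious number of hits (the proxy addresses are placeholders):

    # Sketch: round-robin requests across a proxy pool.
    import itertools
    import urllib.request

    PROXIES = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]  # placeholders
    proxy_pool = itertools.cycle(PROXIES)

    def fetch(url: str) -> bytes:
        proxy = next(proxy_pool)
        opener = urllib.request.build_opener(
            urllib.request.ProxyHandler({"http": "http://" + proxy})
        )
        with opener.open(url) as response:
            return response.read()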

Site Tinkering

Scraping always keys off of certain patterns in the HTML.  Some sites have the resources to constantly tweak their HTML so that any scrape is perpetually out of date, which makes it cost-ineffective to keep updating the scrape to match the constantly changing markup.
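The reason this works is that extraction patterns are brittle. A contrived Python example:

    # Sketch: a scrape keyed to exact markup breaks the moment the
    # site renames a class (all markup here is invented).
    import re

    PATTERN = re.compile(r'<td class="price">([^<]+)</td>')

    before = '<tr><td class="price">$19.99</td></tr>'
    after = '<tr><td class="amt-3f9">$19.99</td></tr>'  # after a "tweak"

    print(PATTERN.findall(before))  # ['$19.99']
    print(PATTERN.findall(after))   # [] -- the scrape is silently out of date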

Comments

  1. I just found another way to hinder screen-scraping. Take foreclosureradar.com, for example. The content is entirely in Flash.

    I was asked to attempt to scrape this site by an individual who is friends with the owner of the site (the friend said it can’t be scraped). I started off with the usual digging, proxying the site using screen-scraper’s proxy and Charles Proxy, but I was not able to find the source of the data I was seeing in my browser. I then decided to pull out the big guns and captured the site’s traffic with Wireshark.

    In Wireshark I found what looks like serialized responses from the server. I’m not certain that the data is being transferred this way, but I am certain it is not in plain text.

    One way to find out more of what is happening in a Flash movie is to decompile it. I’ve done this in the past using showmycode.com or the Trillix Flash Decompiler, but the one SWF file I could find would not decompile.

    So, if you want to hinder screen-scraping, you can build your site in Flash and somehow encode the data transmission. Oh, and make sure your SWF file won’t decompile, too.
