Archiving (Scraping) a Site
Last week I wrote a little post and script to archive a directory. This can be really helpful if you're looking to back up some legacy code, especially a deeply nested old website, but what if the files only live online? How do you reach out to the internets and archive (scrape) a front-end?
There are a few different steps needed to accomplish this. Each stage has its own logic and PHP extension dependencies, so I found it helpful to break the script up into obvious stages. This also makes it easier for users to jump in and modify pieces if they need to, since my initial version makes a lot of assumptions.
Grabbing a Resource
Reaching out and downloading a resource from a website isn't that hard, regardless of whether the resource is an HTML page, a JPG image, or a CSS asset. I leaned on the old cURL library for this step. There are plenty of ways to manipulate the headers sent in case you need to worry about scrape blockers (like I did when I was playing with LinkedIn), and cURL has some helper functions to get the Content-Type of the response, which will be helpful for the next step.
// first, we scrape and save locally
$curl_handle = curl_init();
curl_setopt($curl_handle, CURLOPT_HEADER, false);
curl_setopt($curl_handle, CURLOPT_RETURNTRANSFER, true); // return the body instead of echoing it
// $link_array is a list of the links to scrape, start with domain
$link_array[] = $domain;
// count() is re-evaluated each pass, so the loop keeps going as parsing appends new links
for ($i = 0; $i < count($link_array); $i++) {
    curl_setopt($curl_handle, CURLOPT_URL, $link_array[$i]);
    $curl_result = curl_exec($curl_handle);
    $curl_header = curl_getinfo($curl_handle, CURLINFO_CONTENT_TYPE); // e.g. 'text/html; charset=UTF-8'
}
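The header tweaks for scrape blockers aren't shown in that snippet. If a site does block generic clients, something along these lines could be bolted onto the same handle; the exact user agent and header values here are just placeholders, not part of the original script:
// optional: dress the request up as a regular browser to get past basic scrape blockers
// (the user agent and header values below are placeholders, tune them per site)
curl_setopt($curl_handle, CURLOPT_USERAGENT, 'Mozilla/5.0 (X11; Linux x86_64)');
curl_setopt($curl_handle, CURLOPT_HTTPHEADER, array(
    'Accept: text/html,application/xhtml+xml,text/css,image/*;q=0.9,*/*;q=0.8',
    'Accept-Language: en-US,en;q=0.5'
));
curl_setopt($curl_handle, CURLOPT_FOLLOWLOCATION, true); // follow simple redirects too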
Parsing the Resource
Once you get the first page, how do you get more? Well, there are a few ways to crawl a site. You can depend on the sitemap.xml (if it exists), parsing the XML to get a list of all the HTML pages. You could pre-populate $link_array if you know the full structure of the site. Or you could parse each response and look for more links, like a crawler would.
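For the sitemap route specifically, a minimal sketch that seeds $link_array before the loop (not part of my script, and assuming the sitemap sits at the conventional path with the standard <loc> entries) could look like this:
// optional: seed $link_array from sitemap.xml instead of discovering links by crawling
// (assumes the conventional location and the standard <urlset><url><loc> layout)
$sitemap_source = @file_get_contents(rtrim($domain, '/') . '/sitemap.xml');
if ($sitemap_source !== false) {
    $sitemap_document = new DOMDocument();
    if (@$sitemap_document->loadXML($sitemap_source)) {
        foreach ($sitemap_document->getElementsByTagName('loc') as $loc_node) {
            $link_array[] = trim($loc_node->nodeValue);
        }
    }
}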
Here is where my assumptions start to leak in. I only wanted HTML, CSS, and images from the site, not JavaScript or other linked assets. For parsing the HTML I leaned on DOMDocument, a helpful PHP class that has some shortcomings when it comes to improperly formed documents. I assumed that HTML files would be linked in 'a' tags, CSS files in 'link' tags, and images in 'img' tags. Here's a snippet that uses DOMDocument to grab these links and append them to the array of links.
// only run if content-type is html
$document = new DOMDocument();
@($document->loadHTML($curl_result)); // darn you invalid html
// grab all normal 'a' links
$a_node_list = $document->getElementsByTagName('a');
foreach ($a_node_list as $a_node) {
    $href = $a_node->attributes->getNamedItem('href');
    if ($href === null) // skip anchors with no href
        continue;
    $link = get_scrapeable_link($href->nodeValue, $link_array[$i]);
    if (should_add_to_scrape_list($link, $link_array))
        $link_array[] = $link;
}
// grab css file links
$link_node_list = $document->getElementsByTagName('link');
foreach ($link_node_list as $link_node) {
    $href = $link_node->attributes->getNamedItem('href');
    if ($href === null) // skip link tags with no href
        continue;
    $link = get_scrapeable_link($href->nodeValue, $link_array[$i]);
    if (should_add_to_scrape_list($link, $link_array))
        $link_array[] = $link;
}
// grab image links
$image_node_list = $document->getElementsByTagName('img');
foreach ($image_node_list as $image_node) {
    $src = $image_node->attributes->getNamedItem('src');
    if ($src === null) // skip images with no src
        continue;
    $link = get_scrapeable_link($src->nodeValue, $link_array[$i]);
    if (should_add_to_scrape_list($link, $link_array))
        $link_array[] = $link;
}
DOMDocument has a few handy methods to lean on, allowing me to target specific nodes and their attributes without complicated regular expressions. I did abstract out a few pieces of logic: the cleanup of the URLs (stripping query strings and anchors, plus figuring out relative link logic) and the check that we actually want to crawl the link (only internal links get scraped).
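Those two helpers aren't shown in the snippets above. Roughly, they boil down to something like this simplified sketch; the real versions in the script handle a few more edge cases (protocol-relative links, trailing-slash quirks, and so on):
// rough sketch of the abstracted helpers (simplified, the full script handles more edge cases)
function get_scrapeable_link($link, $current_url) {
    // strip query strings and anchors
    $link = preg_replace('/[?#].*$/', '', $link);
    // absolute links pass through untouched
    if (strpos($link, 'http://') === 0 || strpos($link, 'https://') === 0)
        return $link;
    // root-relative links get the scheme and host of the current page
    if (substr($link, 0, 1) == '/')
        return preg_replace('/^(https?:\/\/[^\/]+).*$/', '$1', $current_url) . $link;
    // everything else is relative to the current page's directory
    return preg_replace('/[^\/]*$/', '', $current_url) . $link;
}

function should_add_to_scrape_list($link, $link_array) {
    global $domain;
    // only queue internal links we haven't already queued
    return $link != '' && strpos($link, $domain) === 0 && !in_array($link, $link_array);
}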
This is great, but what about image links in the CSS? DOMDocument doesn't do CSS. For this I used a regular expression. There are CSS parsers out there that other people have built, but they seemed too hefty for such a simple task.
// only run if content-type is css
// match url(...) references, quoted or not, to catch background images and the like
preg_match_all('/url\([\'"]?(.+?)[\'"]?\)/i', $curl_result, $matches);
foreach ($matches[1] as $link) {
    $link = get_scrapeable_link($link, $link_array[$i]);
    if (should_add_to_scrape_list($link, $link_array))
        $link_array[] = $link;
}
So, assuming the Content-Type header is accurate, I now had a loop that would go through an HTML page, grab all the links to other pages, images, and stylesheets, and then continue to loop through those and parse them until everything (everything linked, anyway) was grabbed from a single domain. I could have saved it all directly into the archive object. Instead, I saved it to a temp directory. Between juggling cURL requests and an archive object I was concerned about memory usage, and saving each resource locally (after parsing out the wanted links) seemed like a good way to shelf it until I was ready to archive everything.
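For reference, the branching on that Content-Type value is nothing fancy; roughly (simplified, with no error handling):
// inside the main loop: pick a parser based on the reported Content-Type
if (strpos($curl_header, 'text/html') !== false) {
    // run the DOMDocument link-gathering from above
} elseif (strpos($curl_header, 'text/css') !== false) {
    // run the url() regular expression from above
}
// images (and anything else that slips through) just get saved as-is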
// original loop
for ($i = 0; $i < count($link_array); $i++) {
    // curl execution step here
    // content-type detection and parsing here
    // now, figure out what to name the file locally and save
    $local_path = $link_array[$i];
    if (substr($link_array[$i], -1) == '/')
        $local_path .= 'index.html';
    $local_path = str_replace($domain, '', $local_path);
    $local_path_list = explode('/', $local_path);
    $local_file = array_pop($local_path_list);
    $path = $directory_path;
    foreach ($local_path_list as $local_path_piece) {
        $path .= $local_path_piece . DIRECTORY_SEPARATOR;
        if (!is_dir($path))
            mkdir($path);
    }
    $file_handle = fopen($path . $local_file, 'w');
    fwrite($file_handle, $curl_result);
    fclose($file_handle);
}
There are two main pieces to keep in mind here. First, many websites simplify their URLs by not referencing an exact file (like my own sites). So, in those cases, I saved the response as an 'index.html' within the URL structure. And that's the second piece - I wanted to maintain the directory structure, which meant mapping the URL structure onto a directory structure.
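Concretely, with a hypothetical example.com as the domain, the mapping works out to something like:
// examples of the URL-to-path mapping above (example.com is hypothetical)
// http://example.com/              -> <temp dir>/index.html
// http://example.com/blog/         -> <temp dir>/blog/index.html
// http://example.com/css/site.css  -> <temp dir>/css/site.css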
Archiving
Things got easy from this point. Using a script similar to my archiver, all I had to do was loop through the temp directory, add each file to the archive object, and then save. Since the files were saved in a temp directory, I also needed to delete that directory afterwards. One catch - you need to save the archive before deleting the files it references, or else the archive won't capture them. So, again, take the task one small step at a time.
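If you haven't read that post, here's a rough sketch of the shape of this stage, using ZipArchive and a stand-in $archive_path variable (the real script reuses the archiver logic from last week, which may differ in the details):
// sketch of the final stage: archive the temp directory, then clean it up
// ($archive_path is a stand-in for wherever the archive should be written)
$zip = new ZipArchive();
$zip->open($archive_path, ZipArchive::CREATE | ZipArchive::OVERWRITE);
$files = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator($directory_path, FilesystemIterator::SKIP_DOTS)
);
foreach ($files as $file_info) {
    $local_name = ltrim(substr($file_info->getPathname(), strlen($directory_path)), DIRECTORY_SEPARATOR);
    $zip->addFile($file_info->getPathname(), $local_name);
}
$zip->close(); // files are only read when the archive is closed...
// ...which is exactly why the temp files can't be deleted until after this point
$leftovers = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator($directory_path, FilesystemIterator::SKIP_DOTS),
    RecursiveIteratorIterator::CHILD_FIRST
);
foreach ($leftovers as $leftover) {
    $leftover->isDir() ? rmdir($leftover->getPathname()) : unlink($leftover->getPathname());
}
rmdir($directory_path);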
And that's it! The final script can be found on my github account (scraper). It's still very beta, since it's hard to predict how web pages are structured. I did test it on a handful of sites and it seems to handle basic structures well, and tweaking the script to handle edge cases shouldn't be too tough.