I’ve done extensive work with link validation on websites, using a mix of Ruby with Anemone (a spidering library) and Watir (a web automation library).
In this post I’ll cover a similar approach on the Python side, using Python and BeautifulSoup. What’s nice about this pairing is that the spidering half (urllib) ships with Python’s standard library; the only extras to install are BeautifulSoup itself and the lxml parser it uses below (pip install beautifulsoup4 lxml).
I won’t claim that my code is clean or even very good, but in a few hours I put together a Python script that simply scans all the “a” tags on a home page, then iterates over each link found there to collect the links on every sub page. It gives an idea of what can be accomplished and what other interesting variations could be created.
Spidering Script in Python

from bs4 import BeautifulSoup
import urllib
import re

home_page_urls = []  # List for the homepage links
all_links = []       # List where I'll drop all links later

def scan(base_url, location=0):
    html = urllib.urlopen(base_url)
    bt = BeautifulSoup(html.read(), 'lxml')
    links = bt.find_all('a')  # BeautifulSoup grabs all a tags on page
    for link in links:  # Iterating over each a tag
        a = link['href']  # Grabbing the href value for each a tag
        if re.match('^/[a-z1-9]', a):  # Regex matching relative links (i.e. /test.html)
            if location == 0:
                home_page_urls.append(base_url + a)
                all_links.append(base_url + a)
            elif location == 1:
                all_links.append(base_url + a)
        elif re.match('(http://\S+)', a):  # Regex matching full URLs
            if location == 0:
                home_page_urls.append(a)
                all_links.append(a)
            elif location == 1:
                all_links.append(a)
    print "[*] Total links captured so far: " + str(len(all_links))

def sub_page_scan(base_url):  # First get links on start page
    print "[*] Starting Test on " + base_url
    scan(base_url)
    for url in home_page_urls:
        scan(url, 1)
    all_link_no_dups = list(set(all_links))
    for link in all_link_no_dups:
        print link + '\n'

sub_page_scan('http://somesite')

Python Spider Script Breakdown

I built out two methods in Python. The first is the engine that finds all the links on a given URL/page. Since a site has different layers (home page, sub page, sub-sub page, and so on), I chose to only care about two layers (home page and sub page). I used a location parameter, which defaults to 0. A location of 0 I treat as the homepage… any other value is a sub page.
Therefore, when the script is called with a URL, the page sent in as the parameter is treated as the home page by default, and its links land in home_page_urls. That list is iterated over later to collect all the links on each sub page.
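Spelled out, the two passes that sub_page_scan drives look like this (a usage sketch of the scan method and lists defined above; http://somesite is just a placeholder):

scan('http://somesite')        # location defaults to 0: the homepage pass, fills home_page_urls and all_links
for url in home_page_urls:     # every link collected from the homepage
    scan(url, 1)               # location 1: the sub page pass, appends to all_links only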
I’m able to grab the links on each page by using BeautifulSoup to find all the “a” tags:
bt = BeautifulSoup(html.read(), 'lxml')
links = bt.find_all('a')
This will be a list of each entire a tag, like <a href="/blah.html">blah</a>. Since I only want to collect the juicy URL, I iterate over this collection with a for loop; for each item in the loop I pull out the ['href'] component, which gives me the URL for each “a” tag that BeautifulSoup found.

Relative vs. Full URLs

In the case of a site with relative paths that start with a “/”, I added some regex to grab those URLs and prepend the base URL to them. If, however, the link pulled out of the a tag is a full URL (starting with http), I append it as-is, with no base URL prepended.
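If you’d rather not maintain the regexes, the standard library’s urljoin (urlparse.urljoin in Python 2, which is what the script above targets) handles both cases: relative paths get the base URL joined on, and full URLs pass through untouched. Here’s a minimal sketch of an alternative extraction loop along those lines; collect_links is a hypothetical helper, not part of the script above, and it uses link.get('href') so an “a” tag with no href doesn’t raise a KeyError:

from urlparse import urljoin  # urllib.parse in Python 3

def collect_links(base_url, soup):
    found = []
    for link in soup.find_all('a'):
        href = link.get('href')  # Returns None instead of raising when the a tag has no href
        if not href:
            continue
        found.append(urljoin(base_url, href))  # '/blah.html' gets the base URL joined on; full URLs are unchanged
    return found

And since link validation is the point of all this, one of the “interesting variations” mentioned earlier could be to request every collected URL and flag anything that doesn’t come back as a 200. Again just a sketch in the same Python 2 / urllib style as the script above (check_links is a hypothetical helper):

import urllib  # Already imported in the script above

def check_links(urls):
    for url in sorted(set(urls)):  # De-dup before checking
        try:
            code = urllib.urlopen(url).getcode()  # HTTP status of the response
        except IOError:
            print "[!] Could not reach " + url
            continue
        if code != 200:
            print "[!] " + str(code) + " returned for " + url

check_links(all_links)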