So you want the site's whole database queried, one page at a time. This may take a while, and because the site caps each query at 50 results, it has to be done through successive refining steps (see the note below).
I've done this kind of page scraping before, using Python + BeautifulSoup. One or two working days should be enough to finish it. I will deliver:
* the requested tool, a Python script
* its complete, commented source code
* a setup program
* and of course, full support for setting it up and using it
Note: The site limits every query to 50 results, so the query terms must be refined until that limit is no longer hit. For example: 'ab' returns more than 50 records, so try 'aba' (still more than 50), refine to 'abaa' (no results), 'abab' (2 results), 'abac', and so on. Even so, completeness cannot be guaranteed, at least in theory: if more than 50 records match exactly 'Smith', only the first 50 will be retrieved, and the query cannot be refined any further.
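To make the refining strategy concrete, here is a minimal sketch of the logic. The `query` function here is a stand-in assumption: it simulates the site's 50-result cap against an in-memory list, whereas the real script would fetch and parse the results page with requests + BeautifulSoup. The recursive `harvest` function is the refinement loop described above.

```python
import string

LIMIT = 50  # the site returns at most 50 records per query


def make_query(records):
    """Build a simulated site search over `records` (an assumption --
    the real version would hit the site and parse HTML)."""
    def query(prefix):
        hits = [r for r in records if r.startswith(prefix)]
        # return at most LIMIT results, plus a flag: was the list cut off?
        return hits[:LIMIT], len(hits) > LIMIT
    return query


def harvest(query, prefix=""):
    """Collect every record reachable by refining `prefix` one letter
    at a time, stopping a branch once it fits under the limit."""
    results, truncated = query(prefix)
    if not truncated:
        return set(results)
    collected = set(results)  # keep the first 50 even if truncated
    for ch in string.ascii_lowercase:
        collected |= harvest(query, prefix + ch)
    return collected
```

Run against a dataset where every branch can eventually be refined under the limit, `harvest` recovers everything; run against 60 records that all share an unrefinable prefix, it keeps only the first 50, which is exactly the 'Smith' caveat from the note.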