Hello,
I have extensive experience scraping data from the web and extracting clean data from many different sources.
In more than 5 years of work in this field, I have scraped more than 1,500 different kinds of websites (plain HTML, JavaScript-heavy, protected sites with request limits, sites behind a login...) and saved the data in various formats and databases (.csv, .xls, .json, MySQL, MongoDB...). One website had ~130 million records (plain HTML, completed in ~10 days). I use rotating proxies, so every request goes out from a different IP address.
The tool I use for web scraping is my own Python code (Python 2 or Python 3). Libraries: mostly Selenium, and Requests with BeautifulSoup, but I can also work with other Python libraries such as urllib, urllib2, mechanize, dryscrape... My system runs as fast as Scrapy, which is why I don't use Scrapy.
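To give an idea of the approach, here is a minimal sketch of scraping with Requests and BeautifulSoup through a rotating proxy pool; the proxy addresses and the `<h2>`-title extraction are placeholder examples, not part of any specific project.

```python
import random

import requests
from bs4 import BeautifulSoup

# Placeholder proxy pool -- in a real job these would be live proxy servers.
PROXIES = [
    "http://10.0.0.1:8080",
    "http://10.0.0.2:8080",
    "http://10.0.0.3:8080",
]

def pick_proxy(pool):
    """Pick a proxy at random so each request can use a different IP."""
    addr = random.choice(pool)
    return {"http": addr, "https": addr}

def extract_titles(html):
    """Parse a page and return the text of every <h2> heading."""
    soup = BeautifulSoup(html, "html.parser")
    return [h2.get_text(strip=True) for h2 in soup.find_all("h2")]

def fetch_titles(url, pool):
    """Fetch a page through a rotating proxy and extract its <h2> titles."""
    resp = requests.get(url, proxies=pick_proxy(pool), timeout=10)
    resp.raise_for_status()
    return extract_titles(resp.text)
```

Keeping the parsing (`extract_titles`) separate from the fetching makes it easy to test the extraction logic offline and to swap the transport layer (e.g. Selenium for JavaScript-heavy sites) without touching the parser.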
I am fast, efficient and responsible, and I can find a solution to most problems online, so learning new things is not a problem for me. If you are interested in working together, contact me anytime and we can make a deal.
Thank you for your time.
Igor