Can I make the spider stop and start on a dime?
I have a specific 36-hour window each week during which I'm allowed to spider a remote 300k+ page catalog site. I had been using wget in recursive mode, but I have no good way to stop it and then restart where it left off the next week. I also have my own format I'd like the data stored in, in a MySQL table.
Can phpdig be bent to meet my needs, or am I better off writing my own using curl and a very large database of URLs to crawl, driven by a bash/perl/PHP script front end?
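For the "roll your own" route, the key is keeping the crawl frontier in the database and committing progress per URL, so the crawler can be killed and restarted at any point. Below is a minimal sketch of that idea. It is not phpdig's schema or API: the table and column names are illustrative, SQLite stands in for MySQL, and the `fetch` callback stands in for shelling out to curl.

```python
#!/usr/bin/env python3
"""Sketch of a resumable crawl queue: SQLite stands in for MySQL,
and fetch() stands in for curl. All names here are illustrative."""
import sqlite3
import time


def open_queue(path=":memory:"):
    """Open (or create) the URL queue table."""
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS urls (
        url        TEXT PRIMARY KEY,
        state      TEXT NOT NULL DEFAULT 'pending',  -- 'pending' or 'done'
        fetched_at REAL)""")
    return db


def enqueue(db, urls):
    """Add URLs to the frontier; duplicates are silently ignored."""
    db.executemany("INSERT OR IGNORE INTO urls (url) VALUES (?)",
                   [(u,) for u in urls])
    db.commit()


def crawl(db, fetch, deadline):
    """Fetch pending URLs until the deadline passes or the queue empties.
    Progress is committed after every URL, so stopping the process at any
    moment loses at most one in-flight fetch; restarting just picks up
    the remaining 'pending' rows."""
    while time.time() < deadline:
        row = db.execute(
            "SELECT url FROM urls WHERE state='pending' ORDER BY url LIMIT 1"
        ).fetchone()
        if row is None:
            break  # nothing left to crawl this week
        url = row[0]
        fetch(url)  # real version: run curl, parse links, enqueue() them
        db.execute("UPDATE urls SET state='done', fetched_at=? WHERE url=?",
                   (time.time(), url))
        db.commit()
```

With an on-disk database file, running `crawl()` under a cron job that starts at the window's open and passes the window's close as `deadline` gives you the stop/restart-on-a-dime behavior, and you control the table layout yourself.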