Exclude filenames with certain attributes?
All installed and spidering nicely. However, the directory in which the content resides contains a mail archive of 8,000 messages, each of which has its own .html file. It also contains an index for every 15 messages, and each index again has its own .html file.
If the files are in the form:

http://www.mydomain.com/message001.html
http://www.mydomain.com/message002.html
http://www.mydomain.com/message003.html

and so on, plus:

http://www.mydomain.com/index001.html
http://www.mydomain.com/index002.html

is it possible to restrict the spider so that any file whose name contains the characters "index" is ignored, or alternatively so that it only searches files whose names start with the characters "message"? I had hoped that robots.txt would be the answer, but you cannot use wildcards to specify exclusions. Typing in the path of every file beginning with "index" is not an option (there are over 150 of them), and in any case the content is updated every day.... Any help MUCH appreciated
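(As an aside: robots.txt exclusions match by URL-path prefix rather than by wildcard, so a single rule would cover every file whose name begins with "index". A minimal sketch, assuming the files live in the document root and the spider honors robots.txt at all:)

Code:
User-agent: *
Disallow: /index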
Thanks for your help, which is much appreciated, but I am still failing. Here is what I did:
I commented out the old FORBIDDEN_EXTENSIONS line and replaced it with PHP Code:
Then, in the robot_functions.php file, I changed the 3 instances of $regs[5] around the !eregi(BANNED,$regs[5]) piece to $regs[2]. However, the spider still continues to index files named index123.html. Where am I going wrong?
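For reference, the change described would look something like this. A minimal sketch only, assuming phpDig's stock robot_functions.php, where $regs[5] captures a link's extension and $regs[2] its filename, and assuming the constant tested is the FORBIDDEN_EXTENSIONS define from config.php:

PHP Code:
// robot_functions.php: hypothetical shape of one of the three filter checks,
// switched from testing the extension ($regs[5]) to testing the filename ($regs[2])
if (!eregi(FORBIDDEN_EXTENSIONS, $regs[2])) {
    // no match against the forbidden pattern, so the link is kept for spidering
}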
like this
PHP Code:
Remove cgi|php|asp|pl| if you want to index links ending with those extensions. Then you need to: go to Admin > Update, delete the links you don't want, run the cleans, edit FORBIDDEN_EXTENSIONS, and index.
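A hypothetical version of the define that code box would hold, assuming the pattern is now matched against the filename in $regs[2] (per the change above) and that the usual extension bans are kept:

PHP Code:
// config.php: skip any filename starting with "index", plus any link ending in
// one of the listed extensions; drop cgi|php|asp|pl if you want those indexed
define('FORBIDDEN_EXTENSIONS', '(^index|\.(cgi|php|asp|pl)$)');

With that in place, message001.html passes the filter while index001.html and index002.html are skipped.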
Great! That works! :banana:
Thanks for your help.