File name: baidu
Category: (none)
Tags: (none)
Upload date: 2012-11-16
File size: 19.09 KB
Downloads: 0
Provider: (none)
Related links: none
Download note: do not download with Thunder (Xunlei); if the download fails, just retry, retries do not cost extra points.
Description: the download content comes from the Internet; for usage questions, please search Baidu yourself.
Function: records the crawl tracks of search-engine spiders. Visits within the past week are logged in full, ordered by crawl time; for records older than a week only the timestamp is kept. (There is no built-in function for deleting records; if you need to delete any, remove them directly in the database.)
main.asp shows at a glance which page has been crawled the most, and robots.asp shows the spiders' crawl tracks over the past week.
Usage: just insert this code into the ASP pages of your site! The whole spider-crawl analyzer is only four files, so it is very compact!
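The snippet the page refers to is not reproduced in this listing, so here is a minimal sketch of what such a per-page logging include could look like. This is an assumption, not the package's actual code: the real logic presumably lives in robots_conn.asp, and the table name (visits) and column names (ua, url, visit_time) are invented for illustration; only the bundled database file #robots.mdb comes from the file list below.

<%
' Hypothetical per-page spider logger (a sketch, not the package's code).
Dim ua, conn
ua = LCase(Request.ServerVariables("HTTP_USER_AGENT"))

' Log only known search-engine spiders, e.g. Baiduspider or Googlebot.
If InStr(ua, "baiduspider") > 0 Or InStr(ua, "googlebot") > 0 Then
    Set conn = Server.CreateObject("ADODB.Connection")
    conn.Open "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" & _
              Server.MapPath("#robots.mdb")
    ' Record which spider hit which URL and when; escape single quotes.
    conn.Execute "INSERT INTO visits (ua, url, visit_time) VALUES ('" & _
                 Replace(ua, "'", "''") & "', '" & _
                 Replace(Request.ServerVariables("URL"), "'", "''") & "', Now())"
    conn.Close
    Set conn = Nothing
End If
%>

In practice each page would probably pull this in with a single server-side include, for example <!--#include file="robots_conn.asp"-->, so the logging logic lives in one place.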
Download file list (auto-generated by the system; you can review the contents before downloading):
robots_conn.asp
使用方法.txt (usage instructions)
#robots.mdb
index.asp
kyuanma.com.txt
说明.htm (readme)
main.asp