File name: SubjectSpider_ByKelvenJU
Category:
Tags:
Upload date: 2008-10-13
File size: 1.82 MB
Downloads: 1
Uploader:
Related links: none
Description:
1. Crawling is locked to a given topic (topic-focused crawling);
2. A plain-text log file is produced, one entry per fetch, in the format: timestamp, URL;
3. At most 2 connections may be opened when fetching any one URL (note: the number of local threads doing page parsing is not limited);
4. Polite-spider rules are followed: the robots.txt file and meta tags must be checked for restrictions, and a thread must sleep for 2 seconds after it finishes fetching a page;
5. HTML pages can be parsed and their link URLs extracted; extracted URLs are checked against the set of already-processed URLs so that pages that have already been crawled are not parsed again;
6. Basic spider/crawler parameters can be configured, including crawl depth and seed URLs;
7. The crawler identifies itself to servers through its User-agent;
8. Crawl statistics are produced, including crawl speed, total time to completion, and total number of pages crawled; important variables and all classes and methods are commented;
9. Coding conventions are followed, e.g. naming conventions for classes, methods, and files;
10. Optional: a GUI or web interface for managing the spider/crawler, including start/stop and adding/removing URLs.
(A minimal sketch of such a crawl loop follows this list.)
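The archive's actual implementation is in Spider.java, CheckLinks.java, HTMLParse.java and ISpiderReportable.java (with segmenter.java and tf.java on the topic-relevance side). For orientation only, below is a minimal, self-contained Java sketch of how rules 2 and 4 through 8 could fit together: a depth-limited, de-duplicated crawl loop with a naive robots.txt check, a custom User-agent, a 2-second pause per page, a timestamp+URL log line, and end-of-run statistics. The class name MiniSpider, the agent string and the regex-based link extraction are illustrative assumptions, not the uploaded code; the topic filter (rule 1) and the 2-connections-per-URL limit (rule 3) are omitted.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Minimal polite-crawler sketch: seed URL + depth limit, HashSet de-duplication,
 * a custom User-agent, a naive robots.txt check, a 2-second pause per page,
 * and a "timestamp <tab> URL" log line per fetch. Not the archive's Spider.java.
 */
public class MiniSpider {
    private static final String USER_AGENT = "SubjectSpiderDemo/0.1";  // hypothetical agent string
    private static final Pattern HREF =
            Pattern.compile("href\\s*=\\s*[\"']([^\"'#]+)", Pattern.CASE_INSENSITIVE);

    private final Set<String> visited = new HashSet<>();  // URLs already crawled (rule 5)
    private final int maxDepth;                            // crawl-depth parameter (rule 6)

    public MiniSpider(int maxDepth) { this.maxDepth = maxDepth; }

    /** Breadth-first crawl from a seed URL, honouring the depth limit. */
    public void crawl(String seed) throws Exception {
        Deque<String[]> queue = new ArrayDeque<>();
        queue.add(new String[]{seed, "0"});
        long start = System.currentTimeMillis();
        int pages = 0;

        while (!queue.isEmpty()) {
            String[] entry = queue.poll();
            String url = entry[0];
            int depth = Integer.parseInt(entry[1]);
            if (depth > maxDepth || !visited.add(url) || !allowedByRobots(url)) continue;

            String html = fetch(url);                                        // sends the User-agent (rule 7)
            System.out.println(System.currentTimeMillis() + "\t" + url);     // log line: timestamp, URL (rule 2)
            pages++;

            Matcher m = HREF.matcher(html);
            while (m.find()) {
                String link = m.group(1);
                if (link.startsWith("http")) {                               // relative links dropped in this sketch
                    queue.add(new String[]{link, String.valueOf(depth + 1)});
                }
            }
            Thread.sleep(2000);                                              // polite 2-second pause per page (rule 4)
        }
        long secs = Math.max(1, (System.currentTimeMillis() - start) / 1000);
        System.out.printf("pages=%d  time=%ds  speed=%.2f pages/s%n",
                pages, secs, (double) pages / secs);                         // crawl statistics (rule 8)
    }

    /** Very rough robots.txt check: refuse the URL if any Disallow prefix matches its path. */
    private boolean allowedByRobots(String url) {
        try {
            URL u = new URL(url);
            String robots = fetch(u.getProtocol() + "://" + u.getHost() + "/robots.txt");
            for (String line : robots.split("\n")) {
                line = line.trim();
                if (line.toLowerCase().startsWith("disallow:")) {
                    String path = line.substring(9).trim();
                    if (!path.isEmpty() && u.getPath().startsWith(path)) return false;
                }
            }
        } catch (Exception ignored) { /* no robots.txt reachable: assume allowed */ }
        return true;
    }

    /** Fetch a URL as text, sending the crawler's User-agent header. */
    private String fetch(String url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestProperty("User-Agent", USER_AGENT);
        StringBuilder sb = new StringBuilder();
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) sb.append(line).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // Seed URL taken from the archive's sample log file names; depth 2 is an arbitrary demo value.
        new MiniSpider(2).crawl("http://www.scut.edu.cn/");
    }
}

Swap in any seed URL and depth in main() to try it; the real project additionally filters pages by topic relevance and caps connections per URL.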
File list
tradlexu8.txt
data/sforeign_u8.txt
data/snotname_u8.txt
data/snumbers_u8.txt
data/ssurname_u8.txt
data/tforeign_u8.txt
data/tnotname_u8.txt
data/tnumbers_u8.txt
data/tsurname_u8.txt
CheckLinks.java
HTMLParse.java
ISpiderReportable.java
segmenter.java
Spider.java
bothlexu8.txt
simplexu8.txt
data
tf.java
日志文件[www.scut.edu.cn].rar
日志文件[money.163.com].rar
祝庆荣-主题网络蜘蛛程序设计及JAVA实现.doc
www.dssz.com.txt