Resource List
hyperestraier-1.4.13.tar
- A full-text search engine written by a Japanese developer. The author is quite prolific and has written many projects; this is one of his better-known ones.
Luz.Net
- Search engine components for building large-scale search engine systems. Recommended for anyone developing a search engine.
tse.031114-1404.Linux.tar
- Source code of the TSE search engine, accompanying the corresponding book on search engines.
wget
- A tool for batch-downloading forum source pages; search the web for usage instructions.
UseHLSSplit(Fix)
- Chinese word segmentation: a Delphi wrapper for the 海量 (HyLanda) intelligent segmentation library, fixing the errors in another version found online.
1
- 1. Builds an adjacency-list representation of an undirected graph. 2. Performs depth-first search on the graph, outputting the visited vertices in order.
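The two steps described above can be sketched as follows (a minimal Python illustration, not the code in this package; the function names and sample edges are made up for the example):

```python
from collections import defaultdict

def build_adjacency_list(edges):
    """Step 1: store an undirected graph as an adjacency list.
    Each edge (u, v) is recorded in both directions."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    return adj

def dfs(adj, start):
    """Step 2: depth-first search; returns vertices in visit order."""
    visited, order = set(), []
    def visit(u):
        visited.add(u)
        order.append(u)
        for w in adj[u]:
            if w not in visited:
                visit(w)
    visit(start)
    return order

edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
adj = build_adjacency_list(edges)
print(dfs(adj, 0))  # -> [0, 1, 3, 2]
```

The visit order depends on the order edges were inserted, since neighbors are explored in list order.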
chinafenci
- Chinese word segmentation: reads a txt document and then classifies the words.
search
- A small search applet written in VB that queries BAIDU, GOOGLE, and other search engines.
SogouT.mini.tar
- The Baidu search engine offers fast response, accurate and comprehensive results, good timeliness, few dead links, and a good fit for the characteristics of the Chinese language and the habits of Chinese users. 1. … This method only requires counting character-pair (bigram) frequencies in the corpus and needs no segmentation dictionary, so it is also called dictionary-free segmentation or the statistical word-extraction method. However, this method also has certain …
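The dictionary-free statistical approach mentioned above can be sketched roughly as follows (a minimal Python illustration of bigram-frequency segmentation, not code from this package; the greedy cut rule, the threshold value, and the helper names are all assumptions made for the example):

```python
from collections import Counter

def bigram_counts(corpus):
    """Count frequencies of adjacent character pairs (bigrams)
    across all sentences in a corpus."""
    counts = Counter()
    for sentence in corpus:
        for i in range(len(sentence) - 1):
            counts[sentence[i:i + 2]] += 1
    return counts

def segment(sentence, counts, threshold=2):
    """Greedy dictionary-free segmentation: keep two adjacent
    characters in the same word when their bigram count reaches
    the threshold, otherwise cut between them."""
    if not sentence:
        return []
    words, current = [], sentence[0]
    for i in range(1, len(sentence)):
        if counts[sentence[i - 1:i + 1]] >= threshold:
            current += sentence[i]
        else:
            words.append(current)
            current = sentence[i]
    words.append(current)
    return words

corpus = ["机器学习很有趣", "机器学习改变世界"]
counts = bigram_counts(corpus)
print(segment("机器学习有趣", counts))  # -> ['机器学习', '有', '趣']
```

Real statistical segmenters use mutual information or t-scores rather than raw counts, but the core idea is the same: word boundaries are inferred from corpus statistics alone, with no dictionary.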
111
- Chinese word segmentation lexicon and segmentation dictionary.
building_search_applications
- By comparing several well-known open-source search engines, this book takes an in-depth look at some of the core techniques involved in developing a search engine.
solr
- An introduction to Solr, covering its configuration, startup, indexing, and querying features.