Search resource list
Wordsegmentation2
- Word segmentation implemented with NLP techniques: a segmentation dictionary is generated automatically from corpus statistics, the training set is segmented, and all possible segmentations are listed together with the probability of each. Users should add their own corpus and test set.
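The approach this entry describes (enumerate every dictionary segmentation of a string and score each one by unigram probabilities) can be sketched in a few lines. The dictionary entries and probability values below are made-up illustrations, not the package's actual data:

```python
# Toy unigram dictionary with probabilities (hypothetical values for illustration).
UNIGRAM_P = {"研究": 0.4, "生命": 0.3, "研究生": 0.2, "命": 0.05, "起源": 0.05}

def segmentations(text):
    """List every way to split `text` into dictionary words, scoring each
    split by the product of the unigram probabilities of its words."""
    if not text:
        return [([], 1.0)]
    results = []
    for i in range(1, len(text) + 1):
        word = text[:i]
        if word in UNIGRAM_P:
            # Recurse on the remainder and prepend the current word.
            for rest, p in segmentations(text[i:]):
                results.append(([word] + rest, UNIGRAM_P[word] * p))
    return results
```

For "研究生命起源" this yields two candidate splits; taking the one with the highest probability gives 研究/生命/起源.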
tagging
- NLP: corpus tagging implemented with a hidden Markov model, with the results evaluated on a test set.
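Hidden-Markov-model tagging of the kind this entry implements picks the most probable tag sequence with the Viterbi algorithm. A minimal sketch; the two-tag model and all probabilities below are invented for illustration, not trained values:

```python
# Toy HMM: two tags with hand-picked probabilities (illustrative only).
STATES = ["N", "V"]
START_P = {"N": 0.6, "V": 0.4}
TRANS_P = {"N": {"N": 0.3, "V": 0.7}, "V": {"N": 0.6, "V": 0.4}}
EMIT_P = {"N": {"dog": 0.8, "barks": 0.1}, "V": {"dog": 0.1, "barks": 0.7}}

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return (probability, tag sequence) of the best path through the HMM."""
    # Each layer maps a tag to (best probability so far, best path ending there).
    layer = {s: (start_p[s] * emit_p[s].get(obs[0], 0.0), [s]) for s in states}
    for o in obs[1:]:
        nxt = {}
        for s in states:
            # Extend the best predecessor path into tag s.
            prob, path = max(
                (layer[prev][0] * trans_p[prev][s] * emit_p[s].get(o, 0.0),
                 layer[prev][1])
                for prev in states)
            nxt[s] = (prob, path + [s])
        layer = nxt
    return max(layer.values())
```

With this toy model, viterbi(["dog", "barks"], ...) tags the sentence N V.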
NLP
- Enter the string to test in the text field and click the segmentation button to see the result of basic word segmentation.
Evaluation of automatic word segmentation and POS tagging helps in learning NLP
- Evaluating automatic word segmentation and POS tagging helps in learning NLP and in understanding it more deeply. A good introduction to POS tagging.
nlp
- Chinese natural language processing programs, including Chinese character-frequency statistics and a Jensen-Shannon divergence calculator, with examples drawn from classical texts.
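Jensen-Shannon divergence, which one of the bundled programs computes, is the symmetrized, smoothed variant of KL divergence. A minimal sketch for discrete distributions given as aligned probability lists (base-2 logs, so the result lies in [0, 1]):

```python
from math import log2

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) in bits; 0 * log 0 is taken as 0."""
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jsd(p, q):
    """Jensen-Shannon divergence: average KL of p and q against their mixture."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Identical distributions score 0; completely disjoint ones score 1 bit.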
NLTK
- This book grew out of lecture notes from a university course; paired with its accompanying runtime environment it proved very effective for teaching and was later published as a book. It is aimed mainly at the field of Chinese information processing and treats each sub-problem of information processing in depth. The book covers a wide range of introductory topics in NLP and shows how to carry out the processing tasks using the toolkit.
CRF-0.53
- crf++-0.53.zip. CRF++ is a simple, customizable, open-source implementation of Conditional Random Fields (CRFs) for segmenting/labeling sequential data. CRF++ is designed for general use and can be applied to a variety of NLP tasks.
NLP-test-and-amendament
- Natural language processing (NLP) coursework: midterm exam corrections. Term explanation (each term must be illustrated with an example). Word-sense disambiguation: in computational linguistics, word-sense disambiguation (WSD) is the process of identifying which sense of a word is used in a given sentence when the word has a number of meanings.
nlp
- Natural-language word segmentation, bundled with a dictionary and the text to be segmented.
NLP
- Fundamentals of Chinese Information Processing by Zhan Weidong, a Tsinghua professor: natural language processing teaching slides, related documents, and source code.
Introduction to Automata Theory, Languages, and Computation (3rd edition).pdf
- Chinese third edition of Introduction to Automata Theory, Languages, and Computation, a foundational textbook for natural language processing.
Computational Linguistics - Chang Baobao
- Lecture slides for the Computational Linguistics course at Peking University, organized by chapter.
jieba_plus
- Fixes some bugs in jieba word segmentation, including handling of full-width letters and digits; still being updated.
NamedEntityRecognition
- NamedEntityRecognition
HanLP-master
- Named entity recognition; HanLP, from GitHub.
jiebacut.py
- Handles Chinese word segmentation with jieba: segments text and computes word-frequency statistics.
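The segment-then-count pipeline this script performs can be sketched with the standard library. jieba itself may not be installed here, so a hand-written token list stands in for the output of jieba.lcut(text):

```python
from collections import Counter

def word_freq(tokens, top_n=3):
    """Top-n (word, count) pairs for a sequence of tokens."""
    return Counter(tokens).most_common(top_n)

# With jieba installed, the tokens would come from: tokens = jieba.lcut(text)
tokens = ["自然", "语言", "处理", "自然", "语言", "学习"]
```

Counter.most_common sorts by descending count, breaking ties by the order in which words were first seen.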
wiki_100
- 100-dimensional word vectors trained on Chinese Wikipedia.
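Pretrained vectors like these are usually compared by cosine similarity. A minimal stdlib sketch over plain Python lists (no assumption is made about the on-disk format of wiki_100 itself):

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)
```

Parallel vectors score 1.0, orthogonal ones 0.0; nearest neighbors under this measure are the "most similar words".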
boson nlp for ner
- BosonNLP applied to named entity recognition (NER).
BosonNLP_sentiment_score
- Text sentiment analysis and semantic analysis: scoring positive and negative tone (sentiment analysis for short texts).