Text Classification with scikit-learn



Categories: Data Mining, Machine Learning, Python · Posted 2014-04-13 20:53

I could not find a unified benchmark in the text-mining papers, so I had to run the experiments myself. If any reader passing by knows of published classification results on 20newsgroups or another good public dataset (ideally results for every class; using all features or only a subset does not matter), please leave a comment and let me know the current benchmark. Many thanks!

OK, on to the main content. The 20newsgroups homepage provides three versions of the dataset; here we use the original one.

The post is organized into the following steps:

- Load the dataset
- Extract features
- Classify
  o Naive Bayes
  o KNN
  o SVM
- Cluster

Note: there is a reference example on the scipy website, but it looks a bit messy and has bugs. In this post we will work through it block by block.

Environment: Python 2.7 + SciPy (scikit-learn)

1. Loading the dataset

#first extract the 20 newsgroups dataset to /scikit_learn_data
from sklearn.datasets import fetch_20newsgroups
#all categories
#newsgroup_train = fetch_20newsgroups(subset='train')
#part of the categories
categories = ['comp.graphics',
              'comp.os.ms-windows.misc',
              'comp.sys.ibm.pc.hardware',
              'comp.sys.mac.hardware',
              'comp.windows.x']
newsgroup_train = fetch_20newsgroups(subset='train', categories=categories)
#load the matching test split; the feature-extraction code below needs it
newsgroup_test = fetch_20newsgroups(subset='test', categories=categories)
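Before moving on, it helps to see what the loaded object actually contains. A small sketch of my own (not in the original post): each sample is the raw message text, paired with an integer label.

#peek at one sample (my addition): raw text plus an integer class id
print newsgroup_train.data[0][:200]                            #first 200 characters of document 0
print newsgroup_train.target[0]                                #its integer label
print newsgroup_train.target_names[newsgroup_train.target[0]]  #the corresponding category name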

You can also verify that the right categories were loaded:


#print the category names
from pprint import pprint
pprint(list(newsgroup_train.target_names))

Result:

['comp.graphics',

'comp.os.ms-windows.misc',

'comp.sys.ibm.pc.hardware',

'comp.sys.mac.hardware',

'comp.windows.x']
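As an extra sanity check (my addition, not part of the original post), you can also confirm the document counts per split; the same numbers reappear in the feature-matrix shapes printed below.

#document counts per split -- these match the 2936/1955 shapes shown later
print len(newsgroup_train.data)   #2936 training documents
print len(newsgroup_test.data)    #1955 test documents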

2. Extracting features: the newsgroup_train we just loaded is a collection of raw documents. We need to extract features from them (term frequencies and the like) using fit_transform.

Method 1. HashingVectorizer, which fixes the number of features in advance

#newsgroup_train.data holds the raw documents; we need to extract
#feature vectors in order to model the text data
from sklearn.feature_extraction.text import HashingVectorizer
vectorizer = HashingVectorizer(stop_words='english', non_negative=True,
                               n_features=10000)
fea_train = vectorizer.fit_transform(newsgroup_train.data)
#HashingVectorizer is stateless, so calling fit_transform on the test data is safe
fea_test = vectorizer.fit_transform(newsgroup_test.data)

#the returned matrices have shape [n_samples, n_features]
print 'Size of fea_train:' + repr(fea_train.shape)
print 'Size of fea_test:' + repr(fea_test.shape)
#with all categories: 11314 documents, 130107 features
print 'The average feature sparsity is {0:.3f}%'.format(
    fea_train.nnz / float(fea_train.shape[0] * fea_train.shape[1]) * 100)

Result:

Size of fea_train:(2936, 10000)

Size of fea_test:(1955, 10000)

The average feature sparsity is 1.002%

Because we kept only 10,000 terms (a 10,000-dimensional feature space), the sparsity is still not that low. In practice, TfidfVectorizer yields a vocabulary of tens of thousands of dimensions; over all the samples I counted more than 130,000 dimensions, which makes for a truly sparse matrix.
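If you want to reproduce that figure yourself, here is a quick check of my own (assuming the full training split fits in memory; the exact vocabulary size depends on the preprocessing options):

#dimensionality and sparsity over all 20 categories (sketch, not from the original post)
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer

all_train = fetch_20newsgroups(subset='train')   #all 20 categories, 11314 documents
tv_all = TfidfVectorizer()
X_all = tv_all.fit_transform(all_train.data)
print 'shape:', X_all.shape                      #vocabulary on the order of 130k terms
print 'sparsity: {0:.4f}%'.format(
    X_all.nnz / float(X_all.shape[0] * X_all.shape[1]) * 100)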

**************************************************************************************************************************

The code comments above noted that TF-IDF features extracted independently on the train and test sets have different dimensionality. So how do we make them match? There are two methods:

Method 2. CountVectorizer+TfidfTransformer

Have the two CountVectorizer instances share one vocabulary:

#----------------------------------------------------
#method 1: CountVectorizer + TfidfTransformer
print '*************************\nCountVectorizer+TfidfTransformer\n*************************'
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
count_v1 = CountVectorizer(stop_words='english', max_df=0.5)
counts_train = count_v1.fit_transform(newsgroup_train.data)
print "the shape of train is " + repr(counts_train.shape)

#reuse the training vocabulary so the test matrix gets identical columns
count_v2 = CountVectorizer(vocabulary=count_v1.vocabulary_)
counts_test = count_v2.fit_transform(newsgroup_test.data)
print "the shape of test is " + repr(counts_test.shape)

tfidftransformer = TfidfTransformer()

tfidf_train = tfidftransformer.fit(counts_train).transform(counts_train)
#transform (rather than refit on) the test counts, so the IDF learned on train is reused
tfidf_test = tfidftransformer.transform(counts_test)

Result:

*************************

CountVectorizer+TfidfTransformer

*************************

the shape of train is (2936, 66433)

the shape of test is (1955, 66433)
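To see why sharing the vocabulary matters, here is a toy illustration of my own (not from the original post): with a fixed vocabulary, a term that appears only in the test data is simply dropped, so both matrices end up with identical columns.

#toy example: the shared vocabulary keeps train/test dimensions aligned
from sklearn.feature_extraction.text import CountVectorizer
cv_a = CountVectorizer()
m_train = cv_a.fit_transform(['apple banana', 'banana cherry'])
cv_b = CountVectorizer(vocabulary=cv_a.vocabulary_)
m_test = cv_b.fit_transform(['banana durian'])   #'durian' is unseen, so it is ignored
print m_train.shape   #(2, 3)
print m_test.shape    #(1, 3) -- the same 3 columns as train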

Method 3. TfidfVectorizer

Have the two TfidfVectorizer instances share one vocabulary:

#method 2: TfidfVectorizer
print '*************************\nTfidfVectorizer\n*************************'
from sklearn.feature_extraction.text import TfidfVectorizer
tv = TfidfVectorizer(sublinear_tf=True,
                     max_df=0.5,
                     stop_words='english')
tfidf_train_2 = tv.fit_transform(newsgroup_train.data)
#share the training vocabulary with the test-set vectorizer
#(note: tv2 still recomputes IDF on the test data; for strictly train-fitted
#IDF you could call tv.transform(newsgroup_test.data) instead)
tv2 = TfidfVectorizer(vocabulary=tv.vocabulary_)
tfidf_test_2 = tv2.fit_transform(newsgroup_test.data)
print "the shape of train is " + repr(tfidf_train_2.shape)
print "the shape of test is " + repr(tfidf_test_2.shape)
analyze = tv.build_analyzer()
tv.get_feature_names()  #the terms used as features

Result:

*************************

TfidfVectorizer

*************************

the shape of train is (2936, 66433)

the shape of test is (1955, 66433)
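The outline at the top lists classification (Naive Bayes, KNN, SVM) and clustering as the next steps. As a bridge, here is a minimal sketch of my own, not the remainder of the original article, showing how the tfidf_train_2/tfidf_test_2 matrices from Method 3 could feed a Multinomial Naive Bayes classifier (the alpha value is an arbitrary choice for illustration):

#minimal classification sketch (my own), using the Method 3 features above
from sklearn.naive_bayes import MultinomialNB
from sklearn import metrics

clf = MultinomialNB(alpha=0.01)                  #smoothing parameter chosen arbitrarily
clf.fit(tfidf_train_2, newsgroup_train.target)   #train on the tf-idf features
pred = clf.predict(tfidf_test_2)                 #predict labels for the test split
print 'accuracy:', metrics.accuracy_score(newsgroup_test.target, pred)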
