
Commit 7e4794c

add data
1 parent 5617a9a commit 7e4794c

File tree

7 files changed, +36782 -0 lines changed


《Python数据挖掘入门与实践》/data/ad.data

Lines changed: 3279 additions & 0 deletions
Large diffs are not rendered by default.

《Python数据挖掘入门与实践》/data/adult.data

Lines changed: 32562 additions & 0 deletions
Large diffs are not rendered by default.
Lines changed: 110 additions & 0 deletions
@@ -0,0 +1,110 @@
| This data was extracted from the census bureau database found at
| http://www.census.gov/ftp/pub/DES/www/welcome.html
| Donor: Ronny Kohavi and Barry Becker,
|        Data Mining and Visualization
|        Silicon Graphics.
|        e-mail: [email protected] for questions.
| Split into train-test using MLC++ GenCVFiles (2/3, 1/3 random).
| 48842 instances, mix of continuous and discrete (train=32561, test=16281)
| 45222 if instances with unknown values are removed (train=30162, test=15060)
| Duplicate or conflicting instances : 6
| Class probabilities for adult.all file
| Probability for the label '>50K'  : 23.93% / 24.78% (without unknowns)
| Probability for the label '<=50K' : 76.07% / 75.22% (without unknowns)
|
| Extraction was done by Barry Becker from the 1994 Census database. A set of
| reasonably clean records was extracted using the following conditions:
| ((AAGE>16) && (AGI>100) && (AFNLWGT>1) && (HRSWK>0))
|
| Prediction task is to determine whether a person makes over 50K
| a year.
|
| First cited in:
| @inproceedings{kohavi-nbtree,
|   author={Ron Kohavi},
|   title={Scaling Up the Accuracy of Naive-Bayes Classifiers: a
|          Decision-Tree Hybrid},
|   booktitle={Proceedings of the Second International Conference on
|              Knowledge Discovery and Data Mining},
|   year = 1996,
|   pages={to appear}}
|
| Error Accuracy reported as follows, after removal of unknowns from
| train/test sets):
| C4.5        : 84.46+-0.30
| Naive-Bayes : 83.88+-0.30
| NBTree      : 85.90+-0.28
|
|
| Following algorithms were later run with the following error rates,
| all after removal of unknowns and using the original train/test split.
| All these numbers are straight runs using MLC++ with default values.
|
|    Algorithm               Error
| -- ----------------------- -----
|  1 C4.5                    15.54
|  2 C4.5-auto               14.46
|  3 C4.5 rules              14.94
|  4 Voted ID3 (0.6)         15.64
|  5 Voted ID3 (0.8)         16.47
|  6 T2                      16.84
|  7 1R                      19.54
|  8 NBTree                  14.10
|  9 CN2                     16.00
| 10 HOODG                   14.82
| 11 FSS Naive Bayes         14.05
| 12 IDTM (Decision table)   14.46
| 13 Naive-Bayes             16.12
| 14 Nearest-neighbor (1)    21.42
| 15 Nearest-neighbor (3)    20.35
| 16 OC1                     15.04
| 17 Pebls                   Crashed. Unknown why (bounds WERE increased)
|
| Conversion of original data as follows:
| 1. Discretized agrossincome into two ranges with threshold 50,000.
| 2. Convert U.S. to US to avoid periods.
| 3. Convert Unknown to "?"
| 4. Run MLC++ GenCVFiles to generate data,test.
|
| Description of fnlwgt (final weight)
|
| The weights on the CPS files are controlled to independent estimates of the
| civilian noninstitutional population of the US. These are prepared monthly
| for us by Population Division here at the Census Bureau. We use 3 sets of
| controls.
| These are:
| 1. A single cell estimate of the population 16+ for each state.
| 2. Controls for Hispanic Origin by age and sex.
| 3. Controls by Race, age and sex.
|
| We use all three sets of controls in our weighting program and "rake" through
| them 6 times so that by the end we come back to all the controls we used.
|
| The term estimate refers to population totals derived from CPS by creating
| "weighted tallies" of any specified socio-economic characteristics of the
| population.
|
| People with similar demographic characteristics should have
| similar weights. There is one important caveat to remember
| about this statement. That is that since the CPS sample is
| actually a collection of 51 state samples, each with its own
| probability of selection, the statement only applies within
| state.

>50K, <=50K.

age: continuous.
workclass: Private, Self-emp-not-inc, Self-emp-inc, Federal-gov, Local-gov, State-gov, Without-pay, Never-worked.
fnlwgt: continuous.
education: Bachelors, Some-college, 11th, HS-grad, Prof-school, Assoc-acdm, Assoc-voc, 9th, 7th-8th, 12th, Masters, 1st-4th, 10th, Doctorate, 5th-6th, Preschool.
education-num: continuous.
marital-status: Married-civ-spouse, Divorced, Never-married, Separated, Widowed, Married-spouse-absent, Married-AF-spouse.
occupation: Tech-support, Craft-repair, Other-service, Sales, Exec-managerial, Prof-specialty, Handlers-cleaners, Machine-op-inspct, Adm-clerical, Farming-fishing, Transport-moving, Priv-house-serv, Protective-serv, Armed-Forces.
relationship: Wife, Own-child, Husband, Not-in-family, Other-relative, Unmarried.
race: White, Asian-Pac-Islander, Amer-Indian-Eskimo, Other, Black.
sex: Female, Male.
capital-gain: continuous.
capital-loss: continuous.
hours-per-week: continuous.
native-country: United-States, Cambodia, England, Puerto-Rico, Canada, Germany, Outlying-US(Guam-USVI-etc), India, Japan, Greece, South, China, Cuba, Iran, Honduras, Philippines, Italy, Poland, Jamaica, Vietnam, Mexico, Portugal, Ireland, France, Dominican-Republic, Laos, Ecuador, Taiwan, Haiti, Columbia, Hungary, Guatemala, Nicaragua, Scotland, Thailand, Yugoslavia, El-Salvador, Trinadad&Tobago, Peru, Hong, Holand-Netherlands.
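Given the attribute list above, the accompanying adult.data file can be read with pandas roughly as sketched below. This is only an illustrative sketch, not part of the commit: the relative path, the trailing "income" label column name, and the na_values/skipinitialspace settings are assumptions based on the description above.

    import pandas as pd

    # Column names follow the attribute list above; the final column holds the
    # >50K / <=50K label and is named "income" here for convenience.
    columns = ["age", "workclass", "fnlwgt", "education", "education-num",
               "marital-status", "occupation", "relationship", "race", "sex",
               "capital-gain", "capital-loss", "hours-per-week", "native-country",
               "income"]

    # adult.data has no header row; unknown values are encoded as "?" and fields
    # are separated by ", ", hence skipinitialspace=True.
    adult = pd.read_csv("data/adult.data", names=columns,
                        na_values="?", skipinitialspace=True)
    print(adult["income"].value_counts())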
Lines changed: 29 additions & 0 deletions
@@ -0,0 +1,29 @@
import numpy as np
from numpy.testing import assert_array_equal

# MeanDiscrete is the transformer under test; it must be defined or imported
# in this module before the test runs (a sketch of it is given below).

def test_meandiscrete():
    X_test = np.array([[ 0,  2],
                       [ 3,  5],
                       [ 6,  8],
                       [ 9, 11],
                       [12, 14],
                       [15, 17],
                       [18, 20],
                       [21, 23],
                       [24, 26],
                       [27, 29]])
    # Fitting should record the per-column means of the training data.
    mean_discrete = MeanDiscrete()
    mean_discrete.fit(X_test)
    assert_array_equal(mean_discrete.mean, np.array([13.5, 15.5]))
    # Transforming should binarise each value against its column's mean.
    X_transformed = mean_discrete.transform(X_test)
    X_expected = np.array([[0, 0],
                           [0, 0],
                           [0, 0],
                           [0, 0],
                           [0, 0],
                           [1, 1],
                           [1, 1],
                           [1, 1],
                           [1, 1],
                           [1, 1]])
    assert_array_equal(X_transformed, X_expected)
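The test above references MeanDiscrete without importing or defining it. A minimal sketch consistent with the expected values in the test is shown below; it is an assumption rather than the book's actual implementation, and the use of scikit-learn's TransformerMixin and the attribute name mean are likewise assumed.

    import numpy as np
    from sklearn.base import TransformerMixin

    class MeanDiscrete(TransformerMixin):
        """Binarise each feature by comparing it with that feature's mean."""

        def fit(self, X, y=None):
            X = np.asarray(X, dtype=float)
            # Per-column mean; for the fixture above this is [13.5, 15.5].
            self.mean = X.mean(axis=0)
            return self

        def transform(self, X):
            X = np.asarray(X, dtype=float)
            # 1 where a value exceeds its column's mean, 0 otherwise.
            return (X > self.mean).astype(int)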
Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
[1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
