Commit e5be06f

import pymc as mc -> pm in chapter 3 + typos
1 parent 30b42f8 commit e5be06f

File tree

1 file changed: +34 -34 lines changed

Chapter3_MCMC/IntroMCMC.ipynb

Lines changed: 34 additions & 34 deletions
@@ -213,7 +213,7 @@
 "plt.title(\"Landscape formed by Uniform priors on $p_1, p_2$.\")\n",
 "\n",
 "subplot(223)\n",
-"plt.contour(X, Y, M * L)\n",
+"plt.contour(x, y, M * L)\n",
 "im = plt.imshow(M * L, interpolation='none', origin='lower',\n",
 "                cmap=cm.jet, extent=(0, 5, 0, 5))\n",
 "plt.title(\"Landscape warped by %d data observation;\\n Uniform priors on $p_1, p_2$.\" % N)\n",
@@ -226,7 +226,7 @@
 "exp_y = stats.expon.pdf(x, loc=0, scale=10)\n",
 "M = np.dot(exp_x[:, None], exp_y[None, :])\n",
 "\n",
-"plt.contour(X, Y, M)\n",
+"plt.contour(x, y, M)\n",
 "im = plt.imshow(M, interpolation='none', origin='lower',\n",
 "                cmap=cm.jet, extent=(0, 5, 0, 5))\n",
 "plt.scatter(lambda_2_true, lambda_1_true, c=\"k\", s=50, edgecolor=\"none\")\n",
@@ -236,7 +236,7 @@
 "\n",
 "subplot(224)\n",
 "# This is the likelihood times prior, that results in the posterior.\n",
-"plt.contour(X, Y, M * L)\n",
+"plt.contour(x, y, M * L)\n",
 "im = plt.imshow(M * L, interpolation='none', origin='lower',\n",
 "                cmap=cm.jet, extent=(0, 5, 0, 5))\n",
 "\n",
@@ -383,11 +383,11 @@
 "cell_type": "code",
 "collapsed": false,
 "input": [
-"import pymc as mc\n",
+"import pymc as pm\n",
 "\n",
-"p = mc.Uniform(\"p\", 0, 1)\n",
+"p = pm.Uniform(\"p\", 0, 1)\n",
 "\n",
-"assignment = mc.Categorical(\"assignment\", [p, 1-p], size=data.shape[0])\n",
+"assignment = pm.Categorical(\"assignment\", [p, 1-p], size=data.shape[0])\n",
 "print \"prior assignment, with p = %.2f:\" % p.value\n",
 "print assignment.value[:10], \"...\""
 ],
@@ -415,7 +415,7 @@
 "\n",
 "In PyMC, we can do this in one step by writing:\n",
 "\n",
-"    taus = 1.0/mc.Uniform( \"stds\", 0, 100, size= 2)**2 \n",
+"    taus = 1.0/pm.Uniform( \"stds\", 0, 100, size= 2)**2 \n",
 "\n",
 "Notice that we specified `size=2`: we are modeling both $\\tau$s as a single PyMC variable. Note that is does not induce a necessary relationship between the two $\\tau$s, it is simply for succinctness.\n",
 "\n",
@@ -426,21 +426,21 @@
 "cell_type": "code",
 "collapsed": false,
 "input": [
-"taus = 1.0/mc.Uniform(\"stds\", 0, 100, size=2) ** 2\n",
-"centers = mc.Normal(\"centers\", [120, 190], [0.01, 0.01], size=2)\n",
+"taus = 1.0/pm.Uniform(\"stds\", 0, 100, size=2) ** 2\n",
+"centers = pm.Normal(\"centers\", [120, 190], [0.01, 0.01], size=2)\n",
 "\n",
 "\"\"\"\n",
-"The below deterministic functions map a assignment, in this case 0 or 1,\n",
-"to a set of parameters, located in the (1,2) arrays `taus` and `centers.`\n",
+"The below deterministic functions map an assignment, in this case 0 or 1,\n",
+"to a set of parameters, located in the (1,2) arrays `taus` and `centers`.\n",
 "\"\"\"\n",
 "\n",
 "\n",
-"@mc.deterministic\n",
+"@pm.deterministic\n",
 "def center_i(assignment=assignment, centers=centers):\n",
 "    return centers[assignment]\n",
 "\n",
 "\n",
-"@mc.deterministic\n",
+"@pm.deterministic\n",
 "def tau_i(assignment=assignment, taus=taus):\n",
 "    return taus[assignment]\n",
 "\n",
@@ -469,10 +469,10 @@
 "collapsed": false,
 "input": [
 "#and to combine it with the observations:\n",
-"observations = mc.Normal(\"obs\", center_i, tau_i, value=data, observed=True)\n",
+"observations = pm.Normal(\"obs\", center_i, tau_i, value=data, observed=True)\n",
 "\n",
 "#below we create a model class\n",
-"model = mc.Model([p, assignment, taus, centers])"
+"model = pm.Model([p, assignment, taus, centers])"
 ],
 "language": "python",
 "metadata": {},
@@ -485,7 +485,7 @@
 "source": [
 "PyMC has an MCMC class, `MCMC` in the main namespace of PyMC, that implements the MCMC exploring algorithm. We initialize it by passing in a `Model` instance:\n",
 "\n",
-"    mcmc = mc.MCMC( model )\n",
+"    mcmc = pm.MCMC( model )\n",
 "\n",
 "The method for asking the `MCMC` to explore the space is `sample( iterations )`, where `iterations` is the number of steps you wish the algorithm to perform. We try 50000 steps below:"
 ]
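A self-contained toy version of the `Model` -> `MCMC` -> `sample` workflow described in this cell, assuming PyMC 2.x; the variable `mu` and the synthetic data are invented for illustration and are not part of the notebook:

    import numpy as np
    import pymc as pm

    data = np.random.normal(10., 2., size=100)            # fake observations

    mu = pm.Uniform("mu", 0, 100)                          # the unknown to infer
    obs = pm.Normal("obs", mu, 1.0, value=data, observed=True)

    mcmc = pm.MCMC(pm.Model([mu, obs]))                    # build the sampler from a Model
    mcmc.sample(50000)                                     # run 50000 MCMC steps

    print mcmc.trace("mu")[:].mean()                       # posterior mean of mu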
@@ -494,7 +494,7 @@
 "cell_type": "code",
 "collapsed": false,
 "input": [
-"mcmc = mc.MCMC(model)\n",
+"mcmc = pm.MCMC(model)\n",
 "mcmc.sample(50000)"
 ],
 "language": "python",
@@ -574,7 +574,7 @@
 "3. The traces appear as a random \"walk\" around the space, that is, the paths exhibit correlation with previous positions. This is both good and bad. We will always have correlation between current positions and the previous positions, but too much of it means we are not exploring the space well. This will be detailed in the Diagnostics section later in this chapter.\n",
 "\n",
 "\n",
-"To achieve further convergence, we will perform more MCMC steps. Starting the MCMC again after it has already been called does not mean starting the entire algorithm over. In the pseudo-code algorithm of MCMC above, the only position that matters is the current position (new positions are investigated near the current position), implicitly stored in PyMC variables' `value` attribute. Thus it is fine to halt an MCMC algorithm and inspect its progress, with the intention of starting it up again later. The `value' attributes are not overwritten. \n",
+"To achieve further convergence, we will perform more MCMC steps. Starting the MCMC again after it has already been called does not mean starting the entire algorithm over. In the pseudo-code algorithm of MCMC above, the only position that matters is the current position (new positions are investigated near the current position), implicitly stored in PyMC variables' `value` attribute. Thus it is fine to halt an MCMC algorithm and inspect its progress, with the intention of starting it up again later. The `value` attributes are not overwritten. \n",
 "\n",
 "We will sample the MCMC one hundred thousand more times and visualize the progress below:"
 ]
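A small sketch of the halt-and-resume behaviour described in the changed line above, assuming PyMC 2.x; the one-variable model is invented for illustration:

    import pymc as pm

    x = pm.Normal("x", 0, 1)
    mcmc = pm.MCMC(pm.Model([x]))

    mcmc.sample(5000)
    print x.value        # the current position survives between calls...

    mcmc.sample(5000)    # ...so this call resumes from where the last one stopped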
@@ -807,12 +807,12 @@
 "cell_type": "code",
 "collapsed": false,
 "input": [
-"import pymc as mc\n",
+"import pymc as pm\n",
 "\n",
-"x = mc.Normal(\"x\", 4, 10)\n",
-"y = mc.Lambda(\"y\", lambda x=x: 10 - x, trace=True)\n",
+"x = pm.Normal(\"x\", 4, 10)\n",
+"y = pm.Lambda(\"y\", lambda x=x: 10 - x, trace=True)\n",
 "\n",
-"ex_mcmc = mc.MCMC(mc.Model([x, y]))\n",
+"ex_mcmc = pm.MCMC(pm.Model([x, y]))\n",
 "ex_mcmc.sample(500)\n",
 "\n",
 "plt.plot(ex_mcmc.trace(\"x\")[:])\n",
@@ -930,7 +930,7 @@
 "\n",
 "Of course, we do not know where the MAP is. PyMC provides an object that will approximate, if not find, the MAP location. In the PyMC main namespace is the `MAP` object that accepts a PyMC `Model` instance. Calling `.fit()` from the `MAP` instance sets the variables in the model to their MAP values.\n",
 "\n",
-"    map_ = mc.MAP( model )\n",
+"    map_ = pm.MAP( model )\n",
 "    map_.fit()\n",
 "\n",
 "The `MAP.fit()` methods has the flexibility of allowing the user to choose which optimization algorithm to use (after all, this is a optimization problem: we are looking for the values that maximize our landscape), as not all optimization algorithms are created equal. The default optimization algorithm in the call to `fit` is scipy's `fmin` algorithm (which attempts to minimize the *negative of the landscape*). An alternative algorithm that is available is Powell's Method, a favourite of PyMC blogger [Abraham Flaxman](http://healthyalgorithms.com/) [1], by calling `fit(method='fmin_powell')`. From my experience, I use the default, but if my convergence is slow or not guaranteed, I experiment with Powell's method. \n",
@@ -943,12 +943,12 @@
 "\n",
 "It is still a good idea to provide a burn-in period, even if we are using `MAP` prior to calling `MCMC.sample`, just to be safe. We can have PyMC automatically discard the first $n$ samples by specifying the `burn` parameter in the call to `sample`. As one does not know when the chain has fully converged, I like to assign the first *half* of my samples to be discarded, sometimes up to 90% of my samples for longer runs. To continue the clustering example from above, my new code would look something like:\n",
 "\n",
-"    model = mc.Model( [p, assignment, taus, centers ] )\n",
+"    model = pm.Model( [p, assignment, taus, centers ] )\n",
 "\n",
-"    map_ = mc.MAP( model )\n",
+"    map_ = pm.MAP( model )\n",
 "    map_.fit() #stores the fitted variables' values in foo.value\n",
 "\n",
-"    mcmc = mc.MCMC( model )\n",
+"    mcmc = pm.MCMC( model )\n",
 "    mcmc.sample( 100000, 50000 )\n"
 ]
 },
@@ -978,12 +978,12 @@
 "input": [
 "figsize(12.5, 4)\n",
 "\n",
-"import pymc as mc\n",
-"x_t = mc.rnormal(0, 1, 200)\n",
+"import pymc as pm\n",
+"x_t = pm.rnormal(0, 1, 200)\n",
 "x_t[0] = 0\n",
 "y_t = np.zeros(200)\n",
 "for i in range(1, 200):\n",
-"    y_t[i] = mc.rnormal(y_t[i - 1], 1)\n",
+"    y_t[i] = pm.rnormal(y_t[i - 1], 1)\n",
 "\n",
 "plt.plot(y_t, label=\"$y_t$\", lw=3)\n",
 "plt.plot(x_t, label=\"$x_t$\", lw=3)\n",
@@ -1055,7 +1055,7 @@
 "\n",
 "A chain that is [Isn't meandering exploring?] exploring the space well will exhibit very high autocorrelation. Visually, if the trace seems to meander like a river, and not settle down, the chain will have high autocorrelation.\n",
 "\n",
-"This does not imply that a converged MCMC has low autocorrelation. Hence low autocorrelation is not necessary for convergence, but it is sufficient. PyMC has an built-in autocorrelation plotting function in the `Matplot` module. "
+"This does not imply that a converged MCMC has low autocorrelation. Hence low autocorrelation is not necessary for convergence, but it is sufficient. PyMC has a built-in autocorrelation plotting function in the `Matplot` module. "
 ]
 },
 {
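As a companion to the `Matplot` pointer in the cell above, autocorrelation can also be eyeballed with a few lines of numpy/matplotlib; this sketch is illustrative and uses a synthetic random walk rather than an actual trace:

    import numpy as np
    import matplotlib.pyplot as plt

    def autocorr(series, max_lag=100):
        # normalized sample autocorrelation for lags 0 .. max_lag-1
        s = series - series.mean()
        acf = np.correlate(s, s, mode='full')[len(s) - 1:]
        return acf[:max_lag] / acf[0]

    y_t = np.cumsum(np.random.normal(0, 1, 200))   # a meandering walk: high autocorrelation
    plt.bar(range(100), autocorr(y_t), width=1)
    plt.title("Sample autocorrelation of a random walk")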
@@ -1107,7 +1107,7 @@
 "\n",
 "What is a good amount of thinning? The returned samples will always exhibit some autocorrelation, regardless of how much thinning is done. So long as the autocorrelation tends to zero, you are probably ok. Typically thinning of more than 10 is not necessary.\n",
 "\n",
-"PyMC exposes a `thinning` parameter in the call the `sample`, for example: `sample( 10000, burn = 5000, thinning = 5)`. "
+"PyMC exposes a `thinning` parameter in the call to `sample`, for example: `sample( 10000, burn = 5000, thinning = 5)`. "
 ]
 },
 {
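Whatever thinning option the sampler itself exposes, a returned trace can always be thinned after the fact by plain slicing. A small self-contained sketch, assuming PyMC 2.x and an illustrative one-variable model:

    import pymc as pm

    x = pm.Normal("x", 0, 1)
    mcmc = pm.MCMC(pm.Model([x]))
    mcmc.sample(10000, burn=5000)

    trace = mcmc.trace("x")[:]
    thinned = trace[::10]              # keep every 10th sample to reduce autocorrelation
    print len(trace), len(thinned)     # 5000, 500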
@@ -1198,9 +1198,9 @@
 "\n",
 "### Intelligent starting values\n",
 "\n",
-"It would be great to start the MCMC algorithm off near the posterior distribution, so that it will take little time to start sampling correctly. We can aid the algorithm by telling where we *think* the posterior distribution will be by specifying the `value` parameter in the `Stochastic` variable creation. In many cases we can produce a reasonable guess for the parameter. For example, if we have data from a Normal distribution, and we wish to estimate the $\\mu$ parameter, then a good starting value would the *mean* of the data. \n",
+"It would be great to start the MCMC algorithm off near the posterior distribution, so that it will take little time to start sampling correctly. We can aid the algorithm by telling where we *think* the posterior distribution will be by specifying the `value` parameter in the `Stochastic` variable creation. In many cases we can produce a reasonable guess for the parameter. For example, if we have data from a Normal distribution, and we wish to estimate the $\\mu$ parameter, then a good starting value would be the *mean* of the data. \n",
 "\n",
-"    mu = mc.Uniform( \"mu\", 0, 100, value = data.mean() )\n",
+"    mu = pm.Uniform( \"mu\", 0, 100, value = data.mean() )\n",
 "\n",
 "For most parameters in models, there is a frequentist estimate of it. These estimates are a good starting value for our MCMC algorithms. Of course, this is not always possible for some variables, but including as many appropriate initial values is always a good idea. Even if your guesses are wrong, the MCMC will still converge to the proper distribution, so there is little to lose.\n",
 "\n",
@@ -1338,4 +1338,4 @@
 "metadata": {}
 }
 ]
-}
+}
