
Commit d694058

Merge pull request #85 from xitu/v1.7-pro
update v1.7
2 parents 0bce986 + e1d7563 commit d694058

17 files changed: +324 -174 lines changed

about/uses.md

Lines changed: 3 additions & 3 deletions
@@ -18,9 +18,9 @@ This section describes some of the current uses of the TensorFlow system.

> If you are using TensorFlow for research, for education, or for production
> usage in some product, we would love to add something about your usage here.
-> Please feel free to email us a brief description of how you're using
-> TensorFlow, or even better, send us a pull request to add an entry to this
-> file.
+> Please feel free to [email us](mailto:[email protected]) a brief
+> description of how you're using TensorFlow, or even better, send us a
+> pull request to add an entry to this file.

* **Deep Speech**
<ul>

api_guides/python/contrib.bayesflow.monte_carlo.md

Lines changed: 18 additions & 18 deletions
@@ -6,42 +6,42 @@ Monte Carlo integration and helpers.
## Background

Monte Carlo integration refers to the practice of estimating an expectation with
-a sample mean. For example, given random variable `Z in R^k` with density `p`,
+a sample mean. For example, given random variable `Z in \\(R^k\\)` with density `p`,
the expectation of function `f` can be approximated like:

```
-E_p[f(Z)] = \int f(z) p(z) dz
-~ S_n
-:= n^{-1} \sum_{i=1}^n f(z_i), z_i iid samples from p.
+$$E_p[f(Z)] = \int f(z) p(z) dz$$
+$$ ~ S_n
+:= n^{-1} \sum_{i=1}^n f(z_i), z_i\ iid\ samples\ from\ p.$$
```

-If `E_p[|f(Z)|] < infinity`, then `S_n --> E_p[f(Z)]` by the strong law of large
-numbers. If `E_p[f(Z)^2] < infinity`, then `S_n` is asymptotically normal with
-variance `Var[f(Z)] / n`.
+If `\\(E_p[|f(Z)|] < infinity\\)`, then `\\(S_n\\) --> \\(E_p[f(Z)]\\)` by the strong law of large
+numbers. If `\\(E_p[f(Z)^2] < infinity\\)`, then `\\(S_n\\)` is asymptotically normal with
+variance `\\(Var[f(Z)] / n\\)`.

Practitioners of Bayesian statistics often find themselves wanting to estimate
-`E_p[f(Z)]` when the distribution `p` is known only up to a constant. For
+`\\(E_p[f(Z)]\\)` when the distribution `p` is known only up to a constant. For
example, the joint distribution `p(z, x)` may be known, but the evidence
-`p(x) = \int p(z, x) dz` may be intractable. In that case, a parameterized
-distribution family `q_lambda(z)` may be chosen, and the optimal `lambda` is the
-one minimizing the KL divergence between `q_lambda(z)` and
-`p(z | x)`. We only know `p(z, x)`, but that is sufficient to find `lambda`.
+`\\(p(x) = \int p(z, x) dz\\)` may be intractable. In that case, a parameterized
+distribution family `\\(q_\lambda(z)\\)` may be chosen, and the optimal `\\(\lambda\\)` is the
+one minimizing the KL divergence between `\\(q_\lambda(z)\\)` and
+`\\(p(z | x)\\)`. We only know `p(z, x)`, but that is sufficient to find `\\(\lambda\\)`.


## Log-space evaluation and subtracting the maximum

Care must be taken when the random variable lives in a high dimensional space.
-For example, the naive importance sample estimate `E_q[f(Z) p(Z) / q(Z)]`
-involves the ratio of two terms `p(Z) / q(Z)`, each of which must have tails
-dropping off faster than `O(|z|^{-(k + 1)})` in order to have finite integral.
+For example, the naive importance sample estimate `\\(E_q[f(Z) p(Z) / q(Z)]\\)`
+involves the ratio of two terms `\\(p(Z) / q(Z)\\)`, each of which must have tails
+dropping off faster than `\\(O(|z|^{-(k + 1)})\\)` in order to have finite integral.
This ratio would often be zero or infinity up to numerical precision.

For that reason, we write

```
-Log E_q[ f(Z) p(Z) / q(Z) ]
-= Log E_q[ exp{Log[f(Z)] + Log[p(Z)] - Log[q(Z)] - C} ] + C, where
-C := Max[ Log[f(Z)] + Log[p(Z)] - Log[q(Z)] ].
+$$Log E_q[ f(Z) p(Z) / q(Z) ]$$
+$$ = Log E_q[ \exp\{Log[f(Z)] + Log[p(Z)] - Log[q(Z)] - C\} ] + C,$$ where
+$$C := Max[ Log[f(Z)] + Log[p(Z)] - Log[q(Z)] ].$$
```

The maximum value of the exponentiated term will be 0.0, and the expectation
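
A minimal NumPy sketch of the estimator and the max-subtraction trick described in this hunk (the `log_mean_exp` helper is illustrative, not part of the module):

```python
import numpy as np

# Estimate E_p[f(Z)] with a sample mean, assuming we can sample from p directly.
# Here p = N(0, 1) and f(z) = z**2, so the true expectation is 1.
rng = np.random.RandomState(0)
z = rng.randn(10000)
s_n = np.mean(z ** 2)  # S_n, close to E_p[f(Z)] = 1

# Log-space evaluation with the maximum subtracted, as in the hunk above:
# log E[exp(v)] computed stably by pulling out C = max(v) before exponentiating.
def log_mean_exp(values):
    c = np.max(values)
    return np.log(np.mean(np.exp(values - c))) + c
```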

api_guides/python/contrib.distributions.bijectors.md

Lines changed: 0 additions & 1 deletion
@@ -28,6 +28,5 @@ To apply a `Bijector`, use `distributions.TransformedDistribution`.
* @{tf.contrib.distributions.bijectors.Inline}
* @{tf.contrib.distributions.bijectors.Invert}
* @{tf.contrib.distributions.bijectors.PowerTransform}
-* @{tf.contrib.distributions.bijectors.SigmoidCentered}
* @{tf.contrib.distributions.bijectors.SoftmaxCentered}
* @{tf.contrib.distributions.bijectors.Softplus}
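
For context, a minimal sketch of the `TransformedDistribution` pattern the hunk header refers to, using the `Softplus` bijector from the list (argument names follow the TF 1.x contrib API as we recall it; treat them as illustrative):

```python
import tensorflow as tf

tfd = tf.contrib.distributions
tfb = tfd.bijectors

# Push a standard normal through the Softplus bijector to obtain a
# distribution with positive support.
dist = tfd.TransformedDistribution(
    distribution=tfd.Normal(loc=0., scale=1.),
    bijector=tfb.Softplus())
samples = dist.sample(5)
```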

api_guides/python/contrib.graph_editor.md

Lines changed: 9 additions & 9 deletions
@@ -61,21 +61,21 @@ A subgraph can be created in several ways:

* using a list of ops:

-```python
-my_sgv = ge.sgv(ops)
-```
+    ```python
+    my_sgv = ge.sgv(ops)
+    ```

* from a name scope:

-```python
-my_sgv = ge.sgv_scope("foo/bar", graph=tf.get_default_graph())
-```
+    ```python
+    my_sgv = ge.sgv_scope("foo/bar", graph=tf.get_default_graph())
+    ```

* using regular expression:

-```python
-my_sgv = ge.sgv("foo/.*/.*read$", graph=tf.get_default_graph())
-```
+    ```python
+    my_sgv = ge.sgv("foo/.*/.*read$", graph=tf.get_default_graph())
+    ```

Note that the Graph Editor is meant to manipulate several graphs at the same
time, typically during transform or copy operation. For that reason,

api_guides/python/contrib.losses.md

Lines changed: 14 additions & 14 deletions
@@ -107,19 +107,19 @@ weighted average over the individual prediction errors:
loss = tf.contrib.losses.mean_squared_error(predictions, depths, weight)
```

-@{tf.contrib.losses.absolute_difference}
-@{tf.contrib.losses.add_loss}
-@{tf.contrib.losses.hinge_loss}
-@{tf.contrib.losses.compute_weighted_loss}
-@{tf.contrib.losses.cosine_distance}
-@{tf.contrib.losses.get_losses}
-@{tf.contrib.losses.get_regularization_losses}
-@{tf.contrib.losses.get_total_loss}
-@{tf.contrib.losses.log_loss}
-@{tf.contrib.losses.mean_pairwise_squared_error}
-@{tf.contrib.losses.mean_squared_error}
-@{tf.contrib.losses.sigmoid_cross_entropy}
-@{tf.contrib.losses.softmax_cross_entropy}
-@{tf.contrib.losses.sparse_softmax_cross_entropy}
+* @{tf.contrib.losses.absolute_difference}
+* @{tf.contrib.losses.add_loss}
+* @{tf.contrib.losses.hinge_loss}
+* @{tf.contrib.losses.compute_weighted_loss}
+* @{tf.contrib.losses.cosine_distance}
+* @{tf.contrib.losses.get_losses}
+* @{tf.contrib.losses.get_regularization_losses}
+* @{tf.contrib.losses.get_total_loss}
+* @{tf.contrib.losses.log_loss}
+* @{tf.contrib.losses.mean_pairwise_squared_error}
+* @{tf.contrib.losses.mean_squared_error}
+* @{tf.contrib.losses.sigmoid_cross_entropy}
+* @{tf.contrib.losses.softmax_cross_entropy}
+* @{tf.contrib.losses.sparse_softmax_cross_entropy}

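A short sketch of how the listed helpers combine, mirroring the positional call shown in the context line above (the auto-registration and aggregation behavior is stated as we recall it; treat it as illustrative rather than authoritative):

```python
import tensorflow as tf

predictions = tf.constant([1.0, 2.0, 3.0])
depths = tf.constant([1.5, 2.0, 2.5])
weight = tf.constant([1.0, 0.0, 1.0])

# Weighted MSE as in the context line; contrib losses register themselves in
# the losses collection, so get_total_loss() can sum them afterwards.
loss = tf.contrib.losses.mean_squared_error(predictions, depths, weight)
total_loss = tf.contrib.losses.get_total_loss()
```
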
api_guides/python/io_ops.md

Lines changed: 2 additions & 2 deletions
@@ -8,7 +8,7 @@ Note: Functions taking `Tensor` arguments can also take anything accepted by
## Placeholders

TensorFlow provides a placeholder operation that must be fed with data
-on execution. For more info, see the section on @{$reading_data#feeding$Feeding data}.
+on execution. For more info, see the section on @{$reading_data#Feeding$Feeding data}.

* @{tf.placeholder}
* @{tf.placeholder_with_default}

@@ -42,7 +42,7 @@ formats into tensors.

### Example protocol buffer

-TensorFlow's @{$reading_data#standard-tensorflow-format$recommended format for training examples}
+TensorFlow's @{$reading_data#standard_tensorflow_format$recommended format for training examples}
is serialized `Example` protocol buffers, [described
here](https://www.tensorflow.org/code/tensorflow/core/example/example.proto).
They contain `Features`, [described
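
A minimal sketch of the feeding pattern the first hunk's link refers to: a placeholder has no value of its own and must be supplied one through `feed_dict` at run time.

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 3])
y = tf.reduce_sum(x, axis=1)

with tf.Session() as sess:
    # Every run of `y` must feed a value for `x`.
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))  # -> [6.]
```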

api_guides/python/nn.md

Lines changed: 9 additions & 9 deletions
@@ -89,7 +89,7 @@ bottom. Note that this is different from existing libraries such as cuDNN and
Caffe, which explicitly specify the number of padded pixels and always pad the
same number of pixels on both sides.

-For the `'VALID`' scheme, the output height and width are computed as:
+For the `'VALID'` scheme, the output height and width are computed as:

out_height = ceil(float(in_height - filter_height + 1) / float(strides[1]))
out_width = ceil(float(in_width - filter_width + 1) / float(strides[2]))

@@ -98,10 +98,10 @@ and no padding is used.

Given the output size and the padding, the output can be computed as

-output[b, i, j, :] =
-sum_{di, dj} input[b, strides[1] * i + di - pad_top,
-strides[2] * j + dj - pad_left, ...] *
-filter[di, dj, ...]
+$$ output[b, i, j, :] =
+sum_{d_i, d_j} input[b, strides[1] * i + d_i - pad_{top},\
+strides[2] * j + d_j - pad_{left}, ...] *
+filter[d_i, d_j,\ ...]$$

where any value outside the original input image region are considered zero (
i.e. we pad zero values around the border of the image).

@@ -161,12 +161,12 @@ Morphological operators are non-linear filters used in image processing.
](https://en.wikipedia.org/wiki/Dilation_(morphology))
is the max-sum counterpart of standard sum-product convolution:

-output[b, y, x, c] =
+$$ output[b, y, x, c] =
max_{dy, dx} input[b,
strides[1] * y + rates[1] * dy,
strides[2] * x + rates[2] * dx,
c] +
-filter[dy, dx, c]
+filter[dy, dx, c]$$

The `filter` is usually called structuring function. Max-pooling is a special
case of greyscale morphological dilation when the filter assumes all-zero

@@ -176,12 +176,12 @@ values (a.k.a. flat structuring function).
](https://en.wikipedia.org/wiki/Erosion_(morphology))
is the min-sum counterpart of standard sum-product convolution:

-output[b, y, x, c] =
+$$ output[b, y, x, c] =
min_{dy, dx} input[b,
strides[1] * y - rates[1] * dy,
strides[2] * x - rates[2] * dx,
c] -
-filter[dy, dx, c]
+filter[dy, dx, c]$$

Dilation and erosion are dual to each other. The dilation of the input signal
`f` by the structuring signal `g` is equal to the negation of the erosion of
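
The 'VALID' output-size rule quoted in the first hunk can be checked with a few lines of plain Python (the helper name is ours, for illustration only):

```python
import math

def valid_output_size(in_height, in_width, filter_height, filter_width, strides):
    # Mirrors the formulas above: no padding, so the filter must fit entirely
    # inside the input; strides[1] and strides[2] are the spatial strides.
    out_height = math.ceil(float(in_height - filter_height + 1) / float(strides[1]))
    out_width = math.ceil(float(in_width - filter_width + 1) / float(strides[2]))
    return out_height, out_width

print(valid_output_size(28, 28, 5, 5, [1, 2, 2, 1]))  # -> (12, 12)
```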

api_guides/python/state_ops.md

Lines changed: 2 additions & 0 deletions
@@ -83,6 +83,8 @@ automatically by the optimizers in most cases.
* @{tf.scatter_sub}
* @{tf.scatter_mul}
* @{tf.scatter_div}
+* @{tf.scatter_min}
+* @{tf.scatter_max}
* @{tf.scatter_nd_update}
* @{tf.scatter_nd_add}
* @{tf.scatter_nd_sub}
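
A small sketch of the scatter family this list documents, including the newly added `tf.scatter_min` (the call assumes the usual `(ref, indices, updates)` convention of the other scatter ops; treat it as illustrative):

```python
import tensorflow as tf

v = tf.Variable([1.0, 5.0, 3.0])
# Element-wise minimum of the selected rows and the updates:
# index 0: min(1.0, 4.0) = 1.0, index 2: min(3.0, 2.0) = 2.0.
updated = tf.scatter_min(v, [0, 2], [4.0, 2.0])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(updated))  # -> [1.0, 5.0, 2.0]
```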

community/contributing.md

Lines changed: 49 additions & 0 deletions
@@ -0,0 +1,49 @@
+# Contributing to TensorFlow
+
+TensorFlow is an open-source project, and we welcome your participation
+and contribution. This page describes how to get involved.
+
+## Repositories
+
+The code for TensorFlow is hosted in the [TensorFlow GitHub
+organization](https://github.com/tensorflow). Multiple projects are located
+inside the organization, including:
+
+* [TensorFlow](https://github.com/tensorflow/tensorflow)
+* [Models](https://github.com/tensorflow/models)
+* [TensorBoard](https://github.com/tensorflow/tensorboard)
+* [TensorFlow.js](https://github.com/tensorflow/tfjs)
+* [TensorFlow Serving](https://github.com/tensorflow/serving)
+* [TensorFlow Documentation](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/docs_src)
+
+## Contributor checklist
+
+* Before contributing to TensorFlow source code, please review the [contribution
+guidelines](https://github.com/tensorflow/tensorflow/blob/master/CONTRIBUTING.md).
+
+* Join the
+[[email protected]](https://groups.google.com/a/tensorflow.org/d/forum/developers)
+mailing list, to coordinate and discuss with others contributing to TensorFlow.
+
+* For coding style conventions, read the @{$style_guide$TensorFlow Style Guide}.
+
+* Finally, review @{$documentation$Writing TensorFlow Documentation}, which
+explains documentation conventions.
+
+You may also wish to review our guide to @{$benchmarks$defining and running benchmarks}.
+
+## Special Interest Groups
+
+To enable focused collaboration on particular areas of TensorFlow, we host
+Special Interest Groups (SIGs). SIGs do their work in public: if you want to
+join and contribute, review the work of the group, and get in touch with the
+relevant SIG leader. Membership policies vary on a per-SIG basis.
+
+* **SIG Build** focuses on issues surrounding building, packaging, and
+distribution of TensorFlow. [Mailing list](https://groups.google.com/a/tensorflow.org/d/forum/build).
+
+* **SIG TensorBoard** furthers the development and direction of TensorBoard and its plugins.
+[Mailing list](https://groups.google.com/a/tensorflow.org/d/forum/sig-tensorboard).
+
+* **SIG Rust** collaborates on the development of TensorFlow's Rust bindings.
+[Mailing list](https://groups.google.com/a/tensorflow.org/d/forum/rust).

community/documentation.md

Lines changed: 21 additions & 35 deletions
@@ -148,30 +148,18 @@ viewing. Do not include url parameters in the source code URL.
Before building the documentation, you must first set up your environment by
doing the following:

-1. If pip isn't installed on your machine, install it now by issuing the
-following command:
-
-$ sudo easy_install pip
-
-2. Use pip to install codegen, mock, and pandas by issuing the following
-command (Note: If you are using
-a [virtualenv](https://virtualenv.pypa.io/en/stable/) to manage your
-dependencies, you may not want to use sudo for these installations):
-
-$ sudo pip install codegen mock pandas
-
-3. If bazel is not installed on your machine, install it now. If you are on
+1. If bazel is not installed on your machine, install it now. If you are on
Linux, install bazel by issuing the following command:

$ sudo apt-get install bazel # Linux

If you are on Mac OS, find bazel installation instructions on
[this page](https://bazel.build/versions/master/docs/install.html#mac-os-x).

-4. Change directory to the top-level `tensorflow` directory of the TensorFlow
+2. Change directory to the top-level `tensorflow` directory of the TensorFlow
source code.

-5. Run the `configure` script and answer its prompts appropriately for your
+3. Run the `configure` script and answer its prompts appropriately for your
system.

$ ./configure

@@ -477,31 +465,29 @@ should use Markdown in the docstring.

Here's a simple example:

-```python
-def foo(x, y, name="bar"):
-"""Computes foo.
+def foo(x, y, name="bar"):
+"""Computes foo.

-Given two 1-D tensors `x` and `y`, this operation computes the foo.
+Given two 1-D tensors `x` and `y`, this operation computes the foo.

-Example:
+Example:

-```
-# x is [1, 1]
-# y is [2, 2]
-tf.foo(x, y) ==> [3, 3]
-```
-Args:
-x: A `Tensor` of type `int32`.
-y: A `Tensor` of type `int32`.
-name: A name for the operation (optional).
+```
+# x is [1, 1]
+# y is [2, 2]
+tf.foo(x, y) ==> [3, 3]
+```
+Args:
+x: A `Tensor` of type `int32`.
+y: A `Tensor` of type `int32`.
+name: A name for the operation (optional).

-Returns:
-A `Tensor` of type `int32` that is the foo of `x` and `y`.
+Returns:
+A `Tensor` of type `int32` that is the foo of `x` and `y`.

-Raises:
-ValueError: If `x` or `y` are not of type `int32`.
-"""
-```
+Raises:
+ValueError: If `x` or `y` are not of type `int32`.
+"""

## Description of the docstring sections

community/groups.md

Lines changed: 17 additions & 0 deletions
@@ -0,0 +1,17 @@
+# User Groups
+
+TensorFlow has communities around the world.
+
+## Asia
+
+* [TensorFlow Korea (TF-KR) User Group](https://www.facebook.com/groups/TensorFlowKR/) _(Korean language)_
+* [TensorFlow User Group Tokyo](https://tfug-tokyo.connpass.com/) _(Japanese Language)_
+* [Soleil Data Dojo](https://soleildatadojo.connpass.com/) _(Japanese language)_
+* [TensorFlow User Group Utsunomiya](https://tfug-utsunomiya.connpass.com/)
+
+
+## Europe
+
+* [TensorFlow Barcelona](https://www.meetup.com/Barcelona-Machine-Learning-Meetup/)
+* [TensorFlow Madrid](https://www.meetup.com/TensorFlow-Madrid/)

0 commit comments
