
update v1.7 #85


Merged
merged 1 commit into from Jul 13, 2018
6 changes: 3 additions & 3 deletions about/uses.md
@@ -18,9 +18,9 @@ This section describes some of the current uses of the TensorFlow system.

> If you are using TensorFlow for research, for education, or for production
> usage in some product, we would love to add something about your usage here.
> Please feel free to email us a brief description of how you're using
> TensorFlow, or even better, send us a pull request to add an entry to this
> file.
> Please feel free to [email us](mailto:[email protected]) a brief
> description of how you're using TensorFlow, or even better, send us a
> pull request to add an entry to this file.

* **Deep Speech**
<ul>
36 changes: 18 additions & 18 deletions api_guides/python/contrib.bayesflow.monte_carlo.md
@@ -6,42 +6,42 @@ Monte Carlo integration and helpers.
## Background

Monte Carlo integration refers to the practice of estimating an expectation with
a sample mean. For example, given random variable `Z in R^k` with density `p`,
a sample mean. For example, given random variable `Z in \\(R^k\\)` with density `p`,
the expectation of function `f` can be approximated like:

```
E_p[f(Z)] = \int f(z) p(z) dz
~ S_n
:= n^{-1} \sum_{i=1}^n f(z_i), z_i iid samples from p.
$$E_p[f(Z)] = \int f(z) p(z) dz$$
$$ ~ S_n
:= n^{-1} \sum_{i=1}^n f(z_i), z_i\ iid\ samples\ from\ p.$$
```

If `E_p[|f(Z)|] < infinity`, then `S_n --> E_p[f(Z)]` by the strong law of large
numbers. If `E_p[f(Z)^2] < infinity`, then `S_n` is asymptotically normal with
variance `Var[f(Z)] / n`.
If `\\(E_p[|f(Z)|] < infinity\\)`, then `\\(S_n\\) --> \\(E_p[f(Z)]\\)` by the strong law of large
numbers. If `\\(E_p[f(Z)^2] < infinity\\)`, then `\\(S_n\\)` is asymptotically normal with
variance `\\(Var[f(Z)] / n\\)`.
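As a concrete illustration of the sample-mean estimate above, here is a small NumPy sketch (not part of the module's API; `f(z) = z^2` and `p = N(0, 1)` are assumptions chosen so that the true expectation is known):

```python
import numpy as np

# Check S_n -> E_p[f(Z)]: with p = N(0, 1) and f(z) = z^2,
# the true expectation is E_p[f(Z)] = Var[Z] = 1.
rng = np.random.default_rng(0)
n = 200_000
z = rng.standard_normal(n)   # z_i i.i.d. samples from p
s_n = np.mean(z ** 2)        # S_n = n^{-1} sum_i f(z_i)
print(abs(s_n - 1.0) < 0.05) # close to the true value for large n
```

The asymptotic standard deviation of `S_n` here is `sqrt(Var[f(Z)] / n) = sqrt(2 / n)`, so the estimate tightens as `n` grows.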

Practitioners of Bayesian statistics often find themselves wanting to estimate
`E_p[f(Z)]` when the distribution `p` is known only up to a constant. For
`\\(E_p[f(Z)]\\)` when the distribution `p` is known only up to a constant. For
example, the joint distribution `p(z, x)` may be known, but the evidence
`p(x) = \int p(z, x) dz` may be intractable. In that case, a parameterized
distribution family `q_lambda(z)` may be chosen, and the optimal `lambda` is the
one minimizing the KL divergence between `q_lambda(z)` and
`p(z | x)`. We only know `p(z, x)`, but that is sufficient to find `lambda`.
`\\(p(x) = \int p(z, x) dz\\)` may be intractable. In that case, a parameterized
distribution family `\\(q_\lambda(z)\\)` may be chosen, and the optimal `\\(\lambda\\)` is the
one minimizing the KL divergence between `\\(q_\lambda(z)\\)` and
`\\(p(z | x)\\)`. We only know `p(z, x)`, but that is sufficient to find `\\(\lambda\\)`.
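A minimal sketch of why knowing `p(z, x)` suffices (the densities below are hypothetical stand-ins, not part of this module): since the KL divergence satisfies \\(KL(q_\lambda \| p(z|x)) = E_q[\log q_\lambda(Z) - \log p(Z, x)] + \log p(x)\\), and \\(\log p(x)\\) does not depend on \\(\lambda\\), it is enough to minimize a sample estimate of the first term:

```python
import numpy as np

def surrogate(lmbda, rng, n=50_000):
    # E_q[log q_lambda(Z) - log p(Z, x)], estimated with a sample mean.
    # Illustrative choices: q_lambda = N(lmbda, 1); log p(z, x) = -z^2/2 + const.
    z = rng.standard_normal(n) + lmbda   # samples from q_lambda
    log_q = -0.5 * (z - lmbda) ** 2      # log q_lambda(z), up to a constant
    log_joint = -0.5 * z ** 2            # log p(z, x), up to a constant
    return np.mean(log_q - log_joint)    # the shared constants cancel

rng = np.random.default_rng(0)
best = min([0.0, 0.5, 1.0, 2.0], key=lambda l: surrogate(l, rng))
print(best)  # minimized near 0, where q_lambda matches p(z | x)
```

Here the intractable \\(\log p(x)\\) never appears, yet the surrogate still identifies the best \\(\lambda\\).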


## Log-space evaluation and subtracting the maximum

Care must be taken when the random variable lives in a high dimensional space.
For example, the naive importance sample estimate `E_q[f(Z) p(Z) / q(Z)]`
involves the ratio of two terms `p(Z) / q(Z)`, each of which must have tails
dropping off faster than `O(|z|^{-(k + 1)})` in order to have finite integral.
For example, the naive importance sample estimate `\\(E_q[f(Z) p(Z) / q(Z)]\\)`
involves the ratio of two terms `\\(p(Z) / q(Z)\\)`, each of which must have tails
dropping off faster than `\\(O(|z|^{-(k + 1)})\\)` in order to have finite integral.
This ratio would often be zero or infinity up to numerical precision.

For that reason, we write

```
Log E_q[ f(Z) p(Z) / q(Z) ]
= Log E_q[ exp{Log[f(Z)] + Log[p(Z)] - Log[q(Z)] - C} ] + C, where
C := Max[ Log[f(Z)] + Log[p(Z)] - Log[q(Z)] ].
$$Log E_q[ f(Z) p(Z) / q(Z) ]$$
$$ = Log E_q[ \exp\{Log[f(Z)] + Log[p(Z)] - Log[q(Z)] - C\} ] + C,$$ where
$$C := Max[ Log[f(Z)] + Log[p(Z)] - Log[q(Z)] ].$$
```
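The identity above is the familiar log-sum-exp stabilization; a small self-contained sketch (the helper name is illustrative, not the module's API):

```python
import numpy as np

def log_mean_exp(log_values):
    # Log E[exp(log_values)] computed as Log E[exp(log_values - C)] + C,
    # with C = max(log_values), so the largest exponentiated term is exp(0) = 1.
    c = np.max(log_values)
    return np.log(np.mean(np.exp(log_values - c))) + c

# Naively, exp(-1000) underflows to 0 and the Log of the mean would be -inf;
# subtracting the maximum keeps every step finite.
log_w = np.array([-1000.0, -1000.0, -1001.0])
print(log_mean_exp(log_w))
```

Here `log_w` plays the role of `Log[f(Z)] + Log[p(Z)] - Log[q(Z)]` evaluated at the samples.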

The maximum value of the exponentiated term will be 0.0, and the expectation
1 change: 0 additions & 1 deletion api_guides/python/contrib.distributions.bijectors.md
@@ -28,6 +28,5 @@ To apply a `Bijector`, use `distributions.TransformedDistribution`.
* @{tf.contrib.distributions.bijectors.Inline}
* @{tf.contrib.distributions.bijectors.Invert}
* @{tf.contrib.distributions.bijectors.PowerTransform}
* @{tf.contrib.distributions.bijectors.SigmoidCentered}
* @{tf.contrib.distributions.bijectors.SoftmaxCentered}
* @{tf.contrib.distributions.bijectors.Softplus}
18 changes: 9 additions & 9 deletions api_guides/python/contrib.graph_editor.md
@@ -61,21 +61,21 @@ A subgraph can be created in several ways:

* using a list of ops:

```python
my_sgv = ge.sgv(ops)
```
```python
my_sgv = ge.sgv(ops)
```

* from a name scope:

```python
my_sgv = ge.sgv_scope("foo/bar", graph=tf.get_default_graph())
```
```python
my_sgv = ge.sgv_scope("foo/bar", graph=tf.get_default_graph())
```

* using regular expression:

```python
my_sgv = ge.sgv("foo/.*/.*read$", graph=tf.get_default_graph())
```
```python
my_sgv = ge.sgv("foo/.*/.*read$", graph=tf.get_default_graph())
```

Note that the Graph Editor is meant to manipulate several graphs at the same
time, typically during transform or copy operation. For that reason,
28 changes: 14 additions & 14 deletions api_guides/python/contrib.losses.md
@@ -107,19 +107,19 @@ weighted average over the individual prediction errors:
loss = tf.contrib.losses.mean_squared_error(predictions, depths, weight)
```

@{tf.contrib.losses.absolute_difference}
@{tf.contrib.losses.add_loss}
@{tf.contrib.losses.hinge_loss}
@{tf.contrib.losses.compute_weighted_loss}
@{tf.contrib.losses.cosine_distance}
@{tf.contrib.losses.get_losses}
@{tf.contrib.losses.get_regularization_losses}
@{tf.contrib.losses.get_total_loss}
@{tf.contrib.losses.log_loss}
@{tf.contrib.losses.mean_pairwise_squared_error}
@{tf.contrib.losses.mean_squared_error}
@{tf.contrib.losses.sigmoid_cross_entropy}
@{tf.contrib.losses.softmax_cross_entropy}
@{tf.contrib.losses.sparse_softmax_cross_entropy}
* @{tf.contrib.losses.absolute_difference}
* @{tf.contrib.losses.add_loss}
* @{tf.contrib.losses.hinge_loss}
* @{tf.contrib.losses.compute_weighted_loss}
* @{tf.contrib.losses.cosine_distance}
* @{tf.contrib.losses.get_losses}
* @{tf.contrib.losses.get_regularization_losses}
* @{tf.contrib.losses.get_total_loss}
* @{tf.contrib.losses.log_loss}
* @{tf.contrib.losses.mean_pairwise_squared_error}
* @{tf.contrib.losses.mean_squared_error}
* @{tf.contrib.losses.sigmoid_cross_entropy}
* @{tf.contrib.losses.softmax_cross_entropy}
* @{tf.contrib.losses.sparse_softmax_cross_entropy}


4 changes: 2 additions & 2 deletions api_guides/python/io_ops.md
@@ -8,7 +8,7 @@ Note: Functions taking `Tensor` arguments can also take anything accepted by
## Placeholders

TensorFlow provides a placeholder operation that must be fed with data
on execution. For more info, see the section on @{$reading_data#feeding$Feeding data}.
on execution. For more info, see the section on @{$reading_data#Feeding$Feeding data}.

* @{tf.placeholder}
* @{tf.placeholder_with_default}
@@ -42,7 +42,7 @@ formats into tensors.

### Example protocol buffer

TensorFlow's @{$reading_data#standard-tensorflow-format$recommended format for training examples}
TensorFlow's @{$reading_data#standard_tensorflow_format$recommended format for training examples}
is serialized `Example` protocol buffers, [described
here](https://www.tensorflow.org/code/tensorflow/core/example/example.proto).
They contain `Features`, [described
18 changes: 9 additions & 9 deletions api_guides/python/nn.md
@@ -89,7 +89,7 @@ bottom. Note that this is different from existing libraries such as cuDNN and
Caffe, which explicitly specify the number of padded pixels and always pad the
same number of pixels on both sides.

For the `'VALID`' scheme, the output height and width are computed as:
For the `'VALID'` scheme, the output height and width are computed as:

out_height = ceil(float(in_height - filter_height + 1) / float(strides[1]))
out_width = ceil(float(in_width - filter_width + 1) / float(strides[2]))
@@ -98,10 +98,10 @@ and no padding is used.
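The `'VALID'` output-size arithmetic above can be sketched as a standalone helper (illustrative only, not part of the API):

```python
import math

def valid_out_size(in_size, filter_size, stride):
    # out = ceil((in - filter + 1) / stride), per the 'VALID' scheme above
    return math.ceil(float(in_size - filter_size + 1) / float(stride))

# e.g. a length-10 input with a 3-wide filter and stride 2: ceil(8 / 2) = 4
print(valid_out_size(10, 3, 2))  # → 4
```

The same helper applies independently to the height axis (with `strides[1]`) and the width axis (with `strides[2]`).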

Given the output size and the padding, the output can be computed as

output[b, i, j, :] =
sum_{di, dj} input[b, strides[1] * i + di - pad_top,
strides[2] * j + dj - pad_left, ...] *
filter[di, dj, ...]
$$ output[b, i, j, :] =
sum_{d_i, d_j} input[b, strides[1] * i + d_i - pad_{top},\
strides[2] * j + d_j - pad_{left}, ...] *
filter[d_i, d_j,\ ...]$$

where any values outside the original input image region are considered zero
(i.e. we pad zero values around the border of the image).
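The output formula above can be sketched directly as a naive 2-D loop (illustrative only: a single channel, no batch dimension, and no padding offsets):

```python
import numpy as np

def conv2d_valid(inp, filt, strides=(1, 1)):
    # output[i, j] = sum_{di, dj} inp[s1*i + di, s2*j + dj] * filt[di, dj]
    fh, fw = filt.shape
    s1, s2 = strides
    oh = (inp.shape[0] - fh) // s1 + 1
    ow = (inp.shape[1] - fw) // s2 + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(inp[s1 * i:s1 * i + fh, s2 * j:s2 * j + fw] * filt)
    return out

print(conv2d_valid(np.ones((3, 3)), np.ones((2, 2))))  # each entry sums 4 ones
```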
@@ -161,12 +161,12 @@ Morphological operators are non-linear filters used in image processing.
](https://en.wikipedia.org/wiki/Dilation_(morphology))
is the max-sum counterpart of standard sum-product convolution:

output[b, y, x, c] =
$$ output[b, y, x, c] =
max_{dy, dx} input[b,
strides[1] * y + rates[1] * dy,
strides[2] * x + rates[2] * dx,
c] +
filter[dy, dx, c]
filter[dy, dx, c]$$

The `filter` is usually called structuring function. Max-pooling is a special
case of greyscale morphological dilation when the filter assumes all-zero
@@ -176,12 +176,12 @@ values (a.k.a. flat structuring function).
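The max-sum formula can be sketched in one dimension (an illustrative helper, not `tf.nn.dilation2d` itself); with an all-zero filter it reduces to max-pooling, as noted above:

```python
import numpy as np

def grayscale_dilation_1d(signal, filt, stride=1, rate=1):
    # output[x] = max_dx signal[stride*x + rate*dx] + filt[dx]   ('VALID'-style)
    k = len(filt)
    span = (k - 1) * rate + 1
    out_len = (len(signal) - span) // stride + 1
    return np.array([
        max(signal[stride * x + rate * dx] + filt[dx] for dx in range(k))
        for x in range(out_len)
    ])

sig = np.array([1.0, 3.0, 2.0, 5.0, 4.0])
print(grayscale_dilation_1d(sig, np.zeros(2)))  # flat filter => 2-wide max-pool
```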
](https://en.wikipedia.org/wiki/Erosion_(morphology))
is the min-sum counterpart of standard sum-product convolution:

output[b, y, x, c] =
$$ output[b, y, x, c] =
min_{dy, dx} input[b,
strides[1] * y - rates[1] * dy,
strides[2] * x - rates[2] * dx,
c] -
filter[dy, dx, c]
filter[dy, dx, c]$$

Dilation and erosion are dual to each other. The dilation of the input signal
`f` by the structuring signal `g` is equal to the negation of the erosion of
2 changes: 2 additions & 0 deletions api_guides/python/state_ops.md
@@ -83,6 +83,8 @@ automatically by the optimizers in most cases.
* @{tf.scatter_sub}
* @{tf.scatter_mul}
* @{tf.scatter_div}
* @{tf.scatter_min}
* @{tf.scatter_max}
* @{tf.scatter_nd_update}
* @{tf.scatter_nd_add}
* @{tf.scatter_nd_sub}
49 changes: 49 additions & 0 deletions community/contributing.md
@@ -0,0 +1,49 @@
# Contributing to TensorFlow

TensorFlow is an open-source project, and we welcome your participation
and contribution. This page describes how to get involved.

## Repositories

The code for TensorFlow is hosted in the [TensorFlow GitHub
organization](https://github.com/tensorflow). Multiple projects are located
inside the organization, including:

* [TensorFlow](https://github.com/tensorflow/tensorflow)
* [Models](https://github.com/tensorflow/models)
* [TensorBoard](https://github.com/tensorflow/tensorboard)
* [TensorFlow.js](https://github.com/tensorflow/tfjs)
* [TensorFlow Serving](https://github.com/tensorflow/serving)
* [TensorFlow Documentation](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/docs_src)

## Contributor checklist

* Before contributing to TensorFlow source code, please review the [contribution
guidelines](https://github.com/tensorflow/tensorflow/blob/master/CONTRIBUTING.md).

* Join the
[[email protected]](https://groups.google.com/a/tensorflow.org/d/forum/developers)
mailing list, to coordinate and discuss with others contributing to TensorFlow.

* For coding style conventions, read the @{$style_guide$TensorFlow Style Guide}.

* Finally, review @{$documentation$Writing TensorFlow Documentation}, which
explains documentation conventions.

You may also wish to review our guide to @{$benchmarks$defining and running benchmarks}.

## Special Interest Groups

To enable focused collaboration on particular areas of TensorFlow, we host
Special Interest Groups (SIGs). SIGs do their work in public: if you want to
join and contribute, review the work of the group, and get in touch with the
relevant SIG leader. Membership policies vary on a per-SIG basis.

* **SIG Build** focuses on issues surrounding building, packaging, and
distribution of TensorFlow. [Mailing list](https://groups.google.com/a/tensorflow.org/d/forum/build).

* **SIG TensorBoard** furthers the development and direction of TensorBoard and its plugins.
[Mailing list](https://groups.google.com/a/tensorflow.org/d/forum/sig-tensorboard).

* **SIG Rust** collaborates on the development of TensorFlow's Rust bindings.
[Mailing list](https://groups.google.com/a/tensorflow.org/d/forum/rust).
56 changes: 21 additions & 35 deletions community/documentation.md
@@ -148,30 +148,18 @@ viewing. Do not include url parameters in the source code URL.
Before building the documentation, you must first set up your environment by
doing the following:

1. If pip isn't installed on your machine, install it now by issuing the
following command:

$ sudo easy_install pip

2. Use pip to install codegen, mock, and pandas by issuing the following
command (Note: If you are using
a [virtualenv](https://virtualenv.pypa.io/en/stable/) to manage your
dependencies, you may not want to use sudo for these installations):

$ sudo pip install codegen mock pandas

3. If bazel is not installed on your machine, install it now. If you are on
1. If bazel is not installed on your machine, install it now. If you are on
Linux, install bazel by issuing the following command:

$ sudo apt-get install bazel # Linux

If you are on Mac OS, find bazel installation instructions on
[this page](https://bazel.build/versions/master/docs/install.html#mac-os-x).

4. Change directory to the top-level `tensorflow` directory of the TensorFlow
2. Change directory to the top-level `tensorflow` directory of the TensorFlow
source code.

5. Run the `configure` script and answer its prompts appropriately for your
3. Run the `configure` script and answer its prompts appropriately for your
system.

$ ./configure
@@ -477,31 +465,29 @@ should use Markdown in the docstring.

Here's a simple example:

```python
def foo(x, y, name="bar"):
"""Computes foo.
def foo(x, y, name="bar"):
"""Computes foo.

Given two 1-D tensors `x` and `y`, this operation computes the foo.
Given two 1-D tensors `x` and `y`, this operation computes the foo.

Example:
Example:

```
# x is [1, 1]
# y is [2, 2]
tf.foo(x, y) ==> [3, 3]
```
Args:
x: A `Tensor` of type `int32`.
y: A `Tensor` of type `int32`.
name: A name for the operation (optional).
```
# x is [1, 1]
# y is [2, 2]
tf.foo(x, y) ==> [3, 3]
```
Args:
x: A `Tensor` of type `int32`.
y: A `Tensor` of type `int32`.
name: A name for the operation (optional).

Returns:
A `Tensor` of type `int32` that is the foo of `x` and `y`.
Returns:
A `Tensor` of type `int32` that is the foo of `x` and `y`.

Raises:
ValueError: If `x` or `y` are not of type `int32`.
"""
```
Raises:
ValueError: If `x` or `y` are not of type `int32`.
"""

## Description of the docstring sections

17 changes: 17 additions & 0 deletions community/groups.md
@@ -0,0 +1,17 @@
# User Groups

TensorFlow has communities around the world.

## Asia

* [TensorFlow Korea (TF-KR) User Group](https://www.facebook.com/groups/TensorFlowKR/) _(Korean language)_
* [TensorFlow User Group Tokyo](https://tfug-tokyo.connpass.com/) _(Japanese Language)_
* [Soleil Data Dojo](https://soleildatadojo.connpass.com/) _(Japanese language)_
* [TensorFlow User Group Utsunomiya](https://tfug-utsunomiya.connpass.com/)


## Europe

* [TensorFlow Barcelona](https://www.meetup.com/Barcelona-Machine-Learning-Meetup/)
* [TensorFlow Madrid](https://www.meetup.com/TensorFlow-Madrid/)
