155 | 155 | "source": [
156 | 156 | "### Type promotion\n",
157 | 157 | "\n",
158 |     | - "TensorFlow NumPy APIs have well-defined semantics for converting literals to ND array, as well as for performing type promotion on ND array inputs. Please see [`np.result_type`](https://numpy.org/doc/stable/reference/generated/numpy.result_type.html) for more details. . When converting literals to ND array, NumPy prefers wide types like `tnp.int64` and `tnp.float64`.\n",
    | 158 | + "TensorFlow NumPy APIs have well-defined semantics for converting literals to ND array, as well as for performing type promotion on ND array inputs. Please see [`np.result_type`](https://numpy.org/doc/1.16/reference/generated/numpy.result_type.html) for more details. When converting literals to ND array, NumPy prefers wide types like `tnp.int64` and `tnp.float64`.\n",
159 | 159 | "\n",
160 | 160 | "In contrast, `tf.convert_to_tensor` prefers `tf.int32` and `tf.float32` types for converting constants to `tf.Tensor`. TensorFlow APIs leave `tf.Tensor` inputs unchanged and do not perform type promotion on them.\n",
161 | 161 | "\n",
162 | 162 | "In the next example, you will perform type promotion. First, run addition on ND array inputs of different types and note the output types. None of these type promotions would be allowed on straight `tf.Tensor` objects. Finally,\n",
163 |     | - "convert literals to ND array using `ndarray.asarrray` and note the resulting type."
    | 163 | + "convert literals to ND array using `ndarray.asarray` and note the resulting type."
164 | 164 | ]
165 | 165 | },
166 | 166 | {

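As a quick sanity check on the promotion rules this hunk describes: the prose says tnp follows `np.result_type`, so the same behavior can be sketched in plain NumPy (running it under `tnp` itself would require TensorFlow, so plain `np` is assumed here):

```python
import numpy as np

# NumPy's promotion rules, which tnp follows, widen to the safest common type.
print(np.result_type(np.int32, np.int64))    # int64
print(np.result_type(np.int32, np.float32))  # float64 (float32 can't hold all int32 values)

# Literals prefer wide types; note the integer default is platform-dependent.
print(np.asarray(1).dtype)   # typically int64 on 64-bit Linux/macOS
print(np.asarray(1.).dtype)  # float64
```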
175 | 175 | "values = [tnp.asarray(1, dtype=d) for d in\n",
176 | 176 | "          (tnp.int32, tnp.int64, tnp.float32, tnp.float64)]\n",
177 | 177 | "for i, v1 in enumerate(values):\n",
178 |     | - "  for v2 in values[i+1:]:\n",
    | 178 | + "  for v2 in values[i + 1:]:\n",
179 | 179 | "    print(\"%s + %s => %s\" % (v1.dtype, v2.dtype, (v1 + v2).dtype))\n",
180 | 180 | "\n",
181 | 181 | "print(\"Type inference during array creation\")\n",
182 | 182 | "print(\"tnp.asarray(1).dtype == tnp.%s\" % tnp.asarray(1).dtype)\n",
183 |     | - "print(\"tnp.asarray(1.).dtype == tnp.%s\\n\" % tnp.asarray(1.).dtype)\n"
    | 183 | + "print(\"tnp.asarray(1.).dtype == tnp.%s\\n\" % tnp.asarray(1.).dtype)"
184 | 184 | ]
185 | 185 | },
186 | 186 | {

192 | 192 | "### Broadcasting\n",
193 | 193 | "\n",
194 | 194 | "Similar to TensorFlow, NumPy defines rich semantics for \"broadcasting\" values.\n",
195 |     | - "You can check out the [NumPy broadcasting guide](https://numpy.org/doc/stable/user/basics.broadcasting.html) for more information and compare this with [TensorFlow broadcasting semantics](https://www.tensorflow.org/guide/tensor#broadcasting)."
    | 195 | + "You can check out the [NumPy broadcasting guide](https://numpy.org/doc/1.16/user/basics.broadcasting.html) for more information and compare this with [TensorFlow broadcasting semantics](https://www.tensorflow.org/guide/tensor#broadcasting)."
196 | 196 | ]
197 | 197 | },
198 | 198 | {

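The broadcasting semantics this hunk links to can be illustrated in a few lines of plain NumPy (a sketch, not taken from the notebook):

```python
import numpy as np

x = np.ones((2, 3), dtype=np.float32)  # shape (2, 3)
y = np.arange(3, dtype=np.float32)     # shape (3,) stretches across the rows of x
z = x + y

print(z.shape)  # (2, 3)
print(z[0])     # [1. 2. 3.]
```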
218 | 218 | "source": [
219 | 219 | "### Indexing\n",
220 | 220 | "\n",
221 |     | - "NumPy defines very sophisticated indexing rules. See the [NumPy Indexing guide](https://numpy.org/doc/stable/reference/arrays.indexing.html). Note the use of ND arrays as indices below."
    | 221 | + "NumPy defines very sophisticated indexing rules. See the [NumPy Indexing guide](https://numpy.org/doc/1.16/reference/arrays.indexing.html). Note the use of ND arrays as indices below."
222 | 222 | ]
223 | 223 | },
224 | 224 | {

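The indexing hunk points at NumPy's rules, including the use of arrays as indices; the main varieties can be sketched in plain NumPy:

```python
import numpy as np

x = np.arange(10) ** 2          # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]

# Integer-array indexing (what the notebook means by "ND arrays as indices").
print(x[np.array([1, 3, 5])])   # [ 1  9 25]

# Boolean-mask indexing.
print(x[x > 20])                # [25 36 49 64 81]

# Basic slicing.
print(x[2:8:2])                 # [ 4 16 36]
```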
324 | 324 | "\n",
325 | 325 | "Similarly, TensorFlow NumPy functions can accept inputs of different types including `tf.Tensor` and `np.ndarray`. These inputs are converted to an ND array by calling `ndarray.asarray` on them.\n",
326 | 326 | "\n",
327 |     | - "Conversion of the ND array to and from `np.ndarray` may trigger actual data copies. Please see the section on [buffer copies](#Buffer-copies) for more details."
    | 327 | + "Conversion of the ND array to and from `np.ndarray` may trigger actual data copies. Please see the section on [buffer copies](#buffer-copies) for more details."
328 | 328 | ]
329 | 329 | },
330 | 330 | {

367 | 367 | "\n",
368 | 368 | "Intermixing TensorFlow NumPy with NumPy code may trigger data copies. This is because TensorFlow NumPy has stricter requirements on memory alignment than those of NumPy.\n",
369 | 369 | "\n",
370 |     | - "When a `np.ndarray` is passed to TensorFlow Numpy, it will check for alignment requirements and trigger a copy if needed. When passing an ND array CPU buffer to NumPy, generally the buffer will satisfy alignment requirements and NumPy will not need to create a copy.\n",
    | 370 | + "When a `np.ndarray` is passed to TensorFlow NumPy, it will check for alignment requirements and trigger a copy if needed. When passing an ND array CPU buffer to NumPy, generally the buffer will satisfy alignment requirements and NumPy will not need to create a copy.\n",
371 | 371 | "\n",
372 | 372 | "ND arrays can refer to buffers placed on devices other than the local CPU memory. In such cases, invoking a NumPy function will trigger copies across the network or device as needed.\n",
373 | 373 | "\n",
374 |     | - "Given this, intermixing with NumPy API calls should generally be done with caution and the user should watch out for overheads of copying data. Interleaving TensorFlow NumPy calls with TensorFlow calls is generally safe and avoids copying data. See the section on [tensorflow interoperability](#Tensorflow-interoperability) for more details."
    | 374 | + "Given this, intermixing with NumPy API calls should generally be done with caution and the user should watch out for overheads of copying data. Interleaving TensorFlow NumPy calls with TensorFlow calls is generally safe and avoids copying data. See the section on [TensorFlow interoperability](#tensorflow-interoperability) for more details."
375 | 375 | ]
376 | 376 | },
377 | 377 | {

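The buffer-copy discussion above turns on whether two objects share one buffer or each own a copy; plain NumPy makes that distinction observable with `np.shares_memory` (a sketch of the general view-vs-copy behavior; demonstrating tnp's stricter alignment checks themselves would require TensorFlow):

```python
import numpy as np

a = np.arange(6, dtype=np.float32)

view = a[::2]            # basic slicing returns a view: no data copy
copy = np.array(a)       # explicit construction: a fresh buffer

print(np.shares_memory(a, view))  # True
print(np.shares_memory(a, copy))  # False

# The raw buffer address is also inspectable, which is what alignment
# checks look at.
print(hex(a.ctypes.data))
```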
559 | 559 | "outputs": [],
560 | 560 | "source": [
561 | 561 | "# Computes a batch of jacobians. Each row is the jacobian of an element in the\n",
562 |     | - "# batch of outputs w.r.t the corresponding input batch element.\n",
    | 562 | + "# batch of outputs w.r.t. the corresponding input batch element.\n",
563 | 563 | "def prediction_batch_jacobian(inputs):\n",
564 | 564 | "  with tf.GradientTape() as tape:\n",
565 | 565 | "    tape.watch(inputs)\n",

581 | 581 | "source": [
582 | 582 | "### Trace compilation: tf.function\n",
583 | 583 | "\n",
584 |     | - "Tensorflow's `tf.function` works by \"trace compiling\" the code and then optimizing these traces for much faster performance. See the [Introduction to Graphs and Functions](./intro_to_graphs.ipynb).\n",
    | 584 | + "TensorFlow's `tf.function` works by \"trace compiling\" the code and then optimizing these traces for much faster performance. See the [Introduction to Graphs and Functions](./intro_to_graphs.ipynb).\n",
585 | 585 | "\n",
586 | 586 | "`tf.function` can be used to optimize TensorFlow NumPy code as well. Here is a simple example to demonstrate the speedups. Note that the body of `tf.function` code includes calls to TensorFlow NumPy APIs, and the inputs and output are ND arrays.\n"
587 | 587 | ]

598 | 598 | "print(\"Eager performance\")\n",
599 | 599 | "compute_gradients(model, inputs, labels)\n",
600 | 600 | "print(timeit.timeit(lambda: compute_gradients(model, inputs, labels),\n",
601 |     | - "                    number=10)* 100, \"ms\")\n",
    | 601 | + "                    number=10) * 100, \"ms\")\n",
602 | 602 | "\n",
603 | 603 | "print(\"\\ntf.function compiled performance\")\n",
604 | 604 | "compiled_compute_gradients = tf.function(compute_gradients)\n",

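The `number=10) * 100` expression in this hunk (and in later ones the commit normalizes to match) converts `timeit`'s total seconds for 10 runs into average milliseconds per run, since total/10 seconds per run equals total * 100 ms. A minimal check of that arithmetic, using a placeholder workload:

```python
import timeit

def work():
    # Stand-in for the notebook's compute_gradients call.
    return sum(range(1000))

total_seconds = timeit.timeit(work, number=10)  # wall time for 10 calls
per_call_ms = total_seconds / 10 * 1000         # identical to total_seconds * 100
print("%.4f ms per call" % per_call_ms)
```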
665 | 665 | "def unvectorized_per_example_gradients(inputs, labels):\n",
666 | 666 | "  def single_example_gradient(arg):\n",
667 | 667 | "    inp, label = arg\n",
668 |     | - "    output = compute_gradients(model,\n",
669 |     | - "                               tnp.expand_dims(inp, 0),\n",
670 |     | - "                               tnp.expand_dims(label, 0))\n",
671 |     | - "    return output\n",
    | 668 | + "    return compute_gradients(model,\n",
    | 669 | + "                             tnp.expand_dims(inp, 0),\n",
    | 670 | + "                             tnp.expand_dims(label, 0))\n",
672 | 671 | "\n",
673 | 672 | "  return tf.map_fn(single_example_gradient, (inputs, labels),\n",
674 | 673 | "                   fn_output_signature=(tf.float32, tf.float32, tf.float32))\n",
675 | 674 | "\n",
676 |     | - "print(\"Running vectorized computaton\")\n",
    | 675 | + "print(\"Running vectorized computation\")\n",
677 | 676 | "print(timeit.timeit(lambda: vectorized_per_example_gradients(inputs, labels),\n",
678 | 677 | "                    number=10) * 100, \"ms\")\n",
679 | 678 | "\n",
680 | 679 | "print(\"\\nRunning unvectorized computation\")\n",
681 | 680 | "per_example_gradients = unvectorized_per_example_gradients(inputs, labels)\n",
682 | 681 | "print(timeit.timeit(lambda: unvectorized_per_example_gradients(inputs, labels),\n",
683 |     | - "                    number=5) * 200, \"ms\")"
    | 682 | + "                    number=10) * 100, \"ms\")"
684 | 683 | ]
685 | 684 | },
686 | 685 | {

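This hunk contrasts a per-example `tf.map_fn` loop with a vectorized version. The same loop-vs-batch pattern can be sketched in plain NumPy, where `single_example` below is a hypothetical stand-in for the notebook's per-example gradient function, not its model:

```python
import numpy as np

inputs = np.random.randn(128, 8).astype(np.float32)

def single_example(x):
    # Hypothetical per-example computation.
    return np.tanh(x).sum()

# Unvectorized: a Python-level loop over the batch dimension.
looped = np.array([single_example(x) for x in inputs])

# Vectorized: a single call over the whole batch.
vectorized = np.tanh(inputs).sum(axis=1)

print(np.allclose(looped, vectorized))  # True
```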
791 | 790 | "\n",
792 | 791 | "However TensorFlow has higher overheads for dispatching operations compared to NumPy. For workloads composed of small operations (less than about 10 microseconds), these overheads can dominate the runtime and NumPy could provide better performance. For other cases, TensorFlow should generally provide better performance.\n",
793 | 792 | "\n",
794 |     | - "Run the benchmark below to compare NumPy and TensorFlow Numpy performance for different input sizes."
    | 793 | + "Run the benchmark below to compare NumPy and TensorFlow NumPy performance for different input sizes."
795 | 794 | ]
796 | 795 | },
797 | 796 | {

855 | 854 | "def compiled_tnp_sigmoid(y):\n",
856 | 855 | "  return tnp_sigmoid(y)\n",
857 | 856 | "\n",
858 |     | - "sizes = (2**0, 2 ** 5, 2 ** 10, 2 ** 15, 2 ** 20)\n",
    | 857 | + "sizes = (2 ** 0, 2 ** 5, 2 ** 10, 2 ** 15, 2 ** 20)\n",
859 | 858 | "np_inputs = [np.random.randn(size).astype(np.float32) for size in sizes]\n",
860 | 859 | "np_times = benchmark(np_sigmoid, np_inputs)\n",
861 | 860 | "\n",

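The benchmark hunk above times a NumPy sigmoid over power-of-two input sizes. A self-contained sketch of the same measurement (smaller sizes assumed, and without the notebook's `benchmark` helper or the tnp variants, which require TensorFlow):

```python
import timeit

import numpy as np

def np_sigmoid(y):
    return 1. / (1. + np.exp(-y))

sizes = (2 ** 0, 2 ** 5, 2 ** 10)
inputs = [np.random.randn(size).astype(np.float32) for size in sizes]

for x in inputs:
    # 100 calls total; t * 10 converts total seconds to ms per call.
    t = timeit.timeit(lambda: np_sigmoid(x), number=100)
    print("size %d: %.4f ms/call" % (x.size, t * 10))
```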
885 | 884 | "- [TensorFlow NumPy: Distributed Image Classification Tutorial](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/numpy_ops/g3doc/TensorFlow_Numpy_Distributed_Image_Classification.ipynb)\n",
886 | 885 | "- [TensorFlow NumPy: Keras and Distribution Strategy](\n",
887 | 886 | "  https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/numpy_ops/g3doc/TensorFlow_NumPy_Keras_and_Distribution_Strategy.ipynb)\n",
888 |     | - "- [Sentiment Analysis with Trax and TensorFlow Numpy](\n",
    | 887 | + "- [Sentiment Analysis with Trax and TensorFlow NumPy](\n",
889 | 888 | "  https://github.com/google/trax/blob/master/trax/tf_numpy_and_keras.ipynb)"
890 | 889 | ]
891 | 890 | }