
Commit ccad396

Link fix3 (#1524)
* Fix links
* more link fixes
* fix links
* fix links
* removed broken link
* fix formatting

Signed-off-by: Chris Abraham <[email protected]>
1 parent bebcb9a commit ccad396

File tree: 47 files changed, +81 -124 lines changed


_data/ecosystem/pted/2021/posters.yaml

+2-2
@@ -10,7 +10,7 @@
 are provided as a Torch tensor with a defined gradient. We highlight how this
 functionality can be used to explore new paradigms in machine learning, including
 the use of hybrid models for transfer learning.
-link: http://www.pennylane.ai
+link: http://pennylane.ai
 poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/K1.png
 section: K1
 thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-K1.png
@@ -321,7 +321,7 @@
 supports accelerated mixed precision training. AMD also provides hardware support
 for the PyTorch community build to help develop and maintain new features. This
 poster will highlight some of the work that has gone into enabling PyTorch support.
-link: www.amd.com/rocm
+link: https://www.amd.com/rocm
 poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/K8.png
 section: K8
 thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-K8.png

_mobile/android.md

+1-1
@@ -94,7 +94,7 @@ Tensor inputTensor = TensorImageUtils.bitmapToFloat32Tensor(bitmap,
 TensorImageUtils.TORCHVISION_NORM_MEAN_RGB, TensorImageUtils.TORCHVISION_NORM_STD_RGB);
 ```
 `org.pytorch.torchvision.TensorImageUtils` is part of `org.pytorch:pytorch_android_torchvision` library.
-The `TensorImageUtils#bitmapToFloat32Tensor` method creates tensors in the [torchvision format](https://pytorch.org/docs/stable/torchvision/models.html) using `android.graphics.Bitmap` as a source.
+The `TensorImageUtils#bitmapToFloat32Tensor` method creates tensors in the [torchvision format](https://pytorch.org/vision/stable/models.html) using `android.graphics.Bitmap` as a source.
 
 > All pre-trained models expect input images normalized in the same way, i.e. mini-batches of 3-channel RGB images of shape (3 x H x W), where H and W are expected to be at least 224.
 > The images have to be loaded in to a range of `[0, 1]` and then normalized using `mean = [0.485, 0.456, 0.406]` and `std = [0.229, 0.224, 0.225]`
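
For reference, the torchvision preprocessing that this Android helper mirrors can be sketched in Python (a minimal sketch; the resize and crop sizes are illustrative defaults, not something this diff specifies):

```python
from torchvision import transforms

# Scale pixels to [0, 1], then normalize with the torchvision mean/std,
# matching what TensorImageUtils.bitmapToFloat32Tensor produces on Android.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),  # HWC uint8 image -> CHW float tensor in [0, 1]
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```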

_mobile/ios.md

+1-1
@@ -23,7 +23,7 @@ HelloWorld is a simple image classification application that demonstrates how to
 
 ### Model Preparation
 
-Let's start with model preparation. If you are familiar with PyTorch, you probably should already know how to train and save your model. In case you don't, we are going to use a pre-trained image classification model - [MobileNet v2](https://pytorch.org/hub/pytorch_vision_mobilenet_v2/), which is already packaged in [TorchVision](https://pytorch.org/docs/stable/torchvision/index.html). To install it, run the command below.
+Let's start with model preparation. If you are familiar with PyTorch, you probably should already know how to train and save your model. In case you don't, we are going to use a pre-trained image classification model - [MobileNet v2](https://pytorch.org/hub/pytorch_vision_mobilenet_v2/), which is already packaged in [TorchVision](https://pytorch.org/vision/stable/index.html). To install it, run the command below.
 
 > We highly recommend following the [Pytorch Github page](https://github.com/pytorch/pytorch) to set up the Python development environment on your local machine.
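
The model-preparation step this tutorial describes might look roughly like the following sketch (the file name is hypothetical; trace-and-save is the usual approach in PyTorch mobile demos of this era, not something this diff specifies):

```python
import torch
import torchvision

# Load the pre-trained MobileNet v2 that ships with TorchVision.
model = torchvision.models.mobilenet_v2(pretrained=True)
model.eval()

# Trace with a dummy 224x224 RGB input and save for use on device.
example = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example)
traced.save("mobilenet_v2.pt")
```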

_posts/2018-03-5-tensor-comprehensions.md

+4-6
@@ -34,7 +34,7 @@ conda install -c pytorch -c tensorcomp tensor_comprehensions
 
 At this time we only provide Linux-64 binaries which have been tested on Ubuntu 16.04 and CentOS7.
 
-TC depends on heavyweight C++ projects such as [Halide](http://halide-lang.org/), [Tapir-LLVM](https://github.com/wsmoses/Tapir-LLVM) and [ISL](http://isl.gforge.inria.fr/). Hence, we rely on Anaconda to distribute these dependencies reliably. For the same reason, TC is not available via PyPI.
+TC depends on heavyweight C++ projects such as [Halide](http://halide-lang.org/), [Tapir-LLVM](https://github.com/wsmoses/Tapir-LLVM) and ISL. Hence, we rely on Anaconda to distribute these dependencies reliably. For the same reason, TC is not available via PyPI.
 
 #### 2. Import the python package
 
@@ -74,8 +74,6 @@ The autotuner is your biggest friend. You generally do not want to use a `tc` fu
 
 When the autotuning is running, the current best performance is displayed. If you are satisfied with the current result or you are out of time, stop the tuning procedure by pressing `Ctrl+C`.
 
-![tc-autotuner](https://pytorch.org/static/img/tc_autotuner.gif)
-
 `cache` saves the results of the autotuned kernel search and saves it to the file `fcrelu_100_128_100.tc`. The next time you call the same line of code, it loads the results of the autotuning without recomputing it.
 
 The autotuner has a few hyperparameters (just like your ConvNet has learning rate, number of layers, etc.). We pick reasonable defaults, but you can read about using advanced options [here](https://facebookresearch.github.io/TensorComprehensions/framework/pytorch_integration/writing_layers.html#specifying-mapping-options).
@@ -146,7 +144,7 @@ Note: the syntax for passing in scalars is subject to change in the next release
 
 ## torch.nn layers
 
-We added some sugar-coating around the basic PyTorch integration of TC to make it easy to integrate TC into larger `torch.nn` models by defining the forward and backward TC expressions and taking `Variable` inputs / outputs. Here is an [example](https://github.com/facebookresearch/TensorComprehensions/blob/master/test_python/layers/test_convolution_train.py) of defining a convolution layer with TC.
+We added some sugar-coating around the basic PyTorch integration of TC to make it easy to integrate TC into larger `torch.nn` models by defining the forward and backward TC expressions and taking `Variable` inputs / outputs.
 
 ## Some essentials that you will miss (we're working on them)
 
@@ -183,12 +181,12 @@ You cannot write this operation in TC: `torch.matmul(...).view(...).mean(...)`.
 ## Getting Started
 
 - [Walk through Tutorial](https://facebookresearch.github.io/TensorComprehensions/framework/pytorch_integration/writing_layers.html) to quickly get started with understanding and using Tensor Comprehensions PyTorch package.
-- [Over 20 examples](https://github.com/facebookresearch/TensorComprehensions/tree/master/test_python/layers) of various ML layers with TC, including `avgpool`, `maxpool`, `matmul`, matmul - give output buffers and `batch-matmul`, `convolution`, `strided-convolution`, `batchnorm`, `copy`, `cosine similarity`, `Linear`, `Linear + ReLU`, `group-convolutions`, strided `group-convolutions`, `indexing`, `Embedding` (lookup table), small-mobilenet, `softmax`, `tensordot`, `transpose`
+- Over 20 examples of various ML layers with TC, including `avgpool`, `maxpool`, `matmul`, matmul - give output buffers and `batch-matmul`, `convolution`, `strided-convolution`, `batchnorm`, `copy`, `cosine similarity`, `Linear`, `Linear + ReLU`, `group-convolutions`, strided `group-convolutions`, `indexing`, `Embedding` (lookup table), small-mobilenet, `softmax`, `tensordot`, `transpose`
 - [Detailed docs](https://facebookresearch.github.io/TensorComprehensions/framework/pytorch_integration/getting_started.html) on Tensor Comprehensions and integration with PyTorch.
 
 ## Communication
 
-- [Slack](https://tensorcomprehensions.herokuapp.com/): For discussion around framework integration, build support, collaboration, etc. join our slack channel.
+- Slack: For discussion around framework integration, build support, collaboration, etc. join our slack channel.
 
 - [GitHub](https://github.com/facebookresearch/TensorComprehensions): bug reports, feature requests, install issues, RFCs, thoughts, etc.
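
To ground the autotuner and cache discussion in this diff, a define/autotune round trip with the TC Python package looked roughly like this (a sketch based on the post's `fcrelu` example; signatures belong to the long-archived TC 0.x API, so treat them as approximate):

```python
import torch
import tensor_comprehensions as tc

# TC language definition for a fused fully-connected + ReLU layer.
LANG = """
def fcrelu(float(B, M) I, float(N, M) W1, float(N) B1) -> (O1) {
    O1(b, n) +=! I(b, m) * W1(n, m)
    O1(b, n) = O1(b, n) + B1(n)
    O1(b, n) = fmax(O1(b, n), 0)
}
"""
fcrelu = tc.define(LANG, name="fcrelu")

B, M, N = 100, 128, 100
I, W1, B1 = torch.randn(B, M).cuda(), torch.randn(N, M).cuda(), torch.randn(N).cuda()

# Autotune once; results are cached in fcrelu_100_128_100.tc and reloaded on the next run.
fcrelu.autotune(I, W1, B1, cache="fcrelu_100_128_100.tc")
out = fcrelu(I, W1, B1)
```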

_posts/2019-05-08-model-serving-in-pyorch.md

+1-1
@@ -52,7 +52,7 @@ If you can't use the cloud or prefer to manage all services using the same techn
 
 If you want to manage multiple models within a non-cloud service solution, there are teams developing PyTorch support in model servers like [MLFlow](https://mlflow.org/), [Kubeflow](https://www.kubeflow.org/), and [RedisAI.](https://oss.redislabs.com/redisai/) We're excited to see innovation from multiple teams building OSS model servers, and we'll continue to highlight innovation in the PyTorch ecosystem in the future.
 
-If you can use the cloud for your application, there are several great choices for working with models in the cloud. For AWS Sagemaker, you can start find a guide to [all of the resources from AWS for working with PyTorch](https://docs.aws.amazon.com/sagemaker/latest/dg/pytorch.html), including docs on how to use the [Sagemaker Python SDK](https://sagemaker.readthedocs.io/en/stable/using_pytorch.html). You can also see [some](https://youtu.be/5h1Ot2dPi2E) [talks](https://youtu.be/qc5ZikKw9_w) we've given on using PyTorch on Sagemaker. Finally, if you happen to be using PyTorch via FastAI, then they've written a [really simple guide](https://course.fast.ai/deployment_amzn_sagemaker.html) to getting up and running on Sagemaker.
+If you can use the cloud for your application, there are several great choices for working with models in the cloud. For AWS Sagemaker, you can start find a guide to [all of the resources from AWS for working with PyTorch](https://docs.aws.amazon.com/sagemaker/latest/dg/pytorch.html), including docs on how to use the [Sagemaker Python SDK](https://sagemaker.readthedocs.io/en/stable/using_pytorch.html). You can also see [some](https://youtu.be/5h1Ot2dPi2E) [talks](https://youtu.be/qc5ZikKw9_w) we've given on using PyTorch on Sagemaker. Finally, if you happen to be using PyTorch via FastAI, then they've written a really simple guide to getting up and running on Sagemaker.
 
 The story is similar across other major clouds. On Google Cloud, you can follow [these instructions](https://cloud.google.com/deep-learning-vm/docs/pytorch_start_instance) to get access to a Deep Learning VM with PyTorch pre-installed. On Microsoft Azure, you have a number of ways to get started from [Azure Machine Learning Service](https://azure.microsoft.com/en-us/services/machine-learning-service/) to [Azure Notebooks](https://notebooks.azure.com/pytorch/projects/tutorials) showing how to use PyTorch.
 
_posts/2019-06-10-towards-reproducible-research-with-pytorch-hub.md

+2-2
@@ -106,7 +106,7 @@ Users can list all available entrypoints in a repo using the ```torch.hub.list()
  'vgg19_bn']
 ```
 
-Note that PyTorch Hub also allows auxillary entrypoints (other than pretrained models), e.g. ```bertTokenizer``` for preprocessing in the [BERT](https://pytorch.org/hub/huggingface_pytorch-pretrained-bert_bert/) models, to make the user workflow smoother.
+Note that PyTorch Hub also allows auxillary entrypoints (other than pretrained models), e.g. ```bertTokenizer``` for preprocessing in the BERT models, to make the user workflow smoother.
 
 
 ### Load a model
@@ -164,7 +164,7 @@ forward(input_ids, token_type_ids=None, attention_mask=None, masked_lm_labels=No
 ...
 ```
 
-Have a closer look at the [BERT](https://pytorch.org/hub/huggingface_pytorch-pretrained-bert_bert/) and [DeepLabV3](https://pytorch.org/hub/pytorch_vision_deeplabv3_resnet101/) pages, where you can see how these models can be used once loaded.
+Have a closer look at the BERT and [DeepLabV3](https://pytorch.org/hub/pytorch_vision_deeplabv3_resnet101/) pages, where you can see how these models can be used once loaded.
 
 ### Other ways to explore
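
As a concrete illustration of the list/load workflow this post describes (a minimal sketch using the public `pytorch/vision` repo; the DeepLabV3 entrypoint matches the page linked above):

```python
import torch

# Discover the entrypoints a GitHub repo exposes via its hubconf.py ...
entrypoints = torch.hub.list('pytorch/vision')
print(entrypoints)

# ... then load one of them with pretrained weights.
model = torch.hub.load('pytorch/vision', 'deeplabv3_resnet101', pretrained=True)
model.eval()
```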

_posts/2019-07-18-pytorch-ecosystem.md

+1-1
@@ -45,7 +45,7 @@ If you would like to have your project included in the PyTorch ecosystem and fea
 
 ## PyTorch Hub for reproducible research | New models
 
-Since [launching](https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/) the PyTorch Hub in beta, we’ve received a lot of interest from the community including the contribution of many new models. Some of the latest include [U-Net for Brain MRI](https://pytorch.org/hub/mateuszbuda_brain-segmentation-pytorch_unet/) contributed by researchers at Duke University, [Single Shot Detection](https://pytorch.org/hub/nvidia_deeplearningexamples_ssd/) from NVIDIA and [Transformer-XL](https://pytorch.org/hub/huggingface_pytorch-pretrained-bert_transformerXL/) from HuggingFace.
+Since [launching](https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/) the PyTorch Hub in beta, we’ve received a lot of interest from the community including the contribution of many new models. Some of the latest include [U-Net for Brain MRI](https://pytorch.org/hub/mateuszbuda_brain-segmentation-pytorch_unet/) contributed by researchers at Duke University, [Single Shot Detection](https://pytorch.org/hub/nvidia_deeplearningexamples_ssd/) from NVIDIA and Transformer-XL from HuggingFace.
 
 We’ve seen organic integration of the PyTorch Hub by folks like [paperswithcode](https://paperswithcode.com/), making it even easier for you to try out the state of the art in AI research. In addition, companies like [Seldon](https://github.com/axsaucedo/seldon-core/tree/pytorch_hub/examples/models/pytorchhub) provide production-level support for PyTorch Hub models on top of Kubernetes.
 
_posts/2019-08-08-pytorch-1.2-and-domain-api-release.md

+2-2
@@ -115,9 +115,9 @@ We are excited to see an active community around torchaudio and eager to further
 
 ## Torchtext 0.4 with supervised learning datasets
 
-A key focus area of torchtext is to provide the fundamental elements to help accelerate NLP research. This includes easy access to commonly used datasets and basic preprocessing pipelines for working on raw text based data. The torchtext 0.4.0 release includes several popular supervised learning baselines with "one-command" data loading. A [tutorial](https://pytorch.org/tutorials/beginner/text_sentiment_ngrams_tutorial.html) is included to show how to use the new datasets for text classification analysis. We also added and improved on a few functions such as [get_tokenizer](https://pytorch.org/text/data.html?highlight=get_tokenizer#torchtext.data.get_tokenizer) and [build_vocab_from_iterator](https://pytorch.org/text/vocab.html#build-vocab-from-iterator) to make it easier to implement future datasets. Additional examples can be found [here](https://github.com/pytorch/text/tree/master/examples/text_classification).
+A key focus area of torchtext is to provide the fundamental elements to help accelerate NLP research. This includes easy access to commonly used datasets and basic preprocessing pipelines for working on raw text based data. The torchtext 0.4.0 release includes several popular supervised learning baselines with "one-command" data loading. A [tutorial](https://pytorch.org/tutorials/beginner/text_sentiment_ngrams_tutorial.html) is included to show how to use the new datasets for text classification analysis. We also added and improved on a few functions such as get_tokenizer and build_vocab_from_iterator to make it easier to implement future datasets. Additional examples can be found [here](https://github.com/pytorch/text/tree/master/examples/text_classification).
 
-Text classification is an important task in Natural Language Processing with many applications, such as sentiment analysis. The new release includes several popular [text classification datasets](https://pytorch.org/text/datasets.html?highlight=textclassification#torchtext.datasets.TextClassificationDataset) for supervised learning including:
+Text classification is an important task in Natural Language Processing with many applications, such as sentiment analysis. The new release includes several popular text classification datasets for supervised learning including:
 
 * AG_NEWS
 * SogouNews
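
The two helpers named in this hunk can be sketched together like so (a minimal sketch against the torchtext 0.4-era API; the sample sentence is illustrative):

```python
from torchtext.data.utils import get_tokenizer
from torchtext.vocab import build_vocab_from_iterator

# Tokenize raw text, then build a vocabulary from an iterator of token lists.
tokenizer = get_tokenizer("basic_english")
tokens = [tokenizer("You can now load AG_NEWS with one command.")]
vocab = build_vocab_from_iterator(tokens)
print(len(vocab))
```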
