Commit cd11e64

Merge pull request pytorch#736 from cccclai/site
Update mobile homepage for 1.9
2 parents e7c3fbe + 494cf4a

File tree: 4 files changed, +34 -12 lines

_mobile/android.md (+23 -7)

@@ -34,9 +34,10 @@ model.eval()
 example = torch.rand(1, 3, 224, 224)
 traced_script_module = torch.jit.trace(model, example)
 traced_script_module_optimized = optimize_for_mobile(traced_script_module)
-traced_script_module_optimized.save("app/src/main/assets/model.pt")
+traced_script_module_optimized._save_for_lite_interpreter("app/src/main/assets/model.ptl")
+
 ```
-If everything works well, we should have our model - `model.pt` generated in the assets folder of android application.
+If everything works well, we should have our model - `model.ptl` generated in the assets folder of android application.
 That will be packaged inside android application as `asset` and can be used on the device.

 More details about TorchScript you can find in [tutorials on pytorch.org](https://pytorch.org/docs/stable/jit.html)
@@ -64,8 +65,8 @@ repositories {
 }

 dependencies {
-    implementation 'org.pytorch:pytorch_android:1.8.0'
-    implementation 'org.pytorch:pytorch_android_torchvision:1.8.0'
+    implementation 'org.pytorch:pytorch_android_lite:1.9.0'
+    implementation 'org.pytorch:pytorch_android_torchvision:1.9.0'
 }
 ```
 Where `org.pytorch:pytorch_android` is the main dependency with PyTorch Android API, including libtorch native library for all 4 android abis (armeabi-v7a, arm64-v8a, x86, x86_64).
@@ -81,11 +82,11 @@ As a first step we read `image.jpg` to `android.graphics.Bitmap` using the standard
 Bitmap bitmap = BitmapFactory.decodeStream(getAssets().open("image.jpg"));
 ```

-#### 5. Loading TorchScript Module
+#### 5. Loading Mobile Module
 ```
-Module module = Module.load(assetFilePath(this, "model.pt"));
+Module module = Module.load(assetFilePath(this, "model.ptl"));
 ```
-`org.pytorch.Module` represents `torch::jit::script::Module` that can be loaded with `load` method specifying file path to the serialized to file model.
+`org.pytorch.Module` represents `torch::jit::mobile::Module` that can be loaded with `load` method specifying file path to the serialized to file model.

 #### 6. Preparing Input
 ```
@@ -389,6 +390,21 @@ SELECTED_OP_LIST=MobileNetV2.yaml scripts/build_pytorch_android.sh arm64-v8a

 After successful build you can integrate the result aar files to your android gradle project, following the steps from previous section of this tutorial (Building PyTorch Android from Source).

+## Use PyTorch JIT interpreter
+
+PyTorch JIT interpreter is the default interpreter before 1.9 (a version of our PyTorch interpreter that is not as size-efficient). It will still be supported in 1.9, and can be used via `build.gradle`:
+```
+repositories {
+    jcenter()
+}
+
+dependencies {
+    implementation 'org.pytorch:pytorch_android:1.9.0'
+    implementation 'org.pytorch:pytorch_android_torchvision:1.9.0'
+}
+```
+
 ## Android Tutorials

 Watch the following [video](https://youtu.be/5Lxuu16_28o) as PyTorch Partner Engineer Brad Heintz walks through steps for setting up the PyTorch Runtime for Android projects:

_mobile/home.md (+1 -2)

@@ -24,15 +24,14 @@ PyTorch Mobile is in beta stage right now, and is already in wide scale production
 * Support for tracing and scripting via TorchScript IR
 * Support for XNNPACK floating point kernel libraries for Arm CPUs
 * Integration of QNNPACK for 8-bit quantized kernels. Includes support for per-channel quantization, dynamic quantization and more
-* Build level optimization and selective compilation depending on the operators needed for user applications, i.e., the final binary size of the app is determined by the actual operators the app needs
+* Provides an [efficient mobile interpreter in Android and iOS](https://pytorch.org/tutorials/prototype/lite_interpreter.html). Also supports build level optimization and selective compilation depending on the operators needed for user applications (i.e., the final binary size of the app is determined by the actual operators the app needs).
 * Streamline model optimization via optimize_for_mobile
 * Support for hardware backends like GPU, DSP, and NPU will be available soon in Beta


 ## Prototypes
 We have launched the following features in prototype, available in the PyTorch nightly releases, and would love to get your feedback on the [PyTorch forums](https://discuss.pytorch.org/c/mobile/18):

-* Runtime binary size reduction via our [Lite Interpreter](https://pytorch.org/tutorials/prototype/lite_interpreter.html)
 * GPU support on [iOS via Metal](https://pytorch.org/tutorials/prototype/ios_gpu_workflow.html)
 * GPU support on [Android via Vulkan](https://pytorch.org/tutorials/prototype/vulkan_workflow.html)
 * DSP and NPU support on Android via [Google NNAPI](https://pytorch.org/tutorials/prototype/nnapi_mobilenetv2.html)

_mobile/ios.md (+10 -3)

@@ -94,10 +94,10 @@ private lazy var module: TorchModule = {
 }
 }()
 ```
-Note that the `TorchModule` Class is an Objective-C wrapper of `torch::jit::script::Module`.
+Note that the `TorchModule` Class is an Objective-C wrapper of `torch::jit::mobile::Module`.

 ```cpp
-torch::jit::script::Module module = torch::jit::load(filePath.UTF8String);
+torch::jit::mobile::Module module = torch::jit::_load_for_mobile(filePath.UTF8String);
 ```
 Since Swift can not talk to C++ directly, we have to either use an Objective-C class as a bridge, or create a C wrapper for the C++ library. For demo purpose, we're going to wrap everything in this Objective-C class.

@@ -251,7 +251,8 @@ To use the custom built libraries the project, replace `#import <LibTorch/LibTorch.h>`
 #include "caffe2/core/timer.h"
 #include "caffe2/utils/string_utils.h"
 #include "torch/csrc/autograd/grad_mode.h"
-#include "torch/csrc/jit/serialization/import.h"
+#include "torch/csrc/jit/mobile/import.h"
+#include "torch/csrc/jit/mobile/module.h"
 #include "torch/script.h"
 ```

@@ -289,6 +290,12 @@ SELECTED_OP_LIST=MobileNetV2.yaml BUILD_PYTORCH_MOBILE=1 IOS_ARCH=arm64 ./scripts
 torch::jit::GraphOptimizerEnabledGuard guard(false);
 ```

+## Use PyTorch JIT interpreter
+PyTorch JIT interpreter is the default interpreter before 1.9 (a version of our PyTorch interpreter that is not as size-efficient). It will still be supported in 1.9, and can be used in CocoaPods:
+```
+pod 'LibTorch', '~>1.9.0'
+```
+
 ## iOS Tutorials

 Watch the following [video](https://youtu.be/amTepUIR93k) as PyTorch Partner Engineer Brad Heintz walks through steps for setting up the PyTorch Runtime for iOS projects:

assets/images/pytorch-mobile.png (-42.3 KB, binary file)
