@@ -80,45 +80,45 @@ any C++ code.

1. Run the Triton Inference Server container.
```
-$ docker run --shm-size=1g --ulimit memlock=-1 -p 8000:8000 -p 8001:8001 -p 8002:8002 --ulimit stack=67108864 -ti nvcr.io/nvidia/tritonserver:<xx.yy>-py3
+docker run --shm-size=1g --ulimit memlock=-1 -p 8000:8000 -p 8001:8001 -p 8002:8002 --ulimit stack=67108864 -ti nvcr.io/nvidia/tritonserver:<xx.yy>-py3
```
Replace \<xx.yy\> with the Triton version (e.g. 21.05).
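
For instance, with the 21.05 release used as the example version above, the command with the placeholder filled in would be:
```
docker run --shm-size=1g --ulimit memlock=-1 -p 8000:8000 -p 8001:8001 -p 8002:8002 --ulimit stack=67108864 -ti nvcr.io/nvidia/tritonserver:21.05-py3
```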

2. Inside the container, clone the Python backend repository.

```
-$ git clone https://github.com/triton-inference-server/python_backend -b r<xx.yy>
+git clone https://github.com/triton-inference-server/python_backend -b r<xx.yy>
```

3. Install the example model.
```
-$ cd python_backend
-$ mkdir -p models/add_sub/1/
-$ cp examples/add_sub/model.py models/add_sub/1/model.py
-$ cp examples/add_sub/config.pbtxt models/add_sub/config.pbtxt
+cd python_backend
+mkdir -p models/add_sub/1/
+cp examples/add_sub/model.py models/add_sub/1/model.py
+cp examples/add_sub/config.pbtxt models/add_sub/config.pbtxt
```
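
After these copies, the model repository used in the next step should look like this (layout inferred directly from the commands above):
```
models/
└── add_sub/
    ├── config.pbtxt
    └── 1/
        └── model.py
```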

4. Start the Triton server.

```
-$ tritonserver --model-repository `pwd`/models
+tritonserver --model-repository `pwd`/models
```
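
Before moving on, you can confirm the server came up by polling Triton's HTTP readiness endpoint from another shell (port 8000 is published by the `docker run` command in step 1); this check is an addition to the original steps:
```
curl -v localhost:8000/v2/health/ready
```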

5. On the host machine, start the client container.

```
-docker run -ti --net host nvcr.io/nvidia/tritonserver:<xx.yy>-py3-sdk /bin/bash
+docker run -ti --net host nvcr.io/nvidia/tritonserver:<xx.yy>-py3-sdk /bin/bash
```

6. In the client container, clone the Python backend repository.

```
-$ git clone https://github.com/triton-inference-server/python_backend -b r<xx.yy>
+git clone https://github.com/triton-inference-server/python_backend -b r<xx.yy>
```

7. Run the example client.
```
-$ python3 python_backend/examples/add_sub/client.py
+python3 python_backend/examples/add_sub/client.py
```
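
If you want to exercise the model without the Python client, a raw HTTP request against the inference endpoint should also work. This sketch is not part of the original steps and assumes the stock add_sub `config.pbtxt` (two FP32 inputs named INPUT0 and INPUT1 with shape [4]):
```
curl -s -d '{"inputs": [{"name": "INPUT0", "shape": [4], "datatype": "FP32", "data": [1, 2, 3, 4]}, {"name": "INPUT1", "shape": [4], "datatype": "FP32", "data": [5, 6, 7, 8]}]}' localhost:8000/v2/models/add_sub/infer
```
The response should contain OUTPUT0 (the element-wise sum) and OUTPUT1 (the difference).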

## Building from Source
@@ -145,10 +145,10 @@ sudo apt-get install rapidjson-dev libarchive-dev zlib1g-dev
r21.06).

```
-$ mkdir build
-$ cd build
-$ cmake -DTRITON_ENABLE_GPU=ON -DTRITON_BACKEND_REPO_TAG=<GIT_BRANCH_NAME> -DTRITON_COMMON_REPO_TAG=<GIT_BRANCH_NAME> -DTRITON_CORE_REPO_TAG=<GIT_BRANCH_NAME> -DCMAKE_INSTALL_PREFIX:PATH=`pwd`/install ..
-$ make install
+mkdir build
+cd build
+cmake -DTRITON_ENABLE_GPU=ON -DTRITON_BACKEND_REPO_TAG=<GIT_BRANCH_NAME> -DTRITON_COMMON_REPO_TAG=<GIT_BRANCH_NAME> -DTRITON_CORE_REPO_TAG=<GIT_BRANCH_NAME> -DCMAKE_INSTALL_PREFIX:PATH=`pwd`/install ..
+make install
```
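
As a concrete instance of the command above, building against the r21.06 release branch mentioned earlier only substitutes the placeholder:
```
cmake -DTRITON_ENABLE_GPU=ON -DTRITON_BACKEND_REPO_TAG=r21.06 -DTRITON_COMMON_REPO_TAG=r21.06 -DTRITON_CORE_REPO_TAG=r21.06 -DCMAKE_INSTALL_PREFIX:PATH=`pwd`/install ..
make install
```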

The following required Triton repositories will be pulled and used in
@@ -167,21 +167,21 @@ this location is `/opt/tritonserver`.
3. Copy the example model and configuration

```
-$ mkdir -p models/add_sub/1/
-$ cp examples/add_sub/model.py models/add_sub/1/model.py
-$ cp examples/add_sub/config.pbtxt models/add_sub/config.pbtxt
+mkdir -p models/add_sub/1/
+cp examples/add_sub/model.py models/add_sub/1/model.py
+cp examples/add_sub/config.pbtxt models/add_sub/config.pbtxt
```

4. Start the Triton Server

```
-$ /opt/tritonserver/bin/tritonserver --model-repository=`pwd`/models
+/opt/tritonserver/bin/tritonserver --model-repository=`pwd`/models
```

5. Use the client app to perform inference

```
-$ python3 examples/add_sub/client.py
+python3 examples/add_sub/client.py
```

## Usage
@@ -592,19 +592,19 @@ can read more on how
(replace \<GIT\_BRANCH\_NAME\> with the branch name that you want to use,
for release branches it should be r\<xx.yy\>):
```bash
-$ git clone https://github.com/triton-inference-server/python_backend -b
+git clone https://github.com/triton-inference-server/python_backend -b
<GIT_BRANCH_NAME>
-$ cd python_backend
-$ mkdir build && cd build
-$ cmake -DTRITON_ENABLE_GPU=ON -DTRITON_BACKEND_REPO_TAG=<GIT_BRANCH_NAME> -DTRITON_COMMON_REPO_TAG=<GIT_BRANCH_NAME> -DTRITON_CORE_REPO_TAG=<GIT_BRANCH_NAME> -DCMAKE_INSTALL_PREFIX:PATH=`pwd`/install ..
-$ make triton-python-backend-stub
+cd python_backend
+mkdir build && cd build
+cmake -DTRITON_ENABLE_GPU=ON -DTRITON_BACKEND_REPO_TAG=<GIT_BRANCH_NAME> -DTRITON_COMMON_REPO_TAG=<GIT_BRANCH_NAME> -DTRITON_CORE_REPO_TAG=<GIT_BRANCH_NAME> -DCMAKE_INSTALL_PREFIX:PATH=`pwd`/install ..
+make triton-python-backend-stub
```
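
Once the build (and the `ldd` check below) succeeds, the resulting `triton_python_backend_stub` is typically copied into the model's directory next to `config.pbtxt` so that Triton picks it up instead of the default stub. A minimal sketch, reusing the add_sub layout from the earlier steps:
```
cp triton_python_backend_stub models/add_sub/
```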

Now you have a Python backend stub built with your Python version. You can verify
it using `ldd`:

```
-$ ldd triton_python_backend_stub
+ldd triton_python_backend_stub
...
libpython3.6m.so.1.0 => /home/ubuntu/envs/miniconda3/envs/python-3-6/lib/libpython3.6m.so.1.0 (0x00007fbb69cf3000)
...
@@ -643,7 +643,7 @@ environment is portable. You can create a tar file for your conda environment
using the `conda-pack` command:

```
-$ conda-pack
+conda-pack
Collecting packages...
Packing environment at '/home/iman/miniconda3/envs/python-3-6' to 'python-3-6.tar.gz'
[########################################] | 100% Completed | 4.5s
@@ -1080,7 +1080,7 @@ Please see the [README.md](https://github.com/triton-inference-server/python_bac

# Logging

-Your Python model can log information using the following methods:
+Starting from the 22.09 release, your Python model can log information using the following methods:

```python
import triton_python_backend_utils as pb_utils
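
# A hedged sketch of the logging helpers this section introduces; the method
# names come from pb_utils.Logger (normally called inside the model's
# initialize/execute methods) and should be verified against your release.
logger = pb_utils.Logger
logger.log_info("Info Msg!")
logger.log_warn("Warning Msg!")
logger.log_error("Error Msg!")
logger.log_verbose("Verbose Msg!")
```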