
Commit 4dc71df

Author: deadeyegoodwin
Update documentation to remove beta from BLS (triton-inference-server#100)
1 parent ea58712 commit 4dc71df

2 files changed: +5 −5 lines changed


README.md

Lines changed: 4 additions & 4 deletions
```diff
@@ -51,7 +51,7 @@ any C++ code.
 - [Important Notes](#important-notes)
 - [Error Handling](#error-handling)
 - [Managing Shared Memory](#managing-shared-memory)
-- [Business Logic Scripting (beta)](#business-logic-scripting-beta)
+- [Business Logic Scripting](#business-logic-scripting)
 - [Limitations](#limitations)
 - [Interoperability and GPU Support](#interoperability-and-gpu-support)
 - [`pb_utils.Tensor.to_dlpack() -> PyCapsule`](#pb_utilstensorto_dlpack---pycapsule)
@@ -142,7 +142,7 @@ $ make install
 
 The following required Triton repositories will be pulled and used in
 the build. If the CMake variables below are not specified, "main" branch
-of those repositories will be used. \<GIT\_BRANCH\_NAME\> should be the same
+of those repositories will be used. \<GIT\_BRANCH\_NAME\> should be the same
 as the Python backend repository branch that you are trying to compile.
 
 * triton-inference-server/backend: -DTRITON_BACKEND_REPO_TAG=\<GIT\_BRANCH\_NAME\>
@@ -537,7 +537,7 @@ properly set the `--shm-size` flag depending on the size of your inputs and
 outputs. The default value for docker run command is `64MB` which is very
 small.
 
-# Business Logic Scripting (beta)
+# Business Logic Scripting
 
 Triton's
 [ensemble](https://github.com/triton-inference-server/server/blob/main/docs/architecture.md#ensemble-models)
@@ -547,7 +547,7 @@ many other use cases that are not supported because as part of the model
 pipeline they require loops, conditionals (if-then-else), data-dependent
 control-flow and other custom logic to be intermixed with model execution. We
 call this combination of custom logic and model executions *Business Logic
-Scripting (BLS)*.
+Scripting (BLS)*.
 
 Starting from 21.08, you can implement BLS in your Python model. A new set of
 utility functions allows you to execute inference requests on other models being
```
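For reference, the feature this documentation change covers is driven by `pb_utils.InferenceRequest`, which a Python model calls from inside its `execute` method. The sketch below is illustrative only; the downstream model name `downstream_model` and the tensor names `INPUT0`/`OUTPUT0` are placeholders, not taken from this commit.

```python
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def execute(self, requests):
        responses = []
        for request in requests:
            # Reuse an incoming tensor as the input to another loaded model.
            input0 = pb_utils.get_input_tensor_by_name(request, "INPUT0")

            # Build and execute a BLS request; exec() blocks until the
            # downstream model returns.
            bls_request = pb_utils.InferenceRequest(
                model_name="downstream_model",  # placeholder model name
                requested_output_names=["OUTPUT0"],
                inputs=[input0])
            bls_response = bls_request.exec()

            # Surface any downstream failure instead of returning silently.
            if bls_response.has_error():
                raise pb_utils.TritonModelException(
                    bls_response.error().message())

            out0 = pb_utils.get_output_tensor_by_name(bls_response, "OUTPUT0")
            responses.append(pb_utils.InferenceResponse(
                output_tensors=[pb_utils.Tensor("OUTPUT0", out0.as_numpy())]))
        return responses
```

Because `exec()` is synchronous, the calling model waits for the downstream model's result, and the `has_error()` check keeps a downstream failure from silently producing an empty response.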

examples/bls/README.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -29,7 +29,7 @@
 # BLS Example
 
 In this section we demonstrate an end-to-end example for
-[BLS](../../README.md#business-logic-scripting-beta) in Python backend. The
+[BLS](../../README.md#business-logic-scripting) in Python backend. The
 [model repository](https://github.com/triton-inference-server/server/blob/main/docs/model_repository.md)
 should contain [pytorch](../pytorch), [addsub](../add_sub). The
 [pytorch](../pytorch) and [addsub](../add_sub) models calculate the sum and
```
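As a rough illustration of the data-dependent routing this example demonstrates, a BLS model can choose the downstream model at request time. The tensor names here (`MODEL_NAME`, `INPUT0`, `INPUT1`, `OUTPUT0`, `OUTPUT1`) are assumptions for illustration; see the model files under `examples/bls` in the repository for the actual implementation.

```python
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def execute(self, requests):
        responses = []
        for request in requests:
            # Read the name of the model to call from a string input tensor.
            raw = pb_utils.get_input_tensor_by_name(
                request, "MODEL_NAME").as_numpy()[0]
            model_name = raw.decode("utf-8") if isinstance(raw, bytes) else str(raw)

            # Forward both inputs to the chosen model (e.g. "pytorch" or "add_sub").
            bls_request = pb_utils.InferenceRequest(
                model_name=model_name,
                requested_output_names=["OUTPUT0", "OUTPUT1"],
                inputs=[
                    pb_utils.get_input_tensor_by_name(request, "INPUT0"),
                    pb_utils.get_input_tensor_by_name(request, "INPUT1"),
                ])

            bls_response = bls_request.exec()
            if bls_response.has_error():
                raise pb_utils.TritonModelException(
                    bls_response.error().message())

            # Copy the downstream outputs into this model's own response.
            out0 = pb_utils.get_output_tensor_by_name(bls_response, "OUTPUT0")
            out1 = pb_utils.get_output_tensor_by_name(bls_response, "OUTPUT1")
            responses.append(pb_utils.InferenceResponse(output_tensors=[
                pb_utils.Tensor("OUTPUT0", out0.as_numpy()),
                pb_utils.Tensor("OUTPUT1", out1.as_numpy()),
            ]))
        return responses
```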
