[Feature Request]: Support loading a different model/key for RunInference #27628

Closed
@damccorm

Description

What would you like to happen?

Today, many users have pipelines that select a single model for inference out of hundreds or thousands of candidate models based on properties of the data. RunInference does not currently support this use case. We should extend RunInference so that a single keyed RunInference transform can serve a different model for each key.

See design doc here - https://docs.google.com/document/d/1kj3FyWRbJu1KhViX07Z0Gk0MU0842jhYRhI-DMhhcv4/edit?usp=sharing
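For concreteness, here is a minimal sketch of what this could look like in the Python SDK, assuming a KeyedModelHandler that accepts per-key model mappings. The KeyModelMapping routing, the keys, and the model URIs below are illustrative placeholders for the proposal, not a committed API:

```python
# Sketch only: per-key model selection for RunInference. KeyedModelHandler
# exists today; the KeyModelMapping-style routing, keys, and gs:// paths are
# illustrative assumptions, not a final API.
import numpy as np

import apache_beam as beam
from apache_beam.ml.inference.base import (KeyedModelHandler, KeyModelMapping,
                                           RunInference)
from apache_beam.ml.inference.sklearn_inference import SklearnModelHandlerNumpy

# Route each key to the handler (and therefore the model) that serves it.
per_key_handler = KeyedModelHandler([
    KeyModelMapping(
        ['en', 'fr'],
        SklearnModelHandlerNumpy(model_uri='gs://bucket/model_a.pkl')),
    KeyModelMapping(
        ['de'],
        SklearnModelHandlerNumpy(model_uri='gs://bucket/model_b.pkl')),
])

with beam.Pipeline() as p:
    _ = (
        p
        | beam.Create([
            ('en', np.array([1.0, 2.0])),
            ('de', np.array([3.0, 4.0])),
        ])
        # Each keyed element is scored by the model mapped to its key.
        | RunInference(per_key_handler))
```

Elements stay keyed tuples, so a single transform can serve many models; how hundreds of models are loaded, shared, and evicted within a worker is the kind of question the design doc above is meant to settle.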

Issue Priority

Priority: 2 (default / most feature requests should be filed as P2)

Issue Components

  • Component: Python SDK

Metadata

Labels

P2, done & done (Issue has been reviewed after it was closed for verification, followups, etc.), new feature, python
