Commit 05b7ee6

Auto-generated API code (#2867)
1 parent d1ba142 commit 05b7ee6

File tree: 5 files changed, 178 insertions(+), 21 deletions(-)

docs/reference.asciidoc

Lines changed: 40 additions & 7 deletions
@@ -1734,7 +1734,7 @@ client.search({ ... })
 ** *`profile` (Optional, boolean)*: Set to `true` to return detailed timing information about the execution of individual components in a search request. NOTE: This is a debugging tool and adds significant overhead to search execution.
 ** *`query` (Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_grid, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type })*: The search definition using the Query DSL.
 ** *`rescore` (Optional, { window_size, query, learning_to_rank } | { window_size, query, learning_to_rank }[])*: Can be used to improve precision by reordering just the top (for example 100 - 500) documents returned by the `query` and `post_filter` phases.
-** *`retriever` (Optional, { standard, knn, rrf, text_similarity_reranker, rule })*: A retriever is a specification to describe top documents returned from a search. A retriever replaces other elements of the search API that also return top documents such as `query` and `knn`.
+** *`retriever` (Optional, { standard, knn, rrf, text_similarity_reranker, rule, rescorer, linear, pinned })*: A retriever is a specification to describe top documents returned from a search. A retriever replaces other elements of the search API that also return top documents such as `query` and `knn`.
 ** *`script_fields` (Optional, Record<string, { script, ignore_failure }>)*: Retrieve a script evaluation (based on different fields) for each hit.
 ** *`search_after` (Optional, number | number | string | boolean | null | User-defined value[])*: Used to retrieve the next page of hits using a set of sort values from the previous page.
 ** *`size` (Optional, number)*: The number of hits to return, which must not be negative. By default, you cannot page through more than 10,000 hits using the `from` and `size` parameters. To page through more hits, use the `search_after` property.
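The `rescorer`, `linear`, and `pinned` retrievers in this list are new in this commit. As an illustration only (not part of the generated docs): a search body combining a lexical and a vector sub-retriever with the `linear` retriever might look like the sketch below. The index, field names, and vector values are invented, and the inline `InnerRetriever` shape is a simplified stand-in for the generated type.

```typescript
// Simplified stand-in for the InnerRetriever type added in this commit:
// each entry wraps a retriever with a weight and a score normalizer.
interface InnerRetriever {
  retriever: Record<string, unknown>
  weight: number
  normalizer: 'none' | 'minmax' | 'l2_norm'
}

// Hypothetical sub-retrievers: a match query and a kNN vector search.
const subRetrievers: InnerRetriever[] = [
  {
    retriever: { standard: { query: { match: { title: 'quick brown fox' } } } },
    weight: 2,
    normalizer: 'minmax'
  },
  {
    retriever: { knn: { field: 'embedding', query_vector: [0.3, 0.1], k: 10, num_candidates: 50 } },
    weight: 1,
    normalizer: 'l2_norm'
  }
]

// The weighted, normalized scores are combined linearly over the
// top rank_window_size candidates of each sub-retriever.
const searchBody = {
  retriever: {
    linear: { retrievers: subRetrievers, rank_window_size: 100 }
  },
  size: 10
}

console.log(searchBody.retriever.linear.retrievers.length)
```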
@@ -7231,9 +7231,45 @@ Changes dynamic index settings in real time.
 For data streams, index setting changes are applied to all backing indices by default.
 
 To revert a setting to the default value, use a null value.
-The list of per-index settings that can be updated dynamically on live indices can be found in index module documentation.
+The list of per-index settings that can be updated dynamically on live indices can be found in index settings documentation.
 To preserve existing settings from being updated, set the `preserve_existing` parameter to `true`.
 
+There are multiple valid ways to represent index settings in the request body. You can specify only the setting, for example:
+
+----
+{
+  "number_of_replicas": 1
+}
+----
+
+Or you can use an `index` setting object:
+----
+{
+  "index": {
+    "number_of_replicas": 1
+  }
+}
+----
+
+Or you can use dot annotation:
+----
+{
+  "index.number_of_replicas": 1
+}
+----
+
+Or you can embed any of the aforementioned options in a `settings` object. For example:
+
+----
+{
+  "settings": {
+    "index": {
+      "number_of_replicas": 1
+    }
+  }
+}
+----
+
 NOTE: You can only define new analyzers on closed indices.
 To add an analyzer, you must close the index, define the analyzer, and reopen the index.
 You cannot close the write index of a data stream.
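The equivalence of the body shapes documented above can be checked mechanically. The sketch below is an illustration, not part of the client: `flatten` is a hypothetical helper that normalizes each shape to dotted keys, showing that the `index` object, dot notation, and `settings` wrapper all address the same setting.

```typescript
type Settings = Record<string, unknown>

// Flatten nested setting objects to dotted keys, unwrapping an
// optional top-level `settings` object first.
function flatten (obj: Settings, prefix = ''): Settings {
  const out: Settings = {}
  for (const [key, value] of Object.entries(obj)) {
    if (key === 'settings' && prefix === '') {
      Object.assign(out, flatten(value as Settings))
    } else if (value !== null && typeof value === 'object' && !Array.isArray(value)) {
      Object.assign(out, flatten(value as Settings, prefix === '' ? key : `${prefix}.${key}`))
    } else {
      out[prefix === '' ? key : `${prefix}.${key}`] = value
    }
  }
  return out
}

// The four shapes from the documentation above:
const bare = { number_of_replicas: 1 }
const nested = { index: { number_of_replicas: 1 } }
const dotted = { 'index.number_of_replicas': 1 }
const wrapped = { settings: { index: { number_of_replicas: 1 } } }

// nested, dotted, and wrapped all flatten to the same dotted key;
// the bare form simply omits the `index.` prefix.
console.log(flatten(nested), flatten(dotted), flatten(wrapped), flatten(bare))
```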
@@ -8013,12 +8049,9 @@ Valid values are: `all`, `open`, `closed`, `hidden`, `none`.
 ==== chat_completion_unified
 Perform chat completion inference
 
-The chat completion inference API enables real-time responses for chat completion tasks by delivering answers incrementally, reducing response times during computation.
+The chat completion inference API enables real-time responses for chat completion tasks by delivering answers incrementally, reducing response times during computation.
 It only works with the `chat_completion` task type for `openai` and `elastic` inference services.
 
-IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face.
-For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.
-
 NOTE: The `chat_completion` task type is only available within the _stream API and only supports streaming.
 The Chat completion inference API and the Stream inference API differ in their response structure and capabilities.
 The Chat completion inference API provides more comprehensive customization options through more fields and function calling support.
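As context for the hunk above (an editorial illustration, not part of the diff): the chat completion body carries an OpenAI-style `messages` list. A minimal sketch of such a payload follows; the content values are invented and the endpoint wiring is omitted.

```typescript
// Minimal chat_completion message payload. Roles follow the
// OpenAI-style format; content strings are invented.
interface ChatMessage {
  role: 'system' | 'user' | 'assistant'
  content: string
}

const chatBody: { messages: ChatMessage[] } = {
  messages: [
    { role: 'system', content: 'Answer in one short sentence.' },
    { role: 'user', content: 'What is an inference endpoint?' }
  ]
}

console.log(chatBody.messages.map(m => m.role).join(','))
```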
@@ -8421,7 +8454,7 @@ client.inference.putGooglevertexai({ task_type, googlevertexai_inference_id, ser
 ==== Arguments
 
 * *Request (object):*
-** *`task_type` (Enum("rerank" | "text_embedding"))*: The type of the inference task that the model will perform.
+** *`task_type` (Enum("rerank" | "text_embedding" | "completion" | "chat_completion"))*: The type of the inference task that the model will perform.
 ** *`googlevertexai_inference_id` (string)*: The unique identifier of the inference endpoint.
 ** *`service` (Enum("googlevertexai"))*: The type of service supported for the specified task type. In this case, `googlevertexai`.
 ** *`service_settings` ({ location, model_id, project_id, rate_limit, service_account_json })*: Settings used to install the inference model. These settings are specific to the `googlevertexai` service.

src/api/api/indices.ts

Lines changed: 1 addition & 1 deletion
@@ -1854,7 +1854,7 @@ export default class Indices {
   }
 
   /**
-   * Update index settings. Changes dynamic index settings in real time. For data streams, index setting changes are applied to all backing indices by default. To revert a setting to the default value, use a null value. The list of per-index settings that can be updated dynamically on live indices can be found in index module documentation. To preserve existing settings from being updated, set the `preserve_existing` parameter to `true`. NOTE: You can only define new analyzers on closed indices. To add an analyzer, you must close the index, define the analyzer, and reopen the index. You cannot close the write index of a data stream. To update the analyzer for a data stream's write index and future backing indices, update the analyzer in the index template used by the stream. Then roll over the data stream to apply the new analyzer to the stream's write index and future backing indices. This affects searches and any new data added to the stream after the rollover. However, it does not affect the data stream's backing indices or their existing data. To change the analyzer for existing backing indices, you must create a new data stream and reindex your data into it.
+   * Update index settings. Changes dynamic index settings in real time. For data streams, index setting changes are applied to all backing indices by default. To revert a setting to the default value, use a null value. The list of per-index settings that can be updated dynamically on live indices can be found in index settings documentation. To preserve existing settings from being updated, set the `preserve_existing` parameter to `true`. There are multiple valid ways to represent index settings in the request body. You can specify only the setting, for example: ``` { "number_of_replicas": 1 } ``` Or you can use an `index` setting object: ``` { "index": { "number_of_replicas": 1 } } ``` Or you can use dot annotation: ``` { "index.number_of_replicas": 1 } ``` Or you can embed any of the aforementioned options in a `settings` object. For example: ``` { "settings": { "index": { "number_of_replicas": 1 } } } ``` NOTE: You can only define new analyzers on closed indices. To add an analyzer, you must close the index, define the analyzer, and reopen the index. You cannot close the write index of a data stream. To update the analyzer for a data stream's write index and future backing indices, update the analyzer in the index template used by the stream. Then roll over the data stream to apply the new analyzer to the stream's write index and future backing indices. This affects searches and any new data added to the stream after the rollover. However, it does not affect the data stream's backing indices or their existing data. To change the analyzer for existing backing indices, you must create a new data stream and reindex your data into it.
    * @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.19/indices-update-settings.html | Elasticsearch API documentation}
    */
   async putSettings (this: That, params: T.IndicesPutSettingsRequest | TB.IndicesPutSettingsRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.IndicesPutSettingsResponse>

src/api/api/inference.ts

Lines changed: 1 addition & 1 deletion
@@ -45,7 +45,7 @@ export default class Inference {
   }
 
   /**
-   * Perform chat completion inference The chat completion inference API enables real-time responses for chat completion tasks by delivering answers incrementally, reducing response times during computation. It only works with the `chat_completion` task type for `openai` and `elastic` inference services. IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs. NOTE: The `chat_completion` task type is only available within the _stream API and only supports streaming. The Chat completion inference API and the Stream inference API differ in their response structure and capabilities. The Chat completion inference API provides more comprehensive customization options through more fields and function calling support. If you use the `openai` service or the `elastic` service, use the Chat completion inference API.
+   * Perform chat completion inference The chat completion inference API enables real-time responses for chat completion tasks by delivering answers incrementally, reducing response times during computation. It only works with the `chat_completion` task type for `openai` and `elastic` inference services. NOTE: The `chat_completion` task type is only available within the _stream API and only supports streaming. The Chat completion inference API and the Stream inference API differ in their response structure and capabilities. The Chat completion inference API provides more comprehensive customization options through more fields and function calling support. If you use the `openai` service or the `elastic` service, use the Chat completion inference API.
    * @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.19/chat-completion-inference-api.html | Elasticsearch API documentation}
    */
   async chatCompletionUnified (this: That, params: T.InferenceChatCompletionUnifiedRequest | TB.InferenceChatCompletionUnifiedRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.InferenceChatCompletionUnifiedResponse>

src/api/types.ts

Lines changed: 68 additions & 6 deletions
@@ -2243,7 +2243,7 @@ export type EpochTime<Unit = unknown> = Unit
 
 export interface ErrorCauseKeys {
   type: string
-  reason?: string
+  reason?: string | null
   stack_trace?: string
   caused_by?: ErrorCause
   root_cause?: ErrorCause[]
@@ -2426,6 +2426,12 @@ export interface InlineGetKeys<TDocument = unknown> {
 export type InlineGet<TDocument = unknown> = InlineGetKeys<TDocument>
 & { [property: string]: any }
 
+export interface InnerRetriever {
+  retriever: RetrieverContainer
+  weight: float
+  normalizer: ScoreNormalizer
+}
+
 export type Ip = string
 
 export interface KnnQuery extends QueryDslQueryBase {
@@ -2471,6 +2477,11 @@ export type Level = 'cluster' | 'indices' | 'shards'
 
 export type LifecycleOperationMode = 'RUNNING' | 'STOPPING' | 'STOPPED'
 
+export interface LinearRetriever extends RetrieverBase {
+  retrievers?: InnerRetriever[]
+  rank_window_size: integer
+}
+
 export type MapboxVectorTiles = ArrayBuffer
 
 export interface MergesStats {
@@ -2559,6 +2570,13 @@ export type Password = string
 
 export type Percentage = string | float
 
+export interface PinnedRetriever extends RetrieverBase {
+  retriever: RetrieverContainer
+  ids?: string[]
+  docs?: SpecifiedDocument[]
+  rank_window_size: integer
+}
+
 export type PipelineName = string
 
 export interface PluginStats {
@@ -2644,6 +2662,11 @@ export interface RescoreVector {
   oversample: float
 }
 
+export interface RescorerRetriever extends RetrieverBase {
+  retriever: RetrieverContainer
+  rescore: SearchRescore | SearchRescore[]
+}
+
 export type Result = 'created' | 'updated' | 'deleted' | 'not_found' | 'noop'
 
 export interface Retries {
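The `RescorerRetriever` added in the hunk above wraps another retriever and applies a rescore phase to its top hits. An illustrative body follows (an editorial sketch, not part of the diff; the field names and query values are invented, and plain objects stand in for the generated types):

```typescript
// A rescorer retriever: rescore the top 50 hits of the wrapped
// standard retriever with a phrase query. All values are invented.
const rescorerBody = {
  retriever: {
    rescorer: {
      retriever: {
        standard: { query: { match: { body: 'search relevance' } } }
      },
      rescore: {
        window_size: 50,
        query: {
          rescore_query: { match_phrase: { body: 'search relevance' } },
          query_weight: 0.7,
          rescore_query_weight: 1.2
        }
      }
    }
  }
}

console.log(rescorerBody.retriever.rescorer.rescore.window_size)
```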
@@ -2654,6 +2677,7 @@ export interface Retries {
 export interface RetrieverBase {
   filter?: QueryDslQueryContainer | QueryDslQueryContainer[]
   min_score?: float
+  _name?: string
 }
 
 export interface RetrieverContainer {
@@ -2662,6 +2686,9 @@ export interface RetrieverContainer {
   rrf?: RRFRetriever
   text_similarity_reranker?: TextSimilarityReranker
   rule?: RuleRetriever
+  rescorer?: RescorerRetriever
+  linear?: LinearRetriever
+  pinned?: PinnedRetriever
 }
 
 export type Routing = string
@@ -2672,14 +2699,16 @@ export interface RrfRank {
 }
 
 export interface RuleRetriever extends RetrieverBase {
-  ruleset_ids: Id[]
+  ruleset_ids: Id | Id[]
   match_criteria: any
   retriever: RetrieverContainer
   rank_window_size?: integer
 }
 
 export type ScalarValue = long | double | string | boolean | null
 
+export type ScoreNormalizer = 'none' | 'minmax' | 'l2_norm'
+
 export interface ScoreSort {
   order?: SortOrder
 }
@@ -2828,6 +2857,11 @@ export type SortOrder = 'asc' | 'desc'
 
 export type SortResults = FieldValue[]
 
+export interface SpecifiedDocument {
+  index?: IndexName
+  id: Id
+}
+
 export interface StandardRetriever extends RetrieverBase {
   query?: QueryDslQueryContainer
   search_after?: SortResults
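The `SpecifiedDocument` shape added above is used by the new pinned retriever: documents listed via `ids` or `docs` rank above the organic results of the wrapped retriever. An illustrative body (an editorial sketch; the index name and ids are invented, and the local interface is a simplified stand-in for the generated type):

```typescript
// Simplified stand-in for the SpecifiedDocument type added in this commit.
interface SpecifiedDocument {
  index?: string
  id: string
}

// A pinned retriever: 'featured-1' is pinned above the organic
// results of the wrapped standard retriever. All values are invented.
const pinnedBody = {
  retriever: {
    pinned: {
      retriever: { standard: { query: { match: { title: 'laptop' } } } },
      docs: [{ index: 'products', id: 'featured-1' }] as SpecifiedDocument[],
      rank_window_size: 100
    }
  }
}

console.log(pinnedBody.retriever.pinned.docs[0].id)
```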
@@ -6108,7 +6142,7 @@ export type QueryDslGeoDistanceQuery = QueryDslGeoDistanceQueryKeys
 export type QueryDslGeoExecution = 'memory' | 'indexed'
 
 export interface QueryDslGeoGridQuery extends QueryDslQueryBase {
-  geogrid?: GeoTile
+  geotile?: GeoTile
   geohash?: GeoHash
   geohex?: GeoHexCell
 }
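With the key corrected from `geogrid` to `geotile`, the type now names the tile variant of the `geo_grid` query the way the server expects, alongside `geohash` and `geohex`. An illustrative query body (an editorial sketch; the field name `location` and the cell addresses are invented):

```typescript
// geo_grid queries address a single grid cell; the corrected `geotile`
// key takes a "zoom/x/y" tile address. Values are invented.
const geotileQuery = {
  geo_grid: { location: { geotile: '6/32/22' } }
}

// The sibling geohash variant, for comparison.
const geohashQuery = {
  geo_grid: { location: { geohash: 'u0' } }
}

console.log(geotileQuery.geo_grid.location.geotile, geohashQuery.geo_grid.location.geohash)
```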
@@ -6178,6 +6212,8 @@ export interface QueryDslIntervalsContainer {
   fuzzy?: QueryDslIntervalsFuzzy
   match?: QueryDslIntervalsMatch
   prefix?: QueryDslIntervalsPrefix
+  range?: QueryDslIntervalsRange
+  regexp?: QueryDslIntervalsRegexp
   wildcard?: QueryDslIntervalsWildcard
 }
 
@@ -6223,9 +6259,26 @@ export interface QueryDslIntervalsQuery extends QueryDslQueryBase {
   fuzzy?: QueryDslIntervalsFuzzy
   match?: QueryDslIntervalsMatch
   prefix?: QueryDslIntervalsPrefix
+  range?: QueryDslIntervalsRange
+  regexp?: QueryDslIntervalsRegexp
   wildcard?: QueryDslIntervalsWildcard
 }
 
+export interface QueryDslIntervalsRange {
+  analyzer?: string
+  gte?: string
+  gt?: string
+  lte?: string
+  lt?: string
+  use_field?: Field
+}
+
+export interface QueryDslIntervalsRegexp {
+  analyzer?: string
+  pattern: string
+  use_field?: Field
+}
+
 export interface QueryDslIntervalsWildcard {
   analyzer?: string
   pattern: string
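The two new intervals rules, `range` and `regexp`, can sit alongside the existing rules inside an `all_of` combination. An illustrative query body (an editorial sketch; the field name and values are invented):

```typescript
// An intervals query combining a match rule with the new range and
// regexp rules under all_of. Field name and values are invented.
const intervalsBody = {
  query: {
    intervals: {
      title: {
        all_of: {
          intervals: [
            { match: { query: 'error report' } },
            { range: { gte: '2020', lte: '2029' } },
            { regexp: { pattern: 'warn(ing)?' } }
          ]
        }
      }
    }
  }
}

console.log(intervalsBody.query.intervals.title.all_of.intervals.length)
```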
@@ -6543,7 +6596,8 @@ export interface QueryDslRegexpQuery extends QueryDslQueryBase {
 
 export interface QueryDslRuleQuery extends QueryDslQueryBase {
   organic: QueryDslQueryContainer
-  ruleset_ids: Id[]
+  ruleset_ids?: Id | Id[]
+  ruleset_id?: string
   match_criteria: any
 }
 
@@ -13208,7 +13262,7 @@ export interface InferenceGoogleVertexAITaskSettings {
   top_n?: integer
 }
 
-export type InferenceGoogleVertexAITaskType = 'rerank' | 'text_embedding'
+export type InferenceGoogleVertexAITaskType = 'rerank' | 'text_embedding' | 'completion' | 'chat_completion'
 
 export interface InferenceHuggingFaceServiceSettings {
   api_key: string
@@ -19900,6 +19954,14 @@ export interface SlmSnapshotLifecycle {
   stats: SlmStatistics
 }
 
+export interface SlmSnapshotPolicyStats {
+  policy: string
+  snapshots_taken: long
+  snapshots_failed: long
+  snapshots_deleted: long
+  snapshot_deletion_failures: long
+}
+
 export interface SlmStatistics {
   retention_deletion_time?: Duration
   retention_deletion_time_millis?: DurationValue<UnitMillis>
@@ -19965,7 +20027,7 @@ export interface SlmGetStatsResponse {
   total_snapshot_deletion_failures: long
   total_snapshots_failed: long
   total_snapshots_taken: long
-  policy_stats: string[]
+  policy_stats: SlmSnapshotPolicyStats[]
 }
 
 export interface SlmGetStatusRequest extends RequestBase {
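With `policy_stats` corrected from `string[]` to `SlmSnapshotPolicyStats[]`, per-policy SLM counters can be read directly from the get-stats response. An illustrative sketch (the local interface mirrors the one added in this commit with `number` in place of `long`; the sample values are invented):

```typescript
// Simplified stand-in for SlmSnapshotPolicyStats as added in this commit.
interface SlmSnapshotPolicyStats {
  policy: string
  snapshots_taken: number
  snapshots_failed: number
  snapshots_deleted: number
  snapshot_deletion_failures: number
}

// Hypothetical response fragment with one policy's counters.
const policyStats: SlmSnapshotPolicyStats[] = [
  {
    policy: 'nightly',
    snapshots_taken: 30,
    snapshots_failed: 1,
    snapshots_deleted: 23,
    snapshot_deletion_failures: 0
  }
]

// Structured fields replace opaque strings, so aggregation is trivial.
const failed = policyStats.reduce((n, p) => n + p.snapshots_failed, 0)
console.log(failed)
```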
