ES|QL COMPLETION command
Applies to: Serverless, Stack
The COMPLETION command allows you to send prompts and context to a Large Language Model (LLM) directly within your ES|QL queries to perform text generation tasks.
Every row processed by the COMPLETION command generates a separate API call to the LLM endpoint.
Test with small datasets before running on production data or in automated workflows, to avoid unexpected costs.
Best practices:
- Start with dry runs: Validate your query logic and row counts by running without `COMPLETION` initially. Use `| STATS count = COUNT(*)` to check result size.
- Filter first: Use `WHERE` clauses to limit rows before applying `COMPLETION`.
- Test with LIMIT: Always start with a low `LIMIT` and gradually increase.
- Monitor usage: Track your LLM API consumption and costs.
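For instance, the dry-run and filter-first practices above can be combined into a single pre-flight query; the index and field names (`support_tickets`, `status`) are hypothetical:

```esql
// Dry run: count the rows that would each trigger an LLM call,
// before adding the COMPLETION step.
FROM support_tickets
| WHERE status == "open"     // filter first to limit rows
| STATS count = COUNT(*)     // check result size before spending tokens
```

If the count looks reasonable, replace the `STATS` line with a low `LIMIT` and the `COMPLETION` step, then scale up gradually.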
Syntax

```esql
COMPLETION [column =] prompt WITH { "inference_id" : "my_inference_endpoint" }
```

```esql
COMPLETION [column =] prompt WITH my_inference_endpoint
```

Parameters
- column
- (Optional) The name of the output column containing the LLM's response.
If not specified, the results will be stored in a column named `completion`. If the specified column already exists, it will be overwritten with the new results.
- prompt
- The input text or expression used to prompt the LLM. This can be a string literal or a reference to a column containing text.
- my_inference_endpoint
- The ID of the inference endpoint to use for the task.
The inference endpoint must be configured with the `completion` task type.
Description
The COMPLETION command provides a general-purpose interface for
text generation tasks using a Large Language Model (LLM) in ES|QL.
COMPLETION supports a wide range of text generation tasks. Depending on your
prompt and the model you use, you can perform arbitrary text generation tasks
including:
- Question answering
- Summarization
- Translation
- Content rewriting
- Creative generation
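For example, switching between these tasks only requires changing the prompt; the index and column names below (`reviews`, `review_text`) are hypothetical:

```esql
// Translation task: the prompt instructs the model, row by row.
FROM reviews
| LIMIT 5
| EVAL prompt = CONCAT("Translate this review into English: ", review_text)
| COMPLETION translation = prompt WITH { "inference_id" : "my_inference_endpoint" }
| KEEP review_text, translation
```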
Requirements
To use this command, you must deploy your LLM in Elasticsearch as an inference endpoint with the task type `completion`.
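As a sketch, such an endpoint can be created through the inference API; the service and model shown (`openai`, `gpt-4o`) and the API key placeholder are illustrative assumptions, not requirements:

```console
PUT _inference/completion/my_inference_endpoint
{
  "service": "openai",
  "service_settings": {
    "api_key": "<api_key>",
    "model_id": "gpt-4o"
  }
}
```

The endpoint ID chosen here (`my_inference_endpoint`) is what the `WITH` clause of `COMPLETION` refers to.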
COMPLETION commands may time out when processing large datasets or complex prompts. The default timeout is 10 minutes, but you can increase this limit if necessary.
How you increase the timeout depends on your deployment type:
- Elastic Cloud Hosted: Adjust Elasticsearch settings in the Elastic Cloud Console, or adjust the `search.default_search_timeout` cluster setting using Kibana's Advanced settings.
- Self-managed: Configure at the cluster level by setting `search.default_search_timeout` in `elasticsearch.yml` or updating it via the Cluster Settings API. You can also adjust the `search:timeout` setting using Kibana's Advanced settings, or add timeout parameters to individual queries.
- Elastic Cloud Serverless: Requires a manual override from Elastic Support because you cannot modify timeout settings directly.
If you don't want to increase the timeout limit, try the following:
- Reduce data volume with `LIMIT` or more selective filters before the `COMPLETION` command
- Split complex operations into multiple simpler queries
- Configure your HTTP client's response timeout (Refer to HTTP client configuration)
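As an illustration of the last point, the client-side response timeout can be raised when calling the ES|QL query API directly; the URL, credentials, and the 900-second value below are assumptions for the sketch:

```shell
# Raise the HTTP client's response timeout for a long-running COMPLETION query.
# Host, credentials, and the 900-second limit are illustrative.
curl -s --max-time 900 \
  -H "Content-Type: application/json" \
  -u elastic:changeme \
  "http://localhost:9200/_query" \
  -d '{"query": "FROM movies | LIMIT 10 | COMPLETION summary = title WITH { \"inference_id\" : \"my_inference_endpoint\" } | KEEP summary"}'
```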
Examples
Use the default column name (results stored in the `completion` column):

```esql
ROW question = "What is Elasticsearch?"
| COMPLETION question WITH { "inference_id" : "my_inference_endpoint" }
| KEEP question, completion
```

| question:keyword | completion:keyword |
|---|---|
| What is Elasticsearch? | A distributed search and analytics engine |
Specify the output column (results stored in the `answer` column):

```esql
ROW question = "What is Elasticsearch?"
| COMPLETION answer = question WITH { "inference_id" : "my_inference_endpoint" }
| KEEP question, answer
```

| question:keyword | answer:keyword |
|---|---|
| What is Elasticsearch? | A distributed search and analytics engine |
Summarize the top 10 highest-rated movies using a prompt:
```esql
FROM movies
| SORT rating DESC
| LIMIT 10
| EVAL prompt = CONCAT(
    "Summarize this movie using the following information: \n",
    "Title: ", title, "\n",
    "Synopsis: ", synopsis, "\n",
    "Actors: ", MV_CONCAT(actors, ", "), "\n"
  )
| COMPLETION summary = prompt WITH { "inference_id" : "my_inference_endpoint" }
| KEEP title, summary, rating
```

| title:keyword | summary:keyword | rating:double |
|---|---|---|
| The Shawshank Redemption | A tale of hope and redemption in prison. | 9.3 | 
| The Godfather | A mafia family's rise and fall. | 9.2 | 
| The Dark Knight | Batman battles the Joker in Gotham. | 9.0 | 
| Pulp Fiction | Interconnected crime stories with dark humor. | 8.9 | 
| Fight Club | A man starts an underground fight club. | 8.8 | 
| Inception | A thief steals secrets through dreams. | 8.8 | 
| The Matrix | A hacker discovers reality is a simulation. | 8.7 | 
| Parasite | Class conflict between two families. | 8.6 | 
| Interstellar | A team explores space to save humanity. | 8.6 | 
| The Prestige | Rival magicians engage in dangerous competition. | 8.5 |