Dense vector field type
Serverless Stack
The dense_vector field type stores dense vectors of numeric values. Dense vector fields are primarily used for k-nearest neighbor (kNN) search.
The dense_vector type does not support aggregations or sorting.
You add a dense_vector field as an array of numeric values, encoded according to element_type (float by default):
PUT my-index
{
"mappings": {
"properties": {
"my_vector": {
"type": "dense_vector",
"dims": 3
},
"my_text" : {
"type" : "keyword"
}
}
}
}
PUT my-index/_doc/1
{
"my_text" : "text1",
"my_vector" : [0.5, 10, 6]
}
PUT my-index/_doc/2
{
"my_text" : "text2",
"my_vector" : [-0.5, 10, 10]
}
Unlike most other data types, dense vectors are always single-valued. It is not possible to store multiple values in one dense_vector field.
A k-nearest neighbor (kNN) search finds the k nearest vectors to a query vector, as measured by a similarity metric.
Dense vector fields can be used to rank documents in script_score queries. This lets you perform a brute-force kNN search by scanning all documents and ranking them by similarity.
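For example, a brute-force search over the index above can be sketched with the cosineSimilarity function in a script_score query (the query vector is illustrative; adding 1.0 keeps scores positive):
POST my-index/_search
{
  "query": {
    "script_score": {
      "query": { "match_all": {} },
      "script": {
        "source": "cosineSimilarity(params.query_vector, 'my_vector') + 1.0",
        "params": { "query_vector": [0.5, 10, 6] }
      }
    }
  }
}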
In many cases, a brute-force kNN search is not efficient enough. For this reason, the dense_vector type supports indexing vectors into a specialized data structure that enables fast kNN retrieval through the knn option in the search API.
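For instance, an approximate kNN search against the same index might look like the following sketch (the k and num_candidates values are illustrative):
POST my-index/_search
{
  "knn": {
    "field": "my_vector",
    "query_vector": [0.5, 10, 6],
    "k": 2,
    "num_candidates": 10
  }
}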
Unmapped array fields of float elements with size between 128 and 4096 are dynamically mapped as dense_vector with a default similarity of cosine. You can override the default similarity by explicitly mapping the field as dense_vector with the desired similarity.
Indexing is enabled by default for dense vector fields. Vectors with 384 or more dimensions are indexed as bbq_hnsw; vectors with fewer dimensions are indexed as int8_hnsw.
Stack
In Elastic Stack 9.0, dense vector fields are always indexed as int8_hnsw.
When indexing is enabled, you can define the vector similarity to use in kNN search:
PUT my-index-2
{
"mappings": {
"properties": {
"my_vector": {
"type": "dense_vector",
"dims": 3,
"similarity": "dot_product"
}
}
}
}
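Because dot_product requires unit-length vectors, documents indexed into this mapping must be normalized first. A minimal sketch (0.6² + 0.8² + 0² = 1, so the vector is unit length):
PUT my-index-2/_doc/1
{
  "my_vector": [0.6, 0.8, 0.0]
}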
Indexing vectors for approximate kNN search is an expensive process. It can take substantial time to ingest documents that contain vector fields with index enabled. See k-nearest neighbor (kNN) search to learn more about the memory requirements.
You can disable indexing by setting the index parameter to false:
PUT my-index-2
{
"mappings": {
"properties": {
"my_vector": {
"type": "dense_vector",
"dims": 3,
"index": false
}
}
}
}
Elasticsearch uses the HNSW algorithm to support efficient kNN search. Like most kNN algorithms, HNSW is an approximate method that sacrifices result accuracy for improved speed.
Serverless Stack
By default, dense_vector fields are not included in _source in responses from the _search, _msearch, _get, and _mget APIs.
This helps reduce response size and improve performance, especially in scenarios where vectors are used solely for similarity scoring and not required in the output.
To retrieve vector values explicitly, you can use:
- The fields option to request specific vector fields directly:
POST my-index-2/_search
{
  "fields": ["my_vector"]
}
- The _source.exclude_vectors flag to re-enable vector inclusion in _source responses:
POST my-index-2/_search
{
  "_source": {
    "exclude_vectors": false
  }
}
For more context about the decision to exclude vectors from _source by default, read the blog post.
By default, dense_vector fields are not stored in _source on disk. This is also controlled by the index setting index.mapping.exclude_source_vectors.
This setting is enabled by default for newly created indices and can only be set at index creation time.
When enabled:
- dense_vector fields are removed from _source, and the rest of the _source is stored as usual.
- If a request includes _source and vector values are needed (e.g., during recovery or reindex), the vectors are rehydrated from their internal format.
This setting is compatible with synthetic _source, where the entire _source document is reconstructed from columnar storage. In full synthetic mode, no _source is stored on disk, and all fields — including vectors — are rebuilt when needed.
When vector values are rehydrated (e.g., for reindex, recovery, or explicit _source requests), they are restored from their internal format. Internally, vectors are stored at float precision, so if they were originally indexed as higher-precision types (e.g., double or long), the rehydrated values will have reduced precision. This lossy representation is intended to save space while preserving search quality.
If you want to preserve the original vector values exactly as they were provided, you can re-enable vector storage in _source:
PUT my-index-include-vectors
{
"settings": {
"index.mapping.exclude_source_vectors": false
},
"mappings": {
"properties": {
"my_vector": {
"type": "dense_vector"
}
}
}
}
When this setting is disabled:
- dense_vector fields are stored as part of the _source, exactly as originally provided.
- The index will store both the original _source value and the internal representation used for vector search, resulting in increased storage usage.
- Vectors are once again returned in _source by default in all relevant APIs, with no need to use exclude_vectors or fields.
This configuration is appropriate when full source fidelity is required, such as for auditing or round-tripping exact input values.
The dense_vector type supports quantization to reduce the memory footprint required when searching float vectors. The following three quantization strategies are supported:
- int8 - Quantizes each dimension of the vector to 1-byte integers. This reduces the memory footprint by 75% (or 4x) at the cost of some accuracy.
- int4 - Quantizes each dimension of the vector to half-byte integers. This reduces the memory footprint by 87% (or 8x) at the cost of accuracy.
- bbq - Better binary quantization, which reduces each dimension to a single bit of precision. This reduces the memory footprint by 96% (or 32x) at a larger cost of accuracy. Generally, oversampling during query time and reranking can help mitigate the accuracy loss.
When using a quantized format, you may want to oversample and rescore the results to improve accuracy. See oversampling and rescoring for more information.
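For example, assuming a quantized index named my-quantized-index (a hypothetical name), a knn query can request oversampling and rescoring at search time:
POST my-quantized-index/_search
{
  "knn": {
    "field": "my_vector",
    "query_vector": [0.1, -0.2, 0.3],
    "k": 10,
    "num_candidates": 100,
    "rescore_vector": { "oversample": 2.0 }
  }
}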
To use a quantized index, you can set your index type to int8_hnsw, int4_hnsw, or bbq_hnsw. When indexing float vectors, the current default index type is bbq_hnsw for vectors with 384 or more dimensions; otherwise it's int8_hnsw.
In Elastic Stack 9.0, dense vector fields are always indexed as int8_hnsw.
Quantized vectors can use oversampling and rescoring to improve accuracy on approximate kNN search results.
Quantization will continue to keep the raw float vector values on disk for reranking, reindexing, and quantization improvements over the lifetime of the data. This means disk usage will increase by ~25% for int8, ~12.5% for int4, and ~3.1% for bbq due to the overhead of storing both the quantized and raw vectors. For example, a 1024-dimension float vector occupies 4096 bytes; an int8 copy adds 1024 bytes (25%), an int4 copy adds 512 bytes (12.5%), and a bbq copy adds 128 bytes (~3.1%).
int4 quantization requires an even number of vector dimensions.
bbq quantization only supports vector dimensions of 64 or greater.
Here is an example of how to create a byte-quantized index:
PUT my-byte-quantized-index
{
"mappings": {
"properties": {
"my_vector": {
"type": "dense_vector",
"dims": 3,
"index": true,
"index_options": {
"type": "int8_hnsw"
}
}
}
}
}
Here is an example of how to create a half-byte-quantized index:
PUT my-half-byte-quantized-index
{
"mappings": {
"properties": {
"my_vector": {
"type": "dense_vector",
"dims": 4,
"index": true,
"index_options": {
"type": "int4_hnsw"
}
}
}
}
}
Here is an example of how to create a binary quantized index:
PUT my-binary-quantized-index
{
"mappings": {
"properties": {
"my_vector": {
"type": "dense_vector",
"dims": 64,
"index": true,
"index_options": {
"type": "bbq_hnsw"
}
}
}
}
}
The following mapping parameters are accepted:
element_type - (Optional, string) The data type used to encode vectors. The supported data types are float (default), byte, and bit.
Valid values for element_type:
- float - Indexes a 4-byte floating-point value per dimension. This is the default value.
- byte - Indexes a 1-byte integer value per dimension.
- bit - Indexes a single bit per dimension. Useful for very high-dimensional vectors or models that specifically support bit vectors. NOTE: when using bit, the number of dimensions must be a multiple of 8 and must represent the number of bits.
dims - (Optional, integer) Number of vector dimensions. Can't exceed 4096. If dims is not specified, it will be set to the length of the first vector added to the field.
index - (Optional, Boolean) If true, you can search this field using the knn query or knn in _search. Defaults to true.
similarity - (Optional¹, string) The vector similarity metric to use in kNN search. Documents are ranked by their vector field's similarity to the query vector. The _score of each document will be derived from the similarity, in a way that ensures scores are positive and that a larger score corresponds to a higher ranking. Defaults to l2_norm when element_type is bit; otherwise defaults to cosine.
¹ This parameter can only be specified when index is true.
Note: bit vectors only support l2_norm as their similarity metric.
Valid values for similarity:
l2_norm - Computes similarity based on the L2 distance (also known as Euclidean distance) between the vectors. The document _score is computed as 1 / (1 + l2_norm(query, vector)^2). For bit vectors, the hamming distance between the vectors is used instead of l2_norm, and the _score transformation is (numBits - hamming(a, b)) / numBits.
dot_product - Computes the dot product of two unit vectors. This option provides an optimized way to perform cosine similarity. The constraints and computed score are defined by element_type. When element_type is float, all vectors must be unit length, including both document and query vectors. The document _score is computed as (1 + dot_product(query, vector)) / 2. When element_type is byte, all vectors must have the same length, including both document and query vectors, or results will be inaccurate. The document _score is computed as 0.5 + (dot_product(query, vector) / (32768 * dims)), where dims is the number of dimensions per vector.
cosine - Computes the cosine similarity. During indexing, Elasticsearch automatically normalizes vectors with cosine similarity to unit length. This allows dot_product to be used internally for computing similarity, which is more efficient. Original un-normalized vectors can still be accessed through scripts. The document _score is computed as (1 + cosine(query, vector)) / 2. The cosine similarity does not allow vectors with zero magnitude, since cosine is not defined in this case.
max_inner_product - Computes the maximum inner product of two vectors. This is similar to dot_product, but doesn't require vectors to be normalized. This means that each vector's magnitude can significantly affect the score. The document _score is adjusted to prevent negative values. For max_inner_product values < 0, the _score is 1 / (1 + -1 * max_inner_product(query, vector)). For non-negative max_inner_product results, the _score is calculated as max_inner_product(query, vector) + 1.
Although they are conceptually related, the similarity parameter is different from text field similarity and accepts a distinct set of options.
index_options - (Optional², object) An optional section that configures the kNN indexing algorithm. The HNSW algorithm has two internal parameters that influence how the data structure is built. These can be adjusted to improve the accuracy of results, at the expense of slower indexing speed. A combined example follows this parameter list.
² This parameter can only be specified when index is true.
Properties of index_options:
type - (Required, string) The type of kNN algorithm to use. Can be any of:
- hnsw - This utilizes the HNSW algorithm for scalable approximate kNN search. This supports all element_type values.
- int8_hnsw - The default index type for float vectors with fewer than 384 dimensions (in Elastic Stack 9.0, the default for all float vectors). This utilizes the HNSW algorithm in addition to automatic scalar quantization for scalable approximate kNN search with element_type of float. This can reduce the memory footprint by 4x at the cost of some accuracy. See Automatically quantize vectors for kNN search.
- int4_hnsw - This utilizes the HNSW algorithm in addition to automatic half-byte scalar quantization for scalable approximate kNN search with element_type of float. This can reduce the memory footprint by 8x at the cost of some accuracy. See Automatically quantize vectors for kNN search.
- bbq_hnsw - This utilizes the HNSW algorithm in addition to automatic binary quantization for scalable approximate kNN search with element_type of float. This can reduce the memory footprint by 32x at the cost of accuracy. See Automatically quantize vectors for kNN search. Stack: bbq_hnsw is the default index type for float vectors with 384 or more dimensions.
- flat - This utilizes a brute-force search algorithm for exact kNN search. This supports all element_type values.
- int8_flat - This utilizes a brute-force search algorithm in addition to automatic scalar quantization. Only supports element_type of float.
- int4_flat - This utilizes a brute-force search algorithm in addition to automatic half-byte scalar quantization. Only supports element_type of float.
- bbq_flat - This utilizes a brute-force search algorithm in addition to automatic binary quantization. Only supports element_type of float.
- bbq_disk (Stack) - This utilizes a variant of the k-means clustering algorithm in addition to automatic binary quantization, partitioning vectors and searching subspaces rather than an entire graph structure as with HNSW. Only supports element_type of float. This combines the benefits of BBQ quantization with partitioning to further reduce the required memory overhead compared with HNSW, and it can run effectively at the smallest possible RAM and heap sizes, where HNSW would otherwise cause swapping and grind to a halt. DiskBBQ scales largely linearly with total RAM, and search performance improves at scale because only a subset of the total vector space is loaded.
m - (Optional, integer) The number of neighbors each node will be connected to in the HNSW graph. Defaults to 16. Only applicable to hnsw, int8_hnsw, int4_hnsw, and bbq_hnsw index types.
ef_construction - (Optional, integer) The number of candidates to track while assembling the list of nearest neighbors for each new node. Defaults to 100. Only applicable to hnsw, int8_hnsw, int4_hnsw, and bbq_hnsw index types.
confidence_interval - (Optional, float) Only applicable to int8_hnsw, int4_hnsw, int8_flat, and int4_flat index types. The confidence interval to use when quantizing the vectors. Can be any value between and including 0.90 and 1.0, or exactly 0. When the value is 0, dynamic quantiles are calculated for optimized quantization. When between 0.90 and 1.0, this value restricts the values used when calculating the quantization thresholds. For example, a value of 0.95 will only use the middle 95% of the values when calculating the quantization thresholds (e.g., the highest and lowest 2.5% of values will be ignored). Defaults to 1/(dims + 1) for int8 quantized vectors and 0 for int4, for dynamic quantile calculation.
default_visit_percentage (Stack) - (Optional, integer) Only applicable to bbq_disk. Must be between 0 and 100. 0 will default to using num_candidates for calculating the percent visited. Increasing default_visit_percentage tends to improve the accuracy of the final results. Defaults to ~1% per shard for every 1 million vectors.
cluster_size (Stack) - (Optional, integer) Only applicable to bbq_disk. The number of vectors per cluster. Smaller cluster sizes increase accuracy at the cost of performance. Defaults to 384. Must be a value between 64 and 65536.
rescore_vector (Stack) - (Optional, object) An optional section that configures automatic vector rescoring on knn queries for the given field. Only applicable to quantized index types.
Properties of rescore_vector:
oversample - (Required, float) The amount to oversample the search results by. This value should be one of the following:
- Greater than 1.0 and less than 10.0
- Exactly 0, to indicate no oversampling and rescoring should occur (Stack)
The higher the value, the more vectors will be gathered and rescored with the raw values per shard. If a knn query specifies a rescore_vector parameter, the query's rescore_vector parameter will be used instead. See oversampling and rescoring quantized vectors for details.
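Putting several of these parameters together, the following sketch tunes the HNSW graph and enables automatic rescoring (the index name and all values are illustrative):
PUT my-tuned-index
{
  "mappings": {
    "properties": {
      "my_vector": {
        "type": "dense_vector",
        "dims": 384,
        "element_type": "float",
        "index": true,
        "similarity": "cosine",
        "index_options": {
          "type": "int8_hnsw",
          "m": 32,
          "ef_construction": 200,
          "confidence_interval": 0.95,
          "rescore_vector": { "oversample": 2.0 }
        }
      }
    }
  }
}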
dense_vector fields support synthetic _source.
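As a sketch, synthetic _source can be enabled at index creation time, assuming the index.mapping.source.mode setting is available in your deployment (the index name is hypothetical):
PUT my-synthetic-index
{
  "settings": {
    "index.mapping.source.mode": "synthetic"
  },
  "mappings": {
    "properties": {
      "my_vector": { "type": "dense_vector", "dims": 3 }
    }
  }
}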
When using element_type: bit, this will treat all vectors as bit vectors. Bit vectors utilize only a single bit per dimension and are internally encoded as bytes. This can be useful for very high-dimensional vectors or models.
When using bit, the number of dimensions must be a multiple of 8 and must represent the number of bits. Additionally, with bit vectors, all similarity options are effectively scored the same way, using the hamming distance.
Let’s compare two byte[] arrays, each representing 40 individual bits.
[-127, 0, 1, 42, 127] in bits 1000000100000000000000010010101001111111
[127, -127, 0, 1, 42] in bits 0111111110000001000000000000000100101010
When comparing these two bit vectors, we first take the hamming distance.
xor result:
1000000100000000000000010010101001111111
^
0111111110000001000000000000000100101010
=
1111111010000001000000010010101101010101
Then, we gather the count of 1 bits in the xor result: 18. To scale for scoring, we subtract from the total number of bits and divide by the total number of bits: (40 - 18) / 40 = 0.55. This would be the _score between these two vectors.
Here is an example of indexing and searching bit vectors:
PUT my-bit-vectors
{
"mappings": {
"properties": {
"my_vector": {
"type": "dense_vector",
"dims": 40,
"element_type": "bit"
}
}
}
}
- The number of dimensions that represents the number of bits
POST /my-bit-vectors/_bulk?refresh
{"index": {"_id" : "1"}}
{"my_vector": [127, -127, 0, 1, 42]}
{"index": {"_id" : "2"}}
{"my_vector": "8100012a7f"}
- 5 bytes representing the 40-bit vector
- A hexadecimal string representing the 40-bit vector
Then, when searching, you can use the knn query to search for similar bit vectors:
POST /my-bit-vectors/_search?filter_path=hits.hits
{
"query": {
"knn": {
"query_vector": [127, -127, 0, 1, 42],
"field": "my_vector"
}
}
}
{
"hits": {
"hits": [
{
"_index": "my-bit-vectors",
"_id": "1",
"_score": 1,
"_source": {
"my_vector": [
127,
-127,
0,
1,
42
]
}
},
{
"_index": "my-bit-vectors",
"_id": "2",
"_score": 0.55,
"_source": {
"my_vector": "8100012a7f"
}
}
]
}
}
To better accommodate scaling and performance needs, updating the type setting in index_options is possible with the Update Mapping API, according to the following graph (jumps allowed):
flat --> int8_flat --> int4_flat --> bbq_flat --> hnsw --> int8_hnsw --> int4_hnsw --> bbq_hnsw
In Elastic Stack 9.0: flat --> int8_flat --> int4_flat --> hnsw --> int8_hnsw --> int4_hnsw
For updating all HNSW types (hnsw, int8_hnsw, int4_hnsw, bbq_hnsw) the number of connections m must either stay the same or increase. For the scalar quantized formats int8_flat, int4_flat, int8_hnsw and int4_hnsw the confidence_interval must always be consistent (once defined, it cannot change).
Updating type in index_options will fail in all other scenarios.
Switching types won't re-index vectors that have already been indexed (they keep using their original type); vectors indexed after the change will use the new type instead.
For example, it’s possible to define a dense vector field that utilizes the flat type (raw float32 arrays) for a first batch of data to be indexed.
PUT my-index-000001
{
"mappings": {
"properties": {
"text_embedding": {
"type": "dense_vector",
"dims": 384,
"index_options": {
"type": "flat"
}
}
}
}
}
Changing the type to int4_hnsw makes sure vectors indexed after the change will use an int4 scalar quantized representation and HNSW (e.g., for KNN queries). That includes new segments created by merging previously created segments.
PUT /my-index-000001/_mapping
{
"properties": {
"text_embedding": {
"type": "dense_vector",
"dims": 384,
"index_options": {
"type": "int4_hnsw"
}
}
}
}
Vectors indexed before this change will keep using the flat type (raw float32 representation and brute force search for KNN queries).
To have all vectors updated to the new type, use either reindexing or force merging.
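For example, a force merge rewrites existing segments, so the merged segments index their vectors with the new type (use with care on indices that are still receiving writes):
POST /my-index-000001/_forcemerge?max_num_segments=1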
For debugging purposes, it’s possible to inspect how many segments (and docs) exist for each type with the Index Segments API.
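For instance, the following returns per-segment details for the index, including document counts per segment:
GET /my-index-000001/_segments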