langchain.vectorstores.zilliz.Zilliz

class langchain.vectorstores.zilliz.Zilliz(embedding_function: Embeddings, collection_name: str = 'LangChainCollection', connection_args: Optional[dict[str, Any]] = None, consistency_level: str = 'Session', index_params: Optional[dict] = None, search_params: Optional[dict] = None, drop_old: Optional[bool] = False, *, primary_field: str = 'pk', text_field: str = 'text', vector_field: str = 'vector')[source]

Zilliz vector store.

You need to have pymilvus installed and a running Zilliz database.

See the following documentation for how to run a Zilliz instance: https://docs.zilliz.com/docs/create-cluster

If using the L2/IP metric, it is highly suggested to normalize your data.
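As a minimal sketch of what that normalization means (plain Python, not part of the library), scaling each embedding to unit length makes L2 distance and inner product rank results consistently:

```python
import math

def l2_normalize(vector):
    """Scale a vector to unit length so L2/IP distances behave consistently."""
    norm = math.sqrt(sum(x * x for x in vector))
    if norm == 0.0:
        return vector  # a zero vector cannot be normalized
    return [x / norm for x in vector]

# After normalization the vector has length 1, so inner product
# equals cosine similarity.
vec = l2_normalize([3.0, 4.0])
```

In practice you would apply this to each embedding before insertion (or use an embedding model that already returns unit-length vectors).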

Parameters
  • embedding_function (Embeddings) – Function used to embed the text.

  • collection_name (str) – Which Zilliz collection to use. Defaults to “LangChainCollection”.

  • connection_args (Optional[dict[str, Any]]) – The connection args used for this class, in the form of a dict.

  • consistency_level (str) – The consistency level to use for a collection. Defaults to “Session”.

  • index_params (Optional[dict]) – Which index params to use. Defaults to HNSW/AUTOINDEX depending on service.

  • search_params (Optional[dict]) – Which search params to use. Defaults to default of index.

  • drop_old (Optional[bool]) – Whether to drop the current collection. Defaults to False.

The connection args used for this class come in the form of a dict. Here are a few of the options:

  • address (str): The actual address of the Zilliz instance. Example address: "localhost:19530".

  • uri (str): The URI of the Zilliz instance. Example URI: "https://in03-ba4234asae.api.gcp-us-west1.zillizcloud.com".

  • host (str): The host of the Zilliz instance. Defaults to "localhost"; PyMilvus fills in the default host if only the port is provided.

  • port (str/int): The port of the Zilliz instance. Defaults to 19530; PyMilvus fills in the default port if only the host is provided.

  • user (str): Which user to connect to the Zilliz instance as. If user and password are provided, the related header is added to every RPC call.

  • password (str): Required when user is provided. The password corresponding to the user.

  • token (str): API key, for serverless clusters, which can be used as a replacement for user and password.

  • secure (bool): Defaults to False. If set to True, TLS is enabled.

  • client_key_path (str): If using two-way TLS authentication, the path to the client.key file.

  • client_pem_path (str): If using two-way TLS authentication, the path to the client.pem file.

  • ca_pem_path (str): If using two-way TLS authentication, the path to the ca.pem file.

  • server_pem_path (str): If using one-way TLS authentication, the path to the server.pem file.

  • server_name (str): If using TLS, the common name of the server.
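Put together, a typical connection_args dict for a Zilliz Cloud serverless cluster might look like this (the URI and token are placeholders, not real credentials):

```python
# Connection options for a Zilliz Cloud cluster. The URI and token are
# placeholders; substitute your own cluster endpoint and API key.
connection_args = {
    "uri": "https://in03-ba4234asae.api.gcp-us-west1.zillizcloud.com",
    "token": "YOUR_API_KEY",  # serverless clusters: replaces user/password
    "secure": True,           # enable TLS
}
```

This dict is passed as the connection_args parameter when constructing the Zilliz store.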

Example


from langchain.vectorstores import Zilliz
from langchain.embeddings import OpenAIEmbeddings

embedding = OpenAIEmbeddings()
# Connect to a Zilliz instance
milvus_store = Zilliz(
    embedding_function=embedding,
    collection_name="LangChainCollection",
    connection_args={
        "uri": "https://in03-ba4234asae.api.gcp-us-west1.zillizcloud.com",
        "user": "temp",
        "password": "temp",
        "token": "temp",  # API key as replacement for user and password
        "secure": True,
    },
    drop_old=True,
)

Raises

ValueError – If the pymilvus python package is not installed.

Initialize the Milvus vector store.

Attributes

embeddings

Access the query embedding object if available.

Methods

__init__(embedding_function[, ...])

Initialize the Milvus vector store.

aadd_documents(documents, **kwargs)

Run more documents through the embeddings and add to the vectorstore.

aadd_texts(texts[, metadatas])

Run more texts through the embeddings and add to the vectorstore.

add_documents(documents, **kwargs)

Run more documents through the embeddings and add to the vectorstore.

add_texts(texts[, metadatas, timeout, ...])

Insert text data into Milvus.

adelete([ids])

Delete by vector ID or other criteria.

afrom_documents(documents, embedding, **kwargs)

Return VectorStore initialized from documents and embeddings.

afrom_texts(texts, embedding[, metadatas])

Return VectorStore initialized from texts and embeddings.

amax_marginal_relevance_search(query[, k, ...])

Return docs selected using the maximal marginal relevance.

amax_marginal_relevance_search_by_vector(...)

Return docs selected using the maximal marginal relevance.

as_retriever(**kwargs)

Return VectorStoreRetriever initialized from this VectorStore.

asearch(query, search_type, **kwargs)

Return docs most similar to query using specified search type.

asimilarity_search(query[, k])

Return docs most similar to query.

asimilarity_search_by_vector(embedding[, k])

Return docs most similar to embedding vector.

asimilarity_search_with_relevance_scores(query)

Return docs and relevance scores in the range [0, 1], asynchronously.

asimilarity_search_with_score(*args, **kwargs)

Run similarity search with distance asynchronously.

delete([ids])

Delete by vector ID or other criteria.

from_documents(documents, embedding, **kwargs)

Return VectorStore initialized from documents and embeddings.

from_texts(texts, embedding[, metadatas, ...])

Create a Zilliz collection, indexes it with HNSW, and insert data.

max_marginal_relevance_search(query[, k, ...])

Perform a search and return results that are reordered by MMR.

max_marginal_relevance_search_by_vector(...)

Perform a search and return results that are reordered by MMR.

search(query, search_type, **kwargs)

Return docs most similar to query using specified search type.

similarity_search(query[, k, param, expr, ...])

Perform a similarity search against the query string.

similarity_search_by_vector(embedding[, k, ...])

Perform a similarity search against the query string.

similarity_search_with_relevance_scores(query)

Return docs and relevance scores in the range [0, 1].

similarity_search_with_score(query[, k, ...])

Perform a search on a query string and return results with score.

similarity_search_with_score_by_vector(embedding)

Perform a search on a query string and return results with score.

__init__(embedding_function: Embeddings, collection_name: str = 'LangChainCollection', connection_args: Optional[dict[str, Any]] = None, consistency_level: str = 'Session', index_params: Optional[dict] = None, search_params: Optional[dict] = None, drop_old: Optional[bool] = False, *, primary_field: str = 'pk', text_field: str = 'text', vector_field: str = 'vector')

Initialize the Milvus vector store.

async aadd_documents(documents: List[Document], **kwargs: Any) List[str]

Run more documents through the embeddings and add to the vectorstore.

Parameters

documents (List[Document]) – Documents to add to the vectorstore.

Returns

List of IDs of the added texts.

Return type

List[str]

async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) List[str]

Run more texts through the embeddings and add to the vectorstore.

add_documents(documents: List[Document], **kwargs: Any) List[str]

Run more documents through the embeddings and add to the vectorstore.

Parameters

documents (List[Document]) – Documents to add to the vectorstore.

Returns

List of IDs of the added texts.

Return type

List[str]

add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, timeout: Optional[int] = None, batch_size: int = 1000, **kwargs: Any) List[str]

Insert text data into Milvus.

Inserting data when the collection has not been made yet will result in creating a new collection. The data of the first entity decides the schema of the new collection: the dim is extracted from the first embedding, and the columns are decided by the first metadata dict. Metadata keys will need to be present for all inserted values; at the moment there is no None equivalent in Milvus.

Parameters
  • texts (Iterable[str]) – The texts to embed, it is assumed that they all fit in memory.

  • metadatas (Optional[List[dict]]) – Metadata dicts attached to each of the texts. Defaults to None.

  • timeout (Optional[int]) – Timeout for each batch insert. Defaults to None.

  • batch_size (int, optional) – Batch size to use for insertion. Defaults to 1000.

Raises

MilvusException – Failure to add texts

Returns

The resulting keys for each inserted element.

Return type

List[str]
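Because Milvus has no None equivalent, every metadata dict must carry the same keys. A small pre-flight check (plain Python; check_uniform_metadata is a hypothetical helper, not part of the library) can catch mismatches before calling add_texts:

```python
def check_uniform_metadata(metadatas):
    """Verify all metadata dicts share the same keys, as Milvus requires."""
    if not metadatas:
        return True
    expected = set(metadatas[0])
    return all(set(m) == expected for m in metadatas)

metadatas = [
    {"source": "a.txt", "page": 1},
    {"source": "b.txt", "page": 2},
]
ok = check_uniform_metadata(metadatas)
```

If the check fails, fill in explicit placeholder values for the missing keys rather than omitting them.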

async adelete(ids: Optional[List[str]] = None, **kwargs: Any) Optional[bool]

Delete by vector ID or other criteria.

Parameters
  • ids – List of ids to delete.

  • **kwargs – Other keyword arguments that subclasses might use.

Returns

True if deletion is successful, False otherwise, None if not implemented.

Return type

Optional[bool]

async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) VST

Return VectorStore initialized from documents and embeddings.

async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) VST

Return VectorStore initialized from texts and embeddings.

async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) List[Document]

Return docs selected using the maximal marginal relevance.

async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) List[Document]

Return docs selected using the maximal marginal relevance.

as_retriever(**kwargs: Any) VectorStoreRetriever

Return VectorStoreRetriever initialized from this VectorStore.

Parameters
  • search_type (Optional[str]) – Defines the type of search that the Retriever should perform. Can be “similarity” (default), “mmr”, or “similarity_score_threshold”.

  • search_kwargs (Optional[Dict]) –

    Keyword arguments to pass to the search function. Can include things like:

    k: Amount of documents to return (Default: 4)
    score_threshold: Minimum relevance threshold for similarity_score_threshold
    fetch_k: Amount of documents to pass to MMR algorithm (Default: 20)
    lambda_mult: Diversity of results returned by MMR; 1 for minimum diversity and 0 for maximum. (Default: 0.5)
    filter: Filter by document metadata

Returns

Retriever class for VectorStore.

Return type

VectorStoreRetriever

Examples:

# Retrieve more documents with higher diversity
# Useful if your dataset has many similar documents
docsearch.as_retriever(
    search_type="mmr",
    search_kwargs={'k': 6, 'lambda_mult': 0.25}
)

# Fetch more documents for the MMR algorithm to consider
# But only return the top 5
docsearch.as_retriever(
    search_type="mmr",
    search_kwargs={'k': 5, 'fetch_k': 50}
)

# Only retrieve documents that have a relevance score
# Above a certain threshold
docsearch.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={'score_threshold': 0.8}
)

# Only get the single most similar document from the dataset
docsearch.as_retriever(search_kwargs={'k': 1})

# Use a filter to only retrieve documents from a specific paper
docsearch.as_retriever(
    search_kwargs={'filter': {'paper_title':'GPT-4 Technical Report'}}
)
async asearch(query: str, search_type: str, **kwargs: Any) List[Document]

Return docs most similar to query using specified search type.

async asimilarity_search(query: str, k: int = 4, **kwargs: Any) List[Document]

Return docs most similar to query.

async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) List[Document]

Return docs most similar to embedding vector.

async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) List[Tuple[Document, float]]

Return docs and relevance scores in the range [0, 1], asynchronously.

0 is dissimilar, 1 is most similar.

Parameters
  • query – input text

  • k – Number of Documents to return. Defaults to 4.

  • **kwargs – kwargs to be passed to similarity search. Should include score_threshold: an optional floating-point value between 0 and 1 used to filter the resulting set of retrieved docs.

Returns

List of Tuples of (doc, similarity_score)

async asimilarity_search_with_score(*args: Any, **kwargs: Any) List[Tuple[Document, float]]

Run similarity search with distance asynchronously.

delete(ids: Optional[List[str]] = None, **kwargs: Any) Optional[bool]

Delete by vector ID or other criteria.

Parameters
  • ids – List of ids to delete.

  • **kwargs – Other keyword arguments that subclasses might use.

Returns

True if deletion is successful, False otherwise, None if not implemented.

Return type

Optional[bool]

classmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) VST

Return VectorStore initialized from documents and embeddings.

classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, collection_name: str = 'LangChainCollection', connection_args: Optional[Dict[str, Any]] = None, consistency_level: str = 'Session', index_params: Optional[dict] = None, search_params: Optional[dict] = None, drop_old: bool = False, **kwargs: Any) Zilliz[source]

Create a Zilliz collection, indexes it with HNSW, and insert data.

Parameters
  • texts (List[str]) – Text data.

  • embedding (Embeddings) – Embedding function.

  • metadatas (Optional[List[dict]]) – Metadata for each text if it exists. Defaults to None.

  • collection_name (str, optional) – Collection name to use. Defaults to “LangChainCollection”.

  • connection_args (dict[str, Any], optional) – Connection args to use. Defaults to DEFAULT_MILVUS_CONNECTION.

  • consistency_level (str, optional) – Which consistency level to use. Defaults to “Session”.

  • index_params (Optional[dict], optional) – Which index_params to use. Defaults to None.

  • search_params (Optional[dict], optional) – Which search params to use. Defaults to None.

  • drop_old (Optional[bool], optional) – Whether to drop the collection with that name if it exists. Defaults to False.

Returns

Zilliz Vector Store

Return type

Zilliz

max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) List[Document]

Perform a search and return results that are reordered by MMR.

Parameters
  • query (str) – The text being searched.

  • k (int, optional) – How many results to give. Defaults to 4.

  • fetch_k (int, optional) – Total results to select k from. Defaults to 20.

  • lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.

  • param (dict, optional) – The search params for the specified index. Defaults to None.

  • expr (str, optional) – Filtering expression. Defaults to None.

  • timeout (int, optional) – How long to wait before timeout error. Defaults to None.

  • kwargs – Collection.search() keyword arguments.

Returns

Document results for search.

Return type

List[Document]
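To illustrate how lambda_mult trades off relevance against diversity, here is a self-contained sketch of the MMR reordering step (not the library's implementation; it assumes plain lists of floats and cosine similarity):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def mmr_order(query_vec, candidates, k=4, lambda_mult=0.5):
    """Return indices of `candidates` reordered by maximal marginal relevance.

    Each pick maximizes lambda_mult * relevance-to-query minus
    (1 - lambda_mult) * similarity-to-already-selected results.
    """
    selected = []
    remaining = list(range(len(candidates)))
    while remaining and len(selected) < k:
        best, best_score = None, -float("inf")
        for i in remaining:
            relevance = cosine(query_vec, candidates[i])
            redundancy = max(
                (cosine(candidates[i], candidates[j]) for j in selected),
                default=0.0,
            )
            score = lambda_mult * relevance - (1 - lambda_mult) * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        remaining.remove(best)
    return selected
```

With lambda_mult near 1 the ordering is purely by relevance; lowering it penalizes candidates similar to ones already chosen, which is why the method fetches fetch_k candidates but returns only k.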

max_marginal_relevance_search_by_vector(embedding: list[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) List[Document]

Perform a search and return results that are reordered by MMR.

Parameters
  • embedding (List[float]) – The embedding vector being searched.

  • k (int, optional) – How many results to give. Defaults to 4.

  • fetch_k (int, optional) – Total results to select k from. Defaults to 20.

  • lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.

  • param (dict, optional) – The search params for the specified index. Defaults to None.

  • expr (str, optional) – Filtering expression. Defaults to None.

  • timeout (int, optional) – How long to wait before timeout error. Defaults to None.

  • kwargs – Collection.search() keyword arguments.

Returns

Document results for search.

Return type

List[Document]

search(query: str, search_type: str, **kwargs: Any) List[Document]

Return docs most similar to query using specified search type.

similarity_search(query: str, k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) List[Document]

Perform a similarity search against the query string.

Parameters
  • query (str) – The text to search.

  • k (int, optional) – How many results to return. Defaults to 4.

  • param (dict, optional) – The search params for the index type. Defaults to None.

  • expr (str, optional) – Filtering expression. Defaults to None.

  • timeout (int, optional) – How long to wait before timeout error. Defaults to None.

  • kwargs – Collection.search() keyword arguments.

Returns

Document results for search.

Return type

List[Document]
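The expr argument takes a Milvus boolean expression over metadata fields, which can be built as a plain string. The field names below ("source", "page") are illustrative; they must match the keys used in your metadata dicts:

```python
# Build a Milvus boolean expression filtering on metadata fields.
# String values are quoted inside the expression; numeric comparisons
# use the usual operators.
source = "state_of_the_union.txt"
expr = f'source == "{source}" and page >= 2'
# would be passed as: store.similarity_search(query, expr=expr)
```

The same expr parameter applies to the other search methods on this class that accept it.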

similarity_search_by_vector(embedding: List[float], k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) List[Document]

Perform a similarity search against the embedding vector.

Parameters
  • embedding (List[float]) – The embedding vector to search.

  • k (int, optional) – How many results to return. Defaults to 4.

  • param (dict, optional) – The search params for the index type. Defaults to None.

  • expr (str, optional) – Filtering expression. Defaults to None.

  • timeout (int, optional) – How long to wait before timeout error. Defaults to None.

  • kwargs – Collection.search() keyword arguments.

Returns

Document results for search.

Return type

List[Document]

similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) List[Tuple[Document, float]]

Return docs and relevance scores in the range [0, 1].

0 is dissimilar, 1 is most similar.

Parameters
  • query – input text

  • k – Number of Documents to return. Defaults to 4.

  • **kwargs – kwargs to be passed to similarity search. Should include score_threshold: an optional floating-point value between 0 and 1 used to filter the resulting set of retrieved docs.

Returns

List of Tuples of (doc, similarity_score)

similarity_search_with_score(query: str, k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) List[Tuple[Document, float]]

Perform a search on a query string and return results with score.

For more information about the search parameters, take a look at the pymilvus documentation found here: https://milvus.io/api-reference/pymilvus/v2.2.6/Collection/search().md

Parameters
  • query (str) – The text being searched.

  • k (int, optional) – The amount of results to return. Defaults to 4.

  • param (dict) – The search params for the specified index. Defaults to None.

  • expr (str, optional) – Filtering expression. Defaults to None.

  • timeout (int, optional) – How long to wait before timeout error. Defaults to None.

  • kwargs – Collection.search() keyword arguments.

Return type

List[Tuple[Document, float]]
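The returned (Document, score) pairs can be post-filtered or sorted in plain Python. The snippet below uses string stand-ins for Document objects; note that for an L2 index the scores are raw distances, so lower means closer:

```python
# Stand-ins for (Document, score) pairs as returned by
# similarity_search_with_score; real results hold Document objects.
results = [("doc-a", 0.12), ("doc-b", 0.85), ("doc-c", 0.40)]

# For an L2 index, smaller distances mean closer matches, so sort
# ascending and keep only pairs under a distance threshold.
threshold = 0.5
closest = sorted((r for r in results if r[1] < threshold), key=lambda r: r[1])
```

For an IP index the interpretation flips (higher score means more similar), so check which metric your index uses before thresholding.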

similarity_search_with_score_by_vector(embedding: List[float], k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) List[Tuple[Document, float]]

Perform a search on an embedding vector and return results with score.

For more information about the search parameters, take a look at the pymilvus documentation found here: https://milvus.io/api-reference/pymilvus/v2.2.6/Collection/search().md

Parameters
  • embedding (List[float]) – The embedding vector being searched.

  • k (int, optional) – The amount of results to return. Defaults to 4.

  • param (dict) – The search params for the specified index. Defaults to None.

  • expr (str, optional) – Filtering expression. Defaults to None.

  • timeout (int, optional) – How long to wait before timeout error. Defaults to None.

  • kwargs – Collection.search() keyword arguments.

Returns

Result doc and score.

Return type

List[Tuple[Document, float]]