langchain_nomic.embeddings.NomicEmbeddings¶

class langchain_nomic.embeddings.NomicEmbeddings(*, model: str, nomic_api_key: Optional[str] = ..., dimensionality: Optional[int] = ..., inference_mode: Literal['remote'] = ...)[source]¶
class langchain_nomic.embeddings.NomicEmbeddings(*, model: str, nomic_api_key: Optional[str] = ..., dimensionality: Optional[int] = ..., inference_mode: Literal['local', 'dynamic'], device: Optional[str] = ...)
class langchain_nomic.embeddings.NomicEmbeddings(*, model: str, nomic_api_key: Optional[str] = ..., dimensionality: Optional[int] = ..., inference_mode: str, device: Optional[str] = ...)

NomicEmbeddings embedding model.

Example

from langchain_nomic import NomicEmbeddings

model = NomicEmbeddings(model="nomic-embed-text-v1.5")

Initialize NomicEmbeddings model.

Parameters
  • model (str) – model name

  • nomic_api_key (Optional[str]) – optionally, set the Nomic API key. Uses the NOMIC_API_KEY environment variable by default.

  • dimensionality (Optional[int]) – The embedding dimension, for use with Matryoshka-capable models. Defaults to full-size.

  • inference_mode (str) – How to generate embeddings. One of remote, local (Embed4All), or dynamic (automatic). Defaults to remote.

  • device (Optional[str]) – The device to use for local embeddings. Choices include cpu, gpu, nvidia, amd, or a specific device name. See the docstring for GPT4All.__init__ for more info. Typically defaults to CPU. Do not use on macOS.

  • vision_model (Optional[str]) – optionally, the vision model to use when embedding images (see embed_image).
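
The dimensionality option relies on Matryoshka representation learning: a full-size embedding can be truncated to its first k components and renormalized while staying useful. A minimal pure-Python sketch of that truncation, using a stand-in vector rather than a real Nomic embedding:

```python
import math

def truncate_embedding(vec, k):
    """Keep the first k Matryoshka components and renormalize to unit length."""
    head = vec[:k]
    norm = math.sqrt(sum(x * x for x in head))
    return [x / norm for x in head]

full = [3.0, 4.0, 1.0, 2.0]            # stand-in for a full-size embedding
small = truncate_embedding(full, 2)    # e.g. dimensionality=2
print(small)  # → [0.6, 0.8]
```

When dimensionality is set, the model returns vectors truncated in this spirit; the sketch only illustrates the geometry, not the API call.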

Methods

__init__()

Initialize NomicEmbeddings model.

aembed_documents(texts)

Asynchronously embed search docs.

aembed_query(text)

Asynchronously embed query text.

embed(texts, *, task_type)

Embed texts.

embed_documents(texts)

Embed search docs.

embed_image(uris)

Embed images.

embed_query(text)

Embed query text.

__init__(*, model: str, nomic_api_key: Optional[str] = None, dimensionality: Optional[int] = None, inference_mode: Literal['remote'] = 'remote')[source]¶
__init__(*, model: str, nomic_api_key: Optional[str] = None, dimensionality: Optional[int] = None, inference_mode: Literal['local', 'dynamic'], device: Optional[str] = None)
__init__(*, model: str, nomic_api_key: Optional[str] = None, dimensionality: Optional[int] = None, inference_mode: str, device: Optional[str] = None)

Initialize NomicEmbeddings model.

Parameters
  • model – model name

  • nomic_api_key – optionally, set the Nomic API key. Uses the NOMIC_API_KEY environment variable by default.

  • dimensionality – The embedding dimension, for use with Matryoshka-capable models. Defaults to full-size.

  • inference_mode – How to generate embeddings. One of remote, local (Embed4All), or dynamic (automatic). Defaults to remote.

  • device – The device to use for local embeddings. Choices include cpu, gpu, nvidia, amd, or a specific device name. See the docstring for GPT4All.__init__ for more info. Typically defaults to CPU. Do not use on macOS.

async aembed_documents(texts: List[str]) List[List[float]]¶

Asynchronously embed search docs.

Parameters

texts (List[str]) – list of texts to embed

Return type

List[List[float]]

async aembed_query(text: str) List[float]¶

Asynchronously embed query text.

Parameters

text (str) – query text

Return type

List[float]
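
Because both async methods are awaitable, several embedding calls can run concurrently with asyncio.gather. A sketch using a stub coroutine standing in for aembed_query (the real method calls the Nomic API):

```python
import asyncio

# Stub coroutine standing in for NomicEmbeddings.aembed_query; the real
# method sends the text to the Nomic API and awaits the response.
async def aembed_query(text):
    await asyncio.sleep(0)        # placeholder for network I/O
    return [float(len(text))]     # fake one-dimensional embedding

async def embed_all(queries):
    # gather() runs the embedding calls concurrently instead of one by one.
    return await asyncio.gather(*(aembed_query(q) for q in queries))

vectors = asyncio.run(embed_all(["a", "bb", "ccc"]))
print(vectors)  # → [[1.0], [2.0], [3.0]]
```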

embed(texts: List[str], *, task_type: str) List[List[float]][source]¶

Embed texts.

Parameters
  • texts (List[str]) – list of texts to embed

  • task_type (str) – the task type to use when embedding. One of search_query, search_document, classification, clustering

Return type

List[List[float]]
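
The convenience methods relate to embed through fixed task types: documents use search_document and queries use search_query. A hypothetical stub (not the real class, which calls the Nomic API) illustrating that relationship:

```python
# Hypothetical stub illustrating how the task types relate to each other;
# the real NomicEmbeddings class calls the Nomic embedding API instead.
class StubEmbeddings:
    TASK_TYPES = {"search_query", "search_document", "classification", "clustering"}

    def embed(self, texts, *, task_type):
        if task_type not in self.TASK_TYPES:
            raise ValueError(f"unknown task_type: {task_type}")
        # Fake embedding: one dimension, the text length.
        return [[float(len(t))] for t in texts]

    def embed_documents(self, texts):
        # Documents are embedded with the search_document task type.
        return self.embed(texts, task_type="search_document")

    def embed_query(self, text):
        # Queries are embedded with the search_query task type.
        return self.embed([text], task_type="search_query")[0]

stub = StubEmbeddings()
print(stub.embed_documents(["hello world"]))  # → [[11.0]]
print(stub.embed_query("hello"))              # → [5.0]
```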

embed_documents(texts: List[str]) List[List[float]][source]¶

Embed search docs.

Parameters

texts (List[str]) – list of texts to embed as documents

Return type

List[List[float]]

embed_image(uris: List[str]) List[List[float]][source]¶

Embed images.

Parameters

uris (List[str]) – list of image URIs to embed

Return type

List[List[float]]

embed_query(text: str) List[float][source]¶

Embed query text.

Parameters

text (str) – query text

Return type

List[float]
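
A typical retrieval pattern embeds documents with embed_documents, embeds the query with embed_query, and ranks by cosine similarity. A sketch with hand-written stand-in vectors (in real use these would come from the methods above):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Stand-in vectors; in practice: embed_documents(docs) and embed_query(query).
doc_vectors = {"doc_a": [1.0, 0.0], "doc_b": [0.0, 1.0]}
query_vector = [0.9, 0.1]

best = max(doc_vectors, key=lambda name: cosine(query_vector, doc_vectors[name]))
print(best)  # → doc_a
```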