langchain_community.embeddings.ollama.OllamaEmbeddings¶

class langchain_community.embeddings.ollama.OllamaEmbeddings[source]¶

Bases: BaseModel, Embeddings

Ollama runs large language models locally.

To use, follow the instructions at https://ollama.ai/.

Example

from langchain_community.embeddings import OllamaEmbeddings
ollama_emb = OllamaEmbeddings(
    model="llama:7b",
)
r1 = ollama_emb.embed_documents(
    [
        "Alpha is the first letter of Greek alphabet",
        "Beta is the second letter of Greek alphabet",
    ]
)
r2 = ollama_emb.embed_query(
    "What is the second letter of Greek alphabet"
)

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.

param base_url: str = 'http://localhost:11434'¶

Base URL the model is hosted under.

param embed_instruction: str = 'passage: '¶

Instruction used to embed documents.

param headers: Optional[dict] = None¶

Additional headers to pass to endpoint (e.g. Authorization, Referer). This is useful when Ollama is hosted on cloud services that require tokens for authentication.
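
For instance, a minimal sketch of pointing the client at a remote, authenticated Ollama deployment; the host URL and token below are illustrative placeholders, not real values:

from langchain_community.embeddings import OllamaEmbeddings

# base_url and the Authorization token are placeholders for a hosted deployment.
ollama_emb = OllamaEmbeddings(
    model="llama2",
    base_url="https://my-ollama-host.example.com",
    headers={"Authorization": "Bearer <token>"},
)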

param mirostat: Optional[int] = None¶

Enable Mirostat sampling for controlling perplexity. (Default: 0; 0 = disabled, 1 = Mirostat, 2 = Mirostat 2.0)

param mirostat_eta: Optional[float] = None¶

Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)

param mirostat_tau: Optional[float] = None¶

Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)

param model: str = 'llama2'¶

Model name to use.

param model_kwargs: Optional[dict] = None¶

Other model keyword arguments.

param num_ctx: Optional[int] = None¶

Sets the size of the context window used to generate the next token. (Default: 2048)

param num_gpu: Optional[int] = None¶

The number of GPUs to use. On macOS it defaults to 1 to enable Metal support; set it to 0 to disable.

param num_thread: Optional[int] = None¶

Sets the number of threads to use during computation. By default, Ollama will detect this for optimal performance. It is recommended to set this value to the number of physical CPU cores your system has (as opposed to the logical number of cores).

param query_instruction: str = 'query: '¶

Instruction used to embed the query.
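
Both instruction strings are ordinary constructor arguments. A short sketch that sets them explicitly (the values simply restate the defaults; whether a prefix helps depends on the embedding model being served):

from langchain_community.embeddings import OllamaEmbeddings

# "passage: " / "query: " restate the defaults; some embedding models expect
# such prefixes, others expect none.
ollama_emb = OllamaEmbeddings(
    model="llama2",
    embed_instruction="passage: ",
    query_instruction="query: ",
)
doc_vectors = ollama_emb.embed_documents(
    ["Alpha is the first letter of Greek alphabet"]
)
query_vector = ollama_emb.embed_query("What is the first letter of Greek alphabet")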

param repeat_last_n: Optional[int] = None¶

Sets how far back the model looks to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)

param repeat_penalty: Optional[float] = None¶

Sets how strongly to penalize repetitions. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. (Default: 1.1)

param show_progress: bool = False¶

Whether to show a tqdm progress bar. Must have tqdm installed.

param stop: Optional[List[str]] = None¶

Sets the stop tokens to use.

param temperature: Optional[float] = None¶

The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)

param tfs_z: Optional[float] = None¶

Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (Default: 1)

param top_k: Optional[int] = None¶

Reduces the probability of generating nonsense. A higher value (e.g., 100) will give more diverse answers, while a lower value (e.g., 10) will be more conservative. (Default: 40)

param top_p: Optional[float] = None¶

Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)
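
The generation options above are plain constructor keyword arguments. A minimal sketch passing a few of them (the specific values are illustrative, not recommendations):

from langchain_community.embeddings import OllamaEmbeddings

# Illustrative values only; each field maps onto the Ollama option of the
# same name described above.
ollama_emb = OllamaEmbeddings(
    model="llama2",
    num_ctx=4096,       # larger context window
    num_thread=8,       # roughly the number of physical CPU cores
    temperature=0.0,
    top_k=10,
    top_p=0.5,
    repeat_penalty=1.1,
)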

async aembed_documents(texts: List[str]) List[List[float]]¶

Asynchronously embed search documents.

Parameters

texts (List[str]) –

Return type

List[List[float]]

async aembed_query(text: str) List[float]¶

Asynchronously embed query text.

Parameters

text (str) –

Return type

List[float]
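
A minimal async sketch that uses both coroutines; it assumes a local Ollama server is already serving the llama2 model:

import asyncio

from langchain_community.embeddings import OllamaEmbeddings

async def main() -> None:
    ollama_emb = OllamaEmbeddings(model="llama2")
    doc_vectors = await ollama_emb.aembed_documents(
        ["Alpha is the first letter of Greek alphabet"]
    )
    query_vector = await ollama_emb.aembed_query(
        "What is the first letter of Greek alphabet"
    )
    print(len(doc_vectors), len(query_vector))

asyncio.run(main())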

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model¶

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values.

Parameters
  • _fields_set (Optional[SetStr]) –

  • values (Any) –

Return type

Model

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model¶

Duplicate a model, optionally choose which fields to include, exclude and change.

Parameters
  • include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model

  • exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include

  • update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data

  • deep (bool) – set to True to make a deep copy of the model

  • self (Model) –

Returns

new model instance

Return type

Model

dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) DictStrAny¶

Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.

Parameters
  • include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) –

  • exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) –

  • by_alias (bool) –

  • skip_defaults (Optional[bool]) –

  • exclude_unset (bool) –

  • exclude_defaults (bool) –

  • exclude_none (bool) –

Return type

DictStrAny

embed_documents(texts: List[str]) List[List[float]][source]¶

Embed documents using an Ollama-deployed embedding model.

Parameters

texts (List[str]) – The list of texts to embed.

Returns

List of embeddings, one for each text.

Return type

List[List[float]]

embed_query(text: str) List[float][source]¶

Embed a query using an Ollama-deployed embedding model.

Parameters

text (str) – The text to embed.

Returns

Embeddings for the text.

Return type

List[float]
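
A short sketch combining embed_documents() and embed_query() to rank documents by cosine similarity; numpy is used only for the arithmetic, and a running local Ollama server is assumed:

import numpy as np

from langchain_community.embeddings import OllamaEmbeddings

ollama_emb = OllamaEmbeddings(model="llama2")

docs = [
    "Alpha is the first letter of Greek alphabet",
    "Beta is the second letter of Greek alphabet",
]
doc_vectors = np.array(ollama_emb.embed_documents(docs))
query_vector = np.array(
    ollama_emb.embed_query("What is the second letter of Greek alphabet")
)

# Cosine similarity between the query and each document.
scores = doc_vectors @ query_vector / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(query_vector)
)
print(docs[int(np.argmax(scores))])  # expected: the "Beta ..." sentence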

classmethod from_orm(obj: Any) Model¶
Parameters

obj (Any) –

Return type

Model

json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode¶

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

Parameters
  • include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) –

  • exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) –

  • by_alias (bool) –

  • skip_defaults (Optional[bool]) –

  • exclude_unset (bool) –

  • exclude_defaults (bool) –

  • exclude_none (bool) –

  • encoder (Optional[Callable[[Any], Any]]) –

  • models_as_dict (bool) –

  • dumps_kwargs (Any) –

Return type

unicode

classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model¶
Parameters
  • path (Union[str, Path]) –

  • content_type (unicode) –

  • encoding (unicode) –

  • proto (Protocol) –

  • allow_pickle (bool) –

Return type

Model

classmethod parse_obj(obj: Any) Model¶
Parameters

obj (Any) –

Return type

Model

classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model¶
Parameters
  • b (Union[str, bytes]) –

  • content_type (unicode) –

  • encoding (unicode) –

  • proto (Protocol) –

  • allow_pickle (bool) –

Return type

Model

classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') DictStrAny¶
Parameters
  • by_alias (bool) –

  • ref_template (unicode) –

Return type

DictStrAny

classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) unicode¶
Parameters
  • by_alias (bool) –

  • ref_template (unicode) –

  • dumps_kwargs (Any) –

Return type

unicode

classmethod update_forward_refs(**localns: Any) None¶

Try to update ForwardRefs on fields based on this Model, globalns and localns.

Parameters

localns (Any) –

Return type

None

classmethod validate(value: Any) Model¶
Parameters

value (Any) –

Return type

Model

Examples using OllamaEmbeddings¶