langchain_community.embeddings.oci_generative_ai.OCIGenAIEmbeddings¶

class langchain_community.embeddings.oci_generative_ai.OCIGenAIEmbeddings[source]¶

Bases: BaseModel, Embeddings

OCI embedding models.

To authenticate, the OCI client uses the methods described in https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdk_authentication_methods.htm

The authentication method is passed through auth_type and should be one of: API_KEY (default), SECURITY_TOKEN, INSTANCE_PRINCIPLE, RESOURCE_PRINCIPLE

Make sure you have the required policies (profile/roles) to access the OCI Generative AI service. If a specific config profile is used, you must pass the name of the profile (~/.oci/config) through auth_profile.

To use, you must provide the compartment OCID, the service endpoint URL, and the model ID as named parameters to the constructor.

Example

from langchain_community.embeddings import OCIGenAIEmbeddings

embeddings = OCIGenAIEmbeddings(
    model_id="MY_EMBEDDING_MODEL",
    service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
    compartment_id="MY_OCID"
)

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.

param auth_profile: Optional[str] = 'DEFAULT'¶

The name of the profile in ~/.oci/config. If not specified, DEFAULT will be used.

param auth_type: Optional[str] = 'API_KEY'¶

Authentication type; one of:

API_KEY, SECURITY_TOKEN, INSTANCE_PRINCIPLE, RESOURCE_PRINCIPLE

If not specified, API_KEY will be used.

param batch_size: int = 96¶

Batch size of OCI GenAI embedding requests. OCI GenAI may handle up to 96 texts per request.
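The batching behavior described above can be illustrated with a short sketch. This is a conceptual illustration, not the library's internal code: it shows how a list of texts could be split into request-sized chunks of at most batch_size (96, the documented per-request limit) before each chunk is sent to the service.

```python
# Conceptual sketch (assumption, not OCIGenAIEmbeddings' actual internals):
# split a document list into per-request batches of at most `batch_size`.
def make_batches(texts, batch_size=96):
    return [texts[i : i + batch_size] for i in range(0, len(texts), batch_size)]

# 200 texts are sent as three requests of 96, 96, and 8 texts.
batches = make_batches([f"doc {i}" for i in range(200)])
```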

param compartment_id: str = None¶

OCID of the compartment.

param model_id: str = None¶

ID of the model to call, e.g., cohere.embed-english-light-v2.0.

param model_kwargs: Optional[Dict] = None¶

Keyword arguments to pass to the model

param service_endpoint: str = None¶

Service endpoint URL.

param truncate: Optional[str] = 'END'¶

Truncate inputs that are too long from the start or end ("NONE" | "START" | "END").
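The three truncation modes can be sketched as follows. This is a conceptual illustration of what "NONE", "START", and "END" mean for an over-long token sequence; it is an assumption for clarity, not the service's actual tokenization code.

```python
# Conceptual illustration (assumption, not the OCI service's code):
# behavior of the three documented truncate modes on a token sequence
# that exceeds the model's input limit.
def truncate_tokens(tokens, limit, mode="END"):
    if mode == "NONE" or len(tokens) <= limit:
        return tokens           # left as-is (an over-long input may be rejected)
    if mode == "START":
        return tokens[-limit:]  # drop tokens from the start, keep the end
    if mode == "END":
        return tokens[:limit]   # keep the start, drop tokens from the end
    raise ValueError(f"unknown truncate mode: {mode}")

truncate_tokens(["a", "b", "c", "d"], limit=2, mode="START")  # ['c', 'd']
```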

async aembed_documents(texts: List[str]) List[List[float]]¶

Asynchronously embed search docs.

Parameters

texts (List[str]) – List of texts to embed.

Returns

List of embeddings.

Return type

List[List[float]]

async aembed_query(text: str) List[float]¶

Asynchronously embed query text.

Parameters

text (str) – Text to embed.

Returns

Embedding.

Return type

List[float]

embed_documents(texts: List[str]) List[List[float]][source]¶

Call out to OCIGenAI’s embedding endpoint.

Parameters

texts (List[str]) – The list of texts to embed.

Returns

List of embeddings, one for each text.

Return type

List[List[float]]

embed_query(text: str) List[float][source]¶

Call out to OCIGenAI’s embedding endpoint.

Parameters

text (str) – The text to embed.

Returns

Embeddings for the text.

Return type

List[float]

Examples using OCIGenAIEmbeddings¶