autogen_ext.memory.chromadb
- class ChromaDBVectorMemory(config: ChromaDBVectorMemoryConfig | None = None)
  Bases: Memory, Component[ChromaDBVectorMemoryConfig]

  Store and retrieve memory using vector similarity search powered by ChromaDB.
ChromaDBVectorMemory provides a vector-based memory implementation that uses ChromaDB for storing and retrieving content based on semantic similarity. It enhances agents with the ability to recall contextually relevant information during conversations by leveraging vector embeddings to find similar content.
This implementation serves as a reference for more complex memory systems using vector embeddings. For advanced use cases requiring specialized formatting of retrieved content, users should extend this class and override the update_context() method.
This implementation requires the ChromaDB extra to be installed. Install with:

```shell
pip install "autogen-ext[chromadb]"
```
- Parameters:
config (ChromaDBVectorMemoryConfig | None) – Configuration for the ChromaDB memory. If None, defaults to a PersistentChromaDBVectorMemoryConfig with default values. Two config types are supported:
  - PersistentChromaDBVectorMemoryConfig: for local storage
  - HttpChromaDBVectorMemoryConfig: for connecting to a remote ChromaDB server
Example
```python
import os
import asyncio
from pathlib import Path

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
from autogen_core.memory import MemoryContent, MemoryMimeType
from autogen_ext.memory.chromadb import (
    ChromaDBVectorMemory,
    PersistentChromaDBVectorMemoryConfig,
    SentenceTransformerEmbeddingFunctionConfig,
    OpenAIEmbeddingFunctionConfig,
)
from autogen_ext.models.openai import OpenAIChatCompletionClient


def get_weather(city: str) -> str:
    return f"The weather in {city} is sunny with a high of 90°F and a low of 70°F."


def fahrenheit_to_celsius(fahrenheit: float) -> float:
    return (fahrenheit - 32) * 5.0 / 9.0


async def main() -> None:
    # Use default embedding function
    default_memory = ChromaDBVectorMemory(
        config=PersistentChromaDBVectorMemoryConfig(
            collection_name="user_preferences",
            persistence_path=os.path.join(str(Path.home()), ".chromadb_autogen"),
            k=3,  # Return top 3 results
            score_threshold=0.5,  # Minimum similarity score
        )
    )

    # Using a custom SentenceTransformer model
    custom_memory = ChromaDBVectorMemory(
        config=PersistentChromaDBVectorMemoryConfig(
            collection_name="multilingual_memory",
            persistence_path=os.path.join(str(Path.home()), ".chromadb_autogen"),
            embedding_function_config=SentenceTransformerEmbeddingFunctionConfig(
                model_name="paraphrase-multilingual-mpnet-base-v2"
            ),
        )
    )

    # Using OpenAI embeddings
    openai_memory = ChromaDBVectorMemory(
        config=PersistentChromaDBVectorMemoryConfig(
            collection_name="openai_memory",
            persistence_path=os.path.join(str(Path.home()), ".chromadb_autogen"),
            embedding_function_config=OpenAIEmbeddingFunctionConfig(
                api_key=os.environ["OPENAI_API_KEY"],
                model_name="text-embedding-3-small",
            ),
        )
    )

    # Add user preferences to memory
    await openai_memory.add(
        MemoryContent(
            content="The user prefers weather temperatures in Celsius",
            mime_type=MemoryMimeType.TEXT,
            metadata={"category": "preferences", "type": "units"},
        )
    )

    # Create assistant agent with ChromaDB memory
    assistant = AssistantAgent(
        name="assistant",
        model_client=OpenAIChatCompletionClient(
            model="gpt-4.1",
        ),
        tools=[
            get_weather,
            fahrenheit_to_celsius,
        ],
        max_tool_iterations=10,
        memory=[openai_memory],
    )

    # The memory will automatically retrieve relevant content during conversations
    await Console(assistant.run_stream(task="What's the temperature in New York?"))

    # Remember to close the memory when finished
    await default_memory.close()
    await custom_memory.close()
    await openai_memory.close()


asyncio.run(main())
```
Output:
```text
---------- TextMessage (user) ----------
What's the temperature in New York?
---------- MemoryQueryEvent (assistant) ----------
[MemoryContent(content='The user prefers weather temperatures in Celsius', mime_type='MemoryMimeType.TEXT', metadata={'type': 'units', 'category': 'preferences', 'mime_type': 'MemoryMimeType.TEXT', 'score': 0.3133561611175537, 'id': 'fb00506c-acf4-4174-93d7-2a942593f3f7'}), MemoryContent(content='The user prefers weather temperatures in Celsius', mime_type='MemoryMimeType.TEXT', metadata={'mime_type': 'MemoryMimeType.TEXT', 'category': 'preferences', 'type': 'units', 'score': 0.3133561611175537, 'id': '34311689-b419-4e1a-8bc4-09143f356c66'})]
---------- ToolCallRequestEvent (assistant) ----------
[FunctionCall(id='call_7TjsFd430J1aKwU5T2w8bvdh', arguments='{"city":"New York"}', name='get_weather')]
---------- ToolCallExecutionEvent (assistant) ----------
[FunctionExecutionResult(content='The weather in New York is sunny with a high of 90°F and a low of 70°F.', name='get_weather', call_id='call_7TjsFd430J1aKwU5T2w8bvdh', is_error=False)]
---------- ToolCallRequestEvent (assistant) ----------
[FunctionCall(id='call_RTjMHEZwDXtjurEYTjDlvq9c', arguments='{"fahrenheit": 90}', name='fahrenheit_to_celsius'), FunctionCall(id='call_3mMuCK1aqtzZPTqIHPoHKxtP', arguments='{"fahrenheit": 70}', name='fahrenheit_to_celsius')]
---------- ToolCallExecutionEvent (assistant) ----------
[FunctionExecutionResult(content='32.22222222222222', name='fahrenheit_to_celsius', call_id='call_RTjMHEZwDXtjurEYTjDlvq9c', is_error=False), FunctionExecutionResult(content='21.11111111111111', name='fahrenheit_to_celsius', call_id='call_3mMuCK1aqtzZPTqIHPoHKxtP', is_error=False)]
---------- TextMessage (assistant) ----------
The temperature in New York today is sunny with a high of about 32°C and a low of about 21°C.
```
- component_config_schema
  alias of ChromaDBVectorMemoryConfig
- component_provider_override: ClassVar[str | None] = 'autogen_ext.memory.chromadb.ChromaDBVectorMemory'
  Override the provider string for the component. This should be used to prevent internal module names from becoming part of the provider string.
- async update_context(model_context: ChatCompletionContext) → UpdateContextResult
  Update the provided model context using relevant memory content.
  - Parameters:
    model_context – The context to update.
  - Returns:
    UpdateContextResult containing the relevant memories.
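Since the class docstring recommends overriding update_context() for specialized formatting of retrieved content, the retrieve-and-format logic such an override would implement can be sketched in plain Python. This is an illustrative sketch only: FakeMemoryContent is a stand-in for the real MemoryContent type, and the numbered-list rendering is one plausible formatting scheme, not the library's actual one.

```python
# Illustrative stand-ins, NOT the autogen types: a custom update_context()
# override would query memory, render the hits as text, and append that text
# to the ChatCompletionContext (e.g. as a SystemMessage).
from dataclasses import dataclass, field
from typing import Any, Dict, List


@dataclass
class FakeMemoryContent:
    """Stand-in for MemoryContent (assumption, for illustration only)."""

    content: str
    metadata: Dict[str, Any] = field(default_factory=dict)


def format_memories(memories: List[FakeMemoryContent]) -> str:
    """Render retrieved memories as a numbered block to inject into the context."""
    if not memories:
        return ""
    lines = ["Relevant memories:"]
    for i, m in enumerate(memories, start=1):
        lines.append(f"{i}. {m.content}")
    return "\n".join(lines)


result = format_memories(
    [FakeMemoryContent("The user prefers weather temperatures in Celsius")]
)
print(result)
```

A real override would build this string from the MemoryQueryResult returned by query() and wrap it in a message type accepted by the model context.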
- async add(content: MemoryContent, cancellation_token: CancellationToken | None = None) → None
  Add new content to memory.
  - Parameters:
    content – The memory content to add.
    cancellation_token – Optional token to cancel the operation.
- async query(query: str | MemoryContent, cancellation_token: CancellationToken | None = None, **kwargs: Any) → MemoryQueryResult
  Query the memory store and return relevant entries.
  - Parameters:
    query – Query content item.
    cancellation_token – Optional token to cancel the operation.
    **kwargs – Additional implementation-specific parameters.
  - Returns:
    MemoryQueryResult containing memory entries with relevance scores.
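How the config's k and score_threshold fields shape the scored entries a query returns can be sketched in plain Python. The (text, score) tuples below stand in for MemoryContent entries (an assumption for illustration); the real filtering happens inside the library.

```python
# Sketch: keep at most k entries, dropping those below score_threshold,
# highest-scoring first. Mirrors the documented meaning of the two config
# fields ("Number of results to return" and "Minimum similarity score").
from typing import List, Optional, Tuple


def select_results(
    scored: List[Tuple[str, float]], k: int, score_threshold: Optional[float]
) -> List[Tuple[str, float]]:
    kept = [
        (text, score)
        for text, score in scored
        if score_threshold is None or score >= score_threshold
    ]
    kept.sort(key=lambda pair: pair[1], reverse=True)  # best matches first
    return kept[:k]


hits = [("prefers Celsius", 0.31), ("likes hiking", 0.72), ("vegetarian", 0.12)]
print(select_results(hits, k=2, score_threshold=0.2))
# → [('likes hiking', 0.72), ('prefers Celsius', 0.31)]
```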
- pydantic model ChromaDBVectorMemoryConfig
  Bases: BaseModel
Base configuration for ChromaDB-based memory implementation.
Changed in version v0.4.1: Added support for custom embedding functions via embedding_function_config.
- Fields:
allow_reset (bool)
client_type (Literal['persistent', 'http'])
collection_name (str)
database (str)
distance_metric (str)
embedding_function_config (DefaultEmbeddingFunctionConfig | SentenceTransformerEmbeddingFunctionConfig | OpenAIEmbeddingFunctionConfig | CustomEmbeddingFunctionConfig)
k (int)
score_threshold (float | None)
tenant (str)
- field embedding_function_config: DefaultEmbeddingFunctionConfig | SentenceTransformerEmbeddingFunctionConfig | OpenAIEmbeddingFunctionConfig | CustomEmbeddingFunctionConfig [Optional]
  Configuration for the embedding function. The variant is selected by the function_type discriminator.
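The function_type discriminator works like a tagged-union dispatch: the value selects which config model applies and, in turn, which embedding function gets built. A plain-Python sketch of that pattern (the factory names and return values here are illustrative stand-ins, not the library's internals):

```python
# Sketch of discriminated-union dispatch on function_type. Each factory
# stands in for "build the embedding function described by this config".
from typing import Any, Callable, Dict


def _default_factory(cfg: Dict[str, Any]) -> str:
    return "default:all-MiniLM-L6-v2"


def _sentence_transformer_factory(cfg: Dict[str, Any]) -> str:
    return f"SentenceTransformer:{cfg['model_name']}"


def _openai_factory(cfg: Dict[str, Any]) -> str:
    return f"OpenAI:{cfg['model_name']}"


FACTORIES: Dict[str, Callable[[Dict[str, Any]], str]] = {
    "default": _default_factory,
    "sentence_transformer": _sentence_transformer_factory,
    "openai": _openai_factory,
}


def build_embedding_function(cfg: Dict[str, Any]) -> str:
    # The discriminator value picks the variant, exactly as pydantic does
    # when validating embedding_function_config.
    return FACTORIES[cfg["function_type"]](cfg)


print(build_embedding_function({"function_type": "openai", "model_name": "text-embedding-3-small"}))
# → OpenAI:text-embedding-3-small
```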
- pydantic model PersistentChromaDBVectorMemoryConfig
  Bases: ChromaDBVectorMemoryConfig
Configuration for persistent ChromaDB memory.
- Fields:
client_type (Literal['persistent', 'http'])
persistence_path (str)
- pydantic model HttpChromaDBVectorMemoryConfig
  Bases: ChromaDBVectorMemoryConfig
Configuration for HTTP ChromaDB memory.
- Fields:
client_type (Literal['persistent', 'http'])
headers (Dict[str, str] | None)
host (str)
port (int)
ssl (bool)
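How host, port, and ssl combine into the server address can be sketched as follows. Note the URL form is an illustrative assumption: the actual ChromaDB HTTP client takes host, port, ssl, and headers as separate arguments rather than a single URL.

```python
# Sketch: defaults mirror the documented field defaults
# (host="localhost", port=8000, ssl=False).
def server_url(host: str = "localhost", port: int = 8000, ssl: bool = False) -> str:
    scheme = "https" if ssl else "http"  # ssl=True means "use HTTPS"
    return f"{scheme}://{host}:{port}"


print(server_url())
print(server_url("chroma.example.com", 443, ssl=True))
```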
- pydantic model DefaultEmbeddingFunctionConfig
  Bases: BaseModel
Configuration for the default ChromaDB embedding function.
Uses ChromaDB’s default embedding function (Sentence Transformers all-MiniLM-L6-v2).
Added in version v0.4.1: Support for custom embedding functions in ChromaDB memory.
- Fields:
function_type (Literal['default'])
- pydantic model SentenceTransformerEmbeddingFunctionConfig
  Bases: BaseModel
Configuration for SentenceTransformer embedding functions.
Allows specifying a custom SentenceTransformer model for embeddings.
Added in version v0.4.1: Support for custom embedding functions in ChromaDB memory.
- Parameters:
model_name (str) – Name of the SentenceTransformer model to use. Defaults to “all-MiniLM-L6-v2”.
Example
```python
from autogen_ext.memory.chromadb import SentenceTransformerEmbeddingFunctionConfig

_ = SentenceTransformerEmbeddingFunctionConfig(model_name="paraphrase-multilingual-mpnet-base-v2")
```
- Fields:
function_type (Literal['sentence_transformer'])
model_name (str)
- pydantic model OpenAIEmbeddingFunctionConfig
  Bases: BaseModel
Configuration for OpenAI embedding functions.
Uses OpenAI’s embedding API for generating embeddings.
Added in version v0.4.1: Support for custom embedding functions in ChromaDB memory.
- Parameters:
  api_key (str) – OpenAI API key. If empty, will attempt to use the environment variable.
  model_name (str) – OpenAI embedding model name. Defaults to "text-embedding-ada-002".
Example
```python
from autogen_ext.memory.chromadb import OpenAIEmbeddingFunctionConfig

_ = OpenAIEmbeddingFunctionConfig(api_key="sk-...", model_name="text-embedding-3-small")
```
- Fields:
api_key (str)
function_type (Literal['openai'])
model_name (str)
- pydantic model CustomEmbeddingFunctionConfig
  Bases: BaseModel
Configuration for custom embedding functions.
Allows using a custom function that returns a ChromaDB-compatible embedding function.
Added in version v0.4.1: Support for custom embedding functions in ChromaDB memory.
Warning
Configurations containing custom functions are not serializable.
- Parameters:
function (Callable) – Function that returns a ChromaDB-compatible embedding function.
params (Dict[str, Any]) – Parameters to pass to the function.
- Fields:
function (Callable[[...], Any])
function_type (Literal['custom'])
params (Dict[str, Any])
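The pattern this config expects can be sketched in plain Python: function is a factory that, given params, returns a ChromaDB-compatible embedding function, i.e. a callable mapping a list of documents to a list of float vectors. The character-sum "embedding" below is a deliberately toy stand-in, not a real model, and the factory name is illustrative.

```python
# Toy factory matching the documented shape: called with `params`, returns a
# callable that maps list[str] -> list[list[float]].
from typing import Callable, List


def make_toy_embedder(dim: int) -> Callable[[List[str]], List[List[float]]]:
    def embed(documents: List[str]) -> List[List[float]]:
        vectors = []
        for doc in documents:
            # Deterministic toy features: character codes bucketed into `dim` slots.
            vec = [0.0] * dim
            for i, ch in enumerate(doc):
                vec[i % dim] += ord(ch) / 1000.0
            vectors.append(vec)
        return vectors

    return embed


# A config would then be roughly:
#   CustomEmbeddingFunctionConfig(function=make_toy_embedder, params={"dim": 4})
embedder = make_toy_embedder(dim=4)
print(embedder(["hello"]))
```

As the warning above notes, such a config cannot be serialized, since the function itself is part of it.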