From 953a5a4a2d8eb0e8350dfd3e3f903c5608eed4fe Mon Sep 17 00:00:00 2001
From: Parshva Daftari <89991302+parshvadaftari@users.noreply.github.com>
Date: Tue, 25 Mar 2025 00:34:21 +0530
Subject: [PATCH] Azure openai fixes (#2428)

---
 docs/components/llms/models/azure_openai.mdx |  3 +++
 docs/components/vectordbs/dbs/pinecone.mdx   |  2 +-
 mem0/llms/azure_openai.py                    | 16 ++++++++++++----
 tests/vector_stores/test_pinecone.py         |  2 +-
 4 files changed, 17 insertions(+), 6 deletions(-)

diff --git a/docs/components/llms/models/azure_openai.mdx b/docs/components/llms/models/azure_openai.mdx
index bacb8298..ede726b6 100644
--- a/docs/components/llms/models/azure_openai.mdx
+++ b/docs/components/llms/models/azure_openai.mdx
@@ -4,6 +4,9 @@ title: Azure OpenAI
 
 To use Azure OpenAI models, you have to set the `LLM_AZURE_OPENAI_API_KEY`, `LLM_AZURE_ENDPOINT`, `LLM_AZURE_DEPLOYMENT` and `LLM_AZURE_API_VERSION` environment variables. You can obtain the Azure API key from the [Azure](https://azure.microsoft.com/).
 
+> **Note**: The following are currently unsupported with reasoning models: `Parallel tool calling`, `temperature`, `top_p`, `presence_penalty`, `frequency_penalty`, `logprobs`, `top_logprobs`, `logit_bias`, `max_tokens`.
+
+
 ## Usage
 
 ```python
diff --git a/docs/components/vectordbs/dbs/pinecone.mdx b/docs/components/vectordbs/dbs/pinecone.mdx
index 8eeb625f..84673f18 100644
--- a/docs/components/vectordbs/dbs/pinecone.mdx
+++ b/docs/components/vectordbs/dbs/pinecone.mdx
@@ -1,6 +1,6 @@
 [Pinecone](https://www.pinecone.io/) is a fully managed vector database designed for machine learning applications, offering high performance vector search with low latency at scale. It's particularly well-suited for semantic search, recommendation systems, and other AI-powered applications.
 
-> **Note**: Before configuring Pinecone, you need to select an embedding model (e.g., OpenAI, Cohere, or custom models) and ensure the `embedding_model_dims` in your config matches your chosen model's dimensions. For example, OpenAI's text-embedding-ada-002 uses 1536 dimensions.
+> **Note**: Before configuring Pinecone, you need to select an embedding model (e.g., OpenAI, Cohere, or custom models) and ensure the `embedding_model_dims` in your config matches your chosen model's dimensions. For example, OpenAI's text-embedding-3-small uses 1536 dimensions.
 
 ### Usage
 
diff --git a/mem0/llms/azure_openai.py b/mem0/llms/azure_openai.py
index 3400b382..a7f1fdaf 100644
--- a/mem0/llms/azure_openai.py
+++ b/mem0/llms/azure_openai.py
@@ -80,13 +80,21 @@ class AzureOpenAILLM(LLMBase):
         Returns:
             str: The generated response.
""" - params = { + + common_params = { "model": self.config.model, "messages": messages, - "temperature": self.config.temperature, - "max_tokens": self.config.max_tokens, - "top_p": self.config.top_p, } + + if self.config.model in {"o3-mini", "o1-preview", "o1"}: + params = common_params + else: + params = { + **common_params, + "temperature": self.config.temperature, + "max_tokens": self.config.max_tokens, + "top_p": self.config.top_p, + } if response_format: params["response_format"] = response_format if tools: # TODO: Remove tools if no issues found with new memory addition logic diff --git a/tests/vector_stores/test_pinecone.py b/tests/vector_stores/test_pinecone.py index c7e796d5..2ff5410d 100644 --- a/tests/vector_stores/test_pinecone.py +++ b/tests/vector_stores/test_pinecone.py @@ -67,7 +67,7 @@ def test_insert_vectors(pinecone_db): def test_search_vectors(pinecone_db): pinecone_db.index.query.return_value.matches = [{"id": "id1", "score": 0.9, "metadata": {"name": "vector1"}}] - results = pinecone_db.search([0.1] * 128, limit=1) + results = pinecone_db.search("test query",[0.1] * 128, limit=1) assert len(results) == 1 assert results[0].id == "id1" assert results[0].score == 0.9