# Pinecone

[Pinecone](https://www.pinecone.io/) is a fully managed vector database designed for machine learning applications, offering high-performance vector search with low latency at scale. It's particularly well suited for semantic search, recommendation systems, and other AI-powered applications.

### Usage

```python
import os
from mem0 import Memory

os.environ["OPENAI_API_KEY"] = "sk-xx"
os.environ["PINECONE_API_KEY"] = "your-api-key"

config = {
    "vector_store": {
        "provider": "pinecone",
        "config": {
            "collection_name": "memory_index",
            "embedding_model_dims": 1536,
            "environment": "us-west1-gcp",
            "metric": "cosine"
        }
    }
}

m = Memory.from_config(config)
messages = [
    {"role": "user", "content": "I'm planning to watch a movie tonight. Any recommendations?"},
    {"role": "assistant", "content": "How about a thriller movie? They can be quite engaging."},
    {"role": "user", "content": "I'm not a big fan of thriller movies but I love sci-fi movies."},
    {"role": "assistant", "content": "Got it! I'll avoid thriller recommendations and suggest sci-fi movies in the future."}
]
m.add(messages, user_id="alice", metadata={"category": "movies"})
```

### Config

Here are the parameters available for configuring Pinecone:

| Parameter | Description | Default Value |
| --- | --- | --- |
| `collection_name` | Name of the index/collection | Required |
| `embedding_model_dims` | Dimensions of the embedding model | Required |
| `client` | Existing Pinecone client instance | `None` |
| `api_key` | API key for Pinecone | Environment variable: `PINECONE_API_KEY` |
| `environment` | Pinecone environment | `None` |
| `serverless_config` | Configuration for serverless deployment | `None` |
| `pod_config` | Configuration for pod-based deployment | `None` |
| `hybrid_search` | Whether to enable hybrid search | `False` |
| `metric` | Distance metric for vector similarity | `"cosine"` |
| `batch_size` | Batch size for operations | `100` |

#### Serverless Config Example

```python
config = {
    "vector_store": {
        "provider": "pinecone",
        "config": {
            "collection_name": "memory_index",
            "embedding_model_dims": 1536,
            "serverless_config": {
                "cloud": "aws",
                "region": "us-west-2"
            }
        }
    }
}
```

#### Pod Config Example

```python
config = {
    "vector_store": {
        "provider": "pinecone",
        "config": {
            "collection_name": "memory_index",
            "embedding_model_dims": 1536,
            "pod_config": {
                "environment": "gcp-starter",
                "replicas": 1,
                "pod_type": "starter"
            }
        }
    }
}
```
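#### Using an Existing Client

The `client` parameter in the table above lets you hand mem0 a Pinecone client you already manage elsewhere, instead of supplying an API key or relying on the environment variable. Below is a minimal sketch assuming the official `pinecone` Python SDK; whether your mem0 version accepts a client instance directly in the config dict should be verified against its documentation.

```python
import os

from pinecone import Pinecone
from mem0 import Memory

# Reuse a client that the rest of your application already constructs.
pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])

config = {
    "vector_store": {
        "provider": "pinecone",
        "config": {
            "collection_name": "memory_index",
            "embedding_model_dims": 1536,
            "client": pc,  # assumption: a client instance takes precedence over api_key
            "metric": "cosine"
        }
    }
}

m = Memory.from_config(config)
```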
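#### Searching Stored Memories

Once memories are added, the same `Memory` instance can query the Pinecone index. The snippet below is a small sketch reusing `m` and the `alice` user from the usage example; the query string is illustrative, and the exact shape of the returned payload depends on your mem0 version, so it is printed as-is.

```python
# Rank stored memories against a new query using the configured distance metric.
related = m.search("What kind of movies should I recommend?", user_id="alice")
print(related)

# Fetch everything stored for this user, independent of any query.
print(m.get_all(user_id="alice"))
```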