# LangMem - Long-term Memory System for LLM Projects

A comprehensive memory system that integrates with your existing Ollama and Supabase infrastructure to provide long-term memory capabilities for LLM applications.

## Architecture

LangMem uses a hybrid approach combining:

- **Vector Search**: Supabase with pgvector for semantic similarity
- **Graph Relationships**: Neo4j for contextual connections
- **Embeddings**: Ollama with the nomic-embed-text model
- **API Layer**: FastAPI with async support

## Features

- 🧠 **Hybrid Memory Retrieval**: Vector + graph search
- 🔍 **Semantic Search**: Advanced similarity matching
- 👥 **Multi-user Support**: Isolated per-user memories
- 📊 **Rich Metadata**: Flexible memory attributes
- 🔒 **Secure API**: Bearer token authentication
- 🐳 **Docker Ready**: Containerized deployment
- 📚 **Protected Documentation**: Basic-auth-protected docs
- 🧪 **Comprehensive Tests**: Unit and integration tests

## Quick Start

### Prerequisites

- Docker and Docker Compose
- Ollama running on localhost:11434
- Supabase running on the `localai` network
- Python 3.11+ (for development)

### 1. Clone and Setup

```bash
git clone <repository-url>
cd langmem-project
```

### 2. Start the Development Environment

```bash
./start-dev.sh
```

This will:

- Create the required Docker network
- Start the Neo4j database
- Build and start the API
- Run health checks
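The script's final health check can also be reproduced by hand once the stack is up. Below is a minimal Python sketch (standard library only) that inspects a `/health` response like the one described under Monitoring later in this README; the `components` field and status values are illustrative assumptions, since the exact response schema depends on the running API.

```python
import json
import urllib.request


def unhealthy_components(health: dict) -> list:
    """Return names of components not reporting 'healthy'.

    The 'components' key and status strings are illustrative; adjust
    them to match the actual /health response of your deployment.
    """
    return [
        name
        for name, status in health.get("components", {}).items()
        if status != "healthy"
    ]


def check(url: str = "http://localhost:8765/health") -> None:
    """Fetch /health from a running stack and summarize it."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        health = json.load(resp)
    bad = unhealthy_components(health)
    print("all healthy" if not bad else "unhealthy: " + ", ".join(bad))


if __name__ == "__main__":
    # Demo of the pure helper on a sample payload (no network needed):
    sample = {"components": {"ollama": "healthy", "neo4j": "unreachable"}}
    print(unhealthy_components(sample))
```

With the development stack running, calling `check()` performs the same probe as the startup script's health check.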
### 3. Test the API

```bash
./test.sh
```

## API Endpoints

### Authentication

All endpoints require Bearer token authentication:

```
Authorization: Bearer langmem_api_key_2025
```

### Core Endpoints

#### Store Memory

```bash
POST /v1/memories/store
Content-Type: application/json

{
  "content": "Your memory content here",
  "user_id": "user123",
  "session_id": "session456",
  "metadata": {
    "category": "programming",
    "importance": "high"
  }
}
```

#### Search Memories

```bash
POST /v1/memories/search
Content-Type: application/json

{
  "query": "search query",
  "user_id": "user123",
  "limit": 10,
  "threshold": 0.7,
  "include_graph": true
}
```

#### Retrieve for Conversation

```bash
POST /v1/memories/retrieve
Content-Type: application/json

{
  "messages": [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi there!"}
  ],
  "user_id": "user123",
  "session_id": "session456"
}
```

## Configuration

### Environment Variables

Copy `.env.example` to `.env` and configure:

```bash
# API Settings
API_KEY=langmem_api_key_2025

# Ollama Configuration
OLLAMA_URL=http://localhost:11434

# Supabase Configuration
SUPABASE_URL=http://localhost:8000
SUPABASE_KEY=your_supabase_key
SUPABASE_DB_URL=postgresql://postgres:password@localhost:5435/postgres

# Neo4j Configuration
NEO4J_URL=bolt://localhost:7687
NEO4J_USER=neo4j
NEO4J_PASSWORD=langmem_neo4j_password
```
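The endpoints above can be exercised from Python as well as curl. The sketch below (standard library only) builds an authenticated store request using the example payload and the development defaults from this README; the actual send is left commented out since it requires a running stack, and in production the key should come from your `.env`, not a literal.

```python
import json
import urllib.request

BASE_URL = "http://localhost:8765"    # default dev port from this README
API_KEY = "langmem_api_key_2025"      # dev key; load yours from .env instead


def build_request(path: str, payload: dict) -> urllib.request.Request:
    """Build an authenticated POST request for a LangMem endpoint."""
    return urllib.request.Request(
        BASE_URL + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": "Bearer " + API_KEY,
            "Content-Type": "application/json",
        },
        method="POST",
    )


store_req = build_request("/v1/memories/store", {
    "content": "User prefers Python for backend work",
    "user_id": "user123",
    "metadata": {"category": "programming"},
})

# Uncomment with the stack running:
# with urllib.request.urlopen(store_req, timeout=10) as resp:
#     print(json.load(resp))
```

The same `build_request` helper works for `/v1/memories/search` and `/v1/memories/retrieve`, since all three endpoints share the Bearer-token and JSON-body conventions shown above.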
## Development

### Project Structure

```
langmem-project/
├── src/
│   └── api/
│       └── main.py          # Main API application
├── tests/
│   ├── test_api.py          # API unit tests
│   ├── test_integration.py  # Integration tests
│   └── conftest.py          # Test configuration
├── docker-compose.yml       # Docker services
├── Dockerfile               # API container
├── requirements.txt         # Python dependencies
├── start-dev.sh             # Development startup
├── test.sh                  # Test runner
└── README.md                # This file
```

### Running Tests

```bash
# All tests
./test.sh all

# Unit tests only
./test.sh unit

# Integration tests only
./test.sh integration

# Quick tests (no slow tests)
./test.sh quick

# With coverage
./test.sh coverage
```

### Local Development

```bash
# Install dependencies
pip install -r requirements.txt

# Run the API directly
python src/api/main.py

# Run tests
pytest tests/ -v
```

## Integration with Existing Infrastructure

### Ollama Integration

- Uses your existing Ollama instance on localhost:11434
- Leverages nomic-embed-text for embeddings
- Supports any Ollama model for embedding generation

### Supabase Integration

- Connects to your existing Supabase instance
- Uses the pgvector extension for vector storage
- Leverages existing authentication and database

### Docker Network

- Connects to your existing `localai` network
- Integrates seamlessly with other services
- Maintains network isolation and security

## API Documentation

Once running, visit:

- Interactive API docs (Swagger UI): http://localhost:8765/docs
- Alternative docs (ReDoc): http://localhost:8765/redoc
- Health check: http://localhost:8765/health

## Monitoring

### Health Checks

The API provides comprehensive health monitoring:

```bash
curl http://localhost:8765/health
```

Returns status for:

- Overall API health
- Ollama connectivity
- Supabase connection
- Neo4j database
- PostgreSQL database

### Logs

View service logs:

```bash
# API logs
docker-compose logs -f langmem-api

# Neo4j logs
docker-compose logs -f langmem-neo4j

# All services
docker-compose logs -f
```

## Troubleshooting

### Common Issues

1. **API not starting**: Check that Ollama and Supabase are running
2. **Database connection failed**: Verify the database credentials in `.env`
3. **Tests failing**: Ensure all services are healthy before running tests
4. **Network issues**: Confirm the `localai` network exists and is accessible
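Most of these issues come down to a dependency being unreachable. A quick way to narrow that down is a connectivity probe; the sketch below (standard library only) tries each service URL listed under Configuration and reports which ones respond at all. The URL list reflects the development defaults in this README, not a fixed contract.

```python
import urllib.error
import urllib.request

# Development-default endpoints from this README
SERVICES = {
    "langmem-api": "http://localhost:8765/health",
    "ollama": "http://localhost:11434/api/tags",
    "supabase": "http://localhost:8000/health",
}


def reachable(url: str, timeout: float = 2.0) -> bool:
    """Return True if the URL answers at all (any HTTP status counts)."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # the server responded, even if with an error status
    except (urllib.error.URLError, OSError, ValueError):
        return False  # connection refused, DNS failure, bad URL, timeout


if __name__ == "__main__":
    for name, url in SERVICES.items():
        print(name + ": " + ("OK" if reachable(url) else "UNREACHABLE"))
```

A service reported as `UNREACHABLE` here points you at which container or host process to restart before re-running the test suite.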
### Debug Commands

```bash
# Check service status
docker-compose ps

# Check the network
docker network ls | grep localai

# Test Ollama
curl http://localhost:11434/api/tags

# Test Supabase
curl http://localhost:8000/health

# Check logs
docker-compose logs langmem-api
```

## Production Deployment

For production deployment:

1. Update environment variables
2. Use proper secrets management
3. Configure SSL/TLS
4. Set up monitoring and logging
5. Configure backup procedures

## Documentation

The LangMem project includes comprehensive documentation behind authentication protection.

### Accessing Documentation

Start the authenticated documentation server:

```bash
# Start the documentation server on port 8080 (default)
./start-docs-server.sh

# Or specify a custom port
./start-docs-server.sh 8090
```

**Access Credentials:**

- **Username:** `langmem`
- **Password:** `langmem2025`

**Available Documentation:**

- 📖 **Main Docs**: System overview and features
- 🏗️ **Architecture**: Detailed system architecture
- 📡 **API Reference**: Complete API documentation
- 🛠️ **Implementation**: Step-by-step setup guide

### Direct Server Usage

You can also run the documentation server directly:

```bash
python3 docs_server.py [port]
```

Then visit `http://localhost:8080` (or your specified port). Your browser will prompt for the authentication credentials above.

## Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests
5. Run the test suite
6. Submit a pull request

## License

MIT License - see the LICENSE file for details.