# LangMem - Long-term Memory System for LLM Projects
A comprehensive memory system that integrates with your existing Ollama and Supabase infrastructure to provide long-term memory capabilities for LLM applications.
## Architecture
LangMem uses a hybrid approach combining:
- Vector Search: Supabase with pgvector for semantic similarity
- Graph Relationships: Neo4j for contextual connections
- Embeddings: Ollama with nomic-embed-text model
- API Layer: FastAPI with async support
## Features
- 🧠 Hybrid Memory Retrieval: Vector + Graph search
- 🔍 Semantic Search: Advanced similarity matching
- 👥 Multi-user Support: Isolated user memories
- 📊 Rich Metadata: Flexible memory attributes
- 🔒 Secure API: Bearer token authentication
- 🐳 Docker Ready: Containerized deployment
- 📚 Protected Documentation: Basic auth-protected docs
- 🧪 Comprehensive Tests: Unit and integration tests
## Quick Start

### Prerequisites
- Docker and Docker Compose
- Ollama running on localhost:11434
- Supabase running on the `localai` network
- Python 3.11+ (for development)
### 1. Clone and Setup

```bash
git clone <repository>
cd langmem-project
```
### 2. Start Development Environment

```bash
./start-dev.sh
```
This will:
- Create required Docker network
- Start Neo4j database
- Build and start the API
- Run health checks
### 3. Test the API

```bash
./test.sh
```
## API Endpoints

### Authentication

All endpoints require Bearer token authentication:

```http
Authorization: Bearer langmem_api_key_2025
```
### Core Endpoints

#### Store Memory

```http
POST /v1/memories/store
Content-Type: application/json

{
  "content": "Your memory content here",
  "user_id": "user123",
  "session_id": "session456",
  "metadata": {
    "category": "programming",
    "importance": "high"
  }
}
```
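For reference, a minimal client sketch (assuming the API is served on `localhost:8765`, the port used elsewhere in this README, and that the token matches `API_KEY`):

```python
import requests

API_URL = "http://localhost:8765"
HEADERS = {"Authorization": "Bearer langmem_api_key_2025"}

payload = {
    "content": "User prefers Python for backend work",
    "user_id": "user123",
    "session_id": "session456",
    "metadata": {"category": "programming", "importance": "high"},
}

# Store a memory; raise on any non-2xx response
resp = requests.post(f"{API_URL}/v1/memories/store", json=payload, headers=HEADERS)
resp.raise_for_status()
print(resp.json())
```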
#### Search Memories

```http
POST /v1/memories/search
Content-Type: application/json

{
  "query": "search query",
  "user_id": "user123",
  "limit": 10,
  "threshold": 0.7,
  "include_graph": true
}
```
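The equivalent search call, again as a sketch (the response schema is not documented here, so the example just prints the raw JSON):

```python
import requests

API_URL = "http://localhost:8765"
HEADERS = {"Authorization": "Bearer langmem_api_key_2025"}

# threshold filters out low-similarity matches; include_graph presumably
# toggles the Neo4j graph side of hybrid retrieval
resp = requests.post(
    f"{API_URL}/v1/memories/search",
    json={
        "query": "programming preferences",
        "user_id": "user123",
        "limit": 10,
        "threshold": 0.7,
        "include_graph": True,
    },
    headers=HEADERS,
)
resp.raise_for_status()
print(resp.json())
```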
#### Retrieve for Conversation

```http
POST /v1/memories/retrieve
Content-Type: application/json

{
  "messages": [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi there!"}
  ],
  "user_id": "user123",
  "session_id": "session456"
}
```
## Configuration

### Environment Variables

Copy `.env.example` to `.env` and configure:
```bash
# API Settings
API_KEY=langmem_api_key_2025

# Ollama Configuration
OLLAMA_URL=http://localhost:11434

# Supabase Configuration
SUPABASE_URL=http://localhost:8000
SUPABASE_KEY=your_supabase_key
SUPABASE_DB_URL=postgresql://postgres:password@localhost:5435/postgres

# Neo4j Configuration
NEO4J_URL=bolt://localhost:7687
NEO4J_USER=neo4j
NEO4J_PASSWORD=langmem_neo4j_password
```
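As one way to consume these settings in application code, here is a minimal loader (whether the project uses plain `os.environ`, pydantic settings, or something else is an assumption):

```python
import os

# Defaults mirror the .env example above; secrets get no fallback
API_KEY = os.environ.get("API_KEY", "langmem_api_key_2025")
OLLAMA_URL = os.environ.get("OLLAMA_URL", "http://localhost:11434")
SUPABASE_URL = os.environ.get("SUPABASE_URL", "http://localhost:8000")
SUPABASE_KEY = os.environ["SUPABASE_KEY"]  # required, no safe default
NEO4J_URL = os.environ.get("NEO4J_URL", "bolt://localhost:7687")
NEO4J_USER = os.environ.get("NEO4J_USER", "neo4j")
NEO4J_PASSWORD = os.environ["NEO4J_PASSWORD"]  # required, no safe default
```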
## Development

### Project Structure

```
langmem-project/
├── src/
│   └── api/
│       └── main.py             # Main API application
├── tests/
│   ├── test_api.py             # API unit tests
│   ├── test_integration.py     # Integration tests
│   └── conftest.py             # Test configuration
├── docker-compose.yml          # Docker services
├── Dockerfile                  # API container
├── requirements.txt            # Python dependencies
├── start-dev.sh                # Development startup
├── test.sh                     # Test runner
└── README.md                   # This file
```
### Running Tests

```bash
# All tests
./test.sh all

# Unit tests only
./test.sh unit

# Integration tests only
./test.sh integration

# Quick tests (no slow tests)
./test.sh quick

# With coverage
./test.sh coverage
```
### Local Development

```bash
# Install dependencies
pip install -r requirements.txt

# Run API directly
python src/api/main.py

# Run tests
pytest tests/ -v
```
## Integration with Existing Infrastructure

### Ollama Integration

- Uses your existing Ollama instance on `localhost:11434`
- Leverages `nomic-embed-text` for embeddings
- Supports any Ollama model for embedding generation
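To illustrate the embedding call itself, here is a sketch against a local Ollama instance (`/api/embeddings` is Ollama's standard embeddings endpoint; the sample text is arbitrary):

```python
import requests

OLLAMA_URL = "http://localhost:11434"

# Ask Ollama to embed a piece of text with nomic-embed-text
resp = requests.post(
    f"{OLLAMA_URL}/api/embeddings",
    json={"model": "nomic-embed-text", "prompt": "Your memory content here"},
)
resp.raise_for_status()
embedding = resp.json()["embedding"]
print(len(embedding))  # nomic-embed-text yields 768-dimensional vectors
```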
### Supabase Integration

- Connects to your existing Supabase instance
- Uses the pgvector extension for vector storage
- Leverages existing authentication and database
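For a feel of what pgvector similarity search looks like at the SQL level, here is a hedged sketch; the `memories` table, its columns, and the choice of cosine distance are illustrative assumptions, not the project's actual schema:

```python
import psycopg2

# DSN taken from SUPABASE_DB_URL in the .env example
conn = psycopg2.connect("postgresql://postgres:password@localhost:5435/postgres")

# In practice this vector comes from nomic-embed-text (768 dimensions)
query_embedding = [0.0] * 768
vec = "[" + ",".join(map(str, query_embedding)) + "]"

with conn, conn.cursor() as cur:
    # pgvector's <=> operator is cosine distance; smaller = more similar
    cur.execute(
        """
        SELECT content, 1 - (embedding <=> %s::vector) AS similarity
        FROM memories
        WHERE user_id = %s
        ORDER BY embedding <=> %s::vector
        LIMIT 10
        """,
        (vec, "user123", vec),
    )
    for content, similarity in cur.fetchall():
        print(f"{similarity:.3f}  {content}")
```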
### Docker Network

- Connects to your existing `localai` network
- Seamlessly integrates with other services
- Maintains network isolation and security
## API Documentation

Once running, visit:

- Interactive API docs (Swagger UI): http://localhost:8765/docs
- Alternative docs (ReDoc): http://localhost:8765/redoc
- Health Check: http://localhost:8765/health
## Monitoring

### Health Checks

The API provides comprehensive health monitoring:

```bash
curl http://localhost:8765/health
```
Returns status for:
- Overall API health
- Ollama connectivity
- Supabase connection
- Neo4j database
- PostgreSQL database
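If you need to block on readiness (for example before running the test suite), a small polling helper like this works; it only checks the HTTP status code, since the exact response schema is not documented here:

```python
import time

import requests

def wait_for_healthy(url="http://localhost:8765/health", attempts=10, delay=2.0):
    """Poll the health endpoint until it returns 200 or we give up."""
    for _ in range(attempts):
        try:
            if requests.get(url, timeout=5).status_code == 200:
                return True
        except requests.ConnectionError:
            pass  # API not up yet
        time.sleep(delay)
    return False

if __name__ == "__main__":
    print("healthy" if wait_for_healthy() else "unhealthy")
```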
### Logs

View service logs:

```bash
# API logs
docker-compose logs -f langmem-api

# Neo4j logs
docker-compose logs -f langmem-neo4j

# All services
docker-compose logs -f
```
## Troubleshooting

### Common Issues
- API not starting: Check if Ollama and Supabase are running
- Database connection failed: Verify database credentials in .env
- Tests failing: Ensure all services are healthy before running tests
- Network issues: Confirm localai network exists and is accessible
### Debug Commands

```bash
# Check service status
docker-compose ps

# Check network
docker network ls | grep localai

# Test Ollama
curl http://localhost:11434/api/tags

# Test Supabase
curl http://localhost:8000/health

# Check logs
docker-compose logs langmem-api
```
## Production Deployment
For production deployment:
- Update environment variables
- Use proper secrets management
- Configure SSL/TLS
- Set up monitoring and logging
- Configure backup procedures
## Documentation

The LangMem project includes comprehensive documentation with authentication protection.

### Accessing Documentation

Start the authenticated documentation server:

```bash
# Start documentation server on port 8080 (default)
./start-docs-server.sh

# Or specify a custom port
./start-docs-server.sh 8090
```
Access Credentials:

- Username: `langmem`
- Password: `langmem2025`
Available Documentation:
- 📖 Main Docs: System overview and features
- 🏗️ Architecture: Detailed system architecture
- 📡 API Reference: Complete API documentation
- 🛠️ Implementation: Step-by-step setup guide
### Direct Server Usage

You can also run the documentation server directly:

```bash
python3 docs_server.py [port]
```
Then visit: http://localhost:8080 (or your specified port)
Your browser will prompt for authentication credentials.
## Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests
5. Run the test suite
6. Submit a pull request
## License
MIT License - see LICENSE file for details