LangMem - Long-term Memory System for LLM Projects

A comprehensive memory system that integrates with your existing Ollama and Supabase infrastructure to provide long-term memory capabilities for LLM applications.

Architecture

LangMem uses a hybrid approach combining:

  • Vector Search: Supabase with pgvector for semantic similarity
  • Graph Relationships: Neo4j for contextual connections
  • Embeddings: Ollama with nomic-embed-text model
  • API Layer: FastAPI with async support
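
To make the hybrid idea concrete, here is an illustrative sketch of how vector and graph signals can be fused into one ranking; the function name, weights, and input shapes are assumptions for illustration, not LangMem's actual code:

def hybrid_rank(vector_hits, graph_degree, alpha=0.7, k=10):
    """Fuse pgvector similarity with a simple Neo4j connectivity signal.

    vector_hits  -- list of {"id", "content", "similarity"} dicts from vector search
    graph_degree -- {memory_id: count of related memories in the graph}
    """
    def score(hit):
        # Cap the graph contribution so a hub node cannot dominate
        graph_score = min(graph_degree.get(hit["id"], 0) / 10.0, 1.0)
        return alpha * hit["similarity"] + (1 - alpha) * graph_score

    return sorted(vector_hits, key=score, reverse=True)[:k]

# Dummy inputs showing the expected shapes
hits = [
    {"id": "m1", "content": "prefers Python", "similarity": 0.82},
    {"id": "m2", "content": "works on LangMem", "similarity": 0.78},
]
print(hybrid_rank(hits, {"m2": 7}))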

Features

  • 🧠 Hybrid Memory Retrieval: Vector + Graph search
  • 🔍 Semantic Search: Advanced similarity matching
  • 👥 Multi-user Support: Isolated user memories
  • 📊 Rich Metadata: Flexible memory attributes
  • 🔒 Secure API: Bearer token authentication
  • 🐳 Docker Ready: Containerized deployment
  • 📚 Protected Documentation: Basic auth-protected docs
  • 🧪 Comprehensive Tests: Unit and integration tests

Quick Start

Prerequisites

  • Docker and Docker Compose
  • Ollama running on localhost:11434
  • Supabase running on localai network
  • Python 3.11+ (for development)

1. Clone and Setup

git clone <repository>
cd langmem-project

2. Start Development Environment

./start-dev.sh

This will:

  • Create required Docker network
  • Start Neo4j database
  • Build and start the API
  • Run health checks

3. Test the API

./test.sh

API Endpoints

Authentication

All endpoints require Bearer token authentication:

Authorization: Bearer langmem_api_key_2025
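
For example, with Python's requests library (the host and port below match the health-check examples later in this README):

import requests

API_URL = "http://localhost:8765"
HEADERS = {"Authorization": "Bearer langmem_api_key_2025"}

# Requests without this header are rejected
response = requests.get(f"{API_URL}/health", headers=HEADERS)
print(response.status_code)  # 200 when the service is up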

Core Endpoints

Store Memory

POST /v1/memories/store
Content-Type: application/json

{
  "content": "Your memory content here",
  "user_id": "user123",
  "session_id": "session456",
  "metadata": {
    "category": "programming",
    "importance": "high"
  }
}
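
A minimal client call for this endpoint (the response shape is not documented here, so the final print simply shows whatever the API returns):

import requests

resp = requests.post(
    "http://localhost:8765/v1/memories/store",
    headers={"Authorization": "Bearer langmem_api_key_2025"},
    json={
        "content": "User prefers concise answers with code examples",
        "user_id": "user123",
        "session_id": "session456",
        "metadata": {"category": "preferences", "importance": "high"},
    },
)
resp.raise_for_status()
print(resp.json())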

Search Memories

POST /v1/memories/search
Content-Type: application/json

{
  "query": "search query",
  "user_id": "user123",
  "limit": 10,
  "threshold": 0.7,
  "include_graph": true
}
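
Here threshold is presumably the minimum similarity score (0 to 1) a memory must reach to be returned, and include_graph adds related memories from Neo4j. A matching client call:

import requests

resp = requests.post(
    "http://localhost:8765/v1/memories/search",
    headers={"Authorization": "Bearer langmem_api_key_2025"},
    json={
        "query": "what programming languages does the user like?",
        "user_id": "user123",
        "limit": 10,
        "threshold": 0.7,  # presumably the minimum similarity score
        "include_graph": True,
    },
)
print(resp.json())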

Retrieve for Conversation

POST /v1/memories/retrieve
Content-Type: application/json

{
  "messages": [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi there!"}
  ],
  "user_id": "user123",
  "session_id": "session456"
}
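
This endpoint takes recent conversation turns and returns memories relevant to them, ready to inject into an LLM prompt. For example:

import requests

conversation = [
    {"role": "user", "content": "Can you remind me what stack we chose?"},
    {"role": "assistant", "content": "Let me check your stored memories."},
]
resp = requests.post(
    "http://localhost:8765/v1/memories/retrieve",
    headers={"Authorization": "Bearer langmem_api_key_2025"},
    json={"messages": conversation, "user_id": "user123", "session_id": "session456"},
)
print(resp.json())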

Configuration

Environment Variables

Copy .env.example to .env and configure:

# API Settings
API_KEY=langmem_api_key_2025

# Ollama Configuration
OLLAMA_URL=http://localhost:11434

# Supabase Configuration
SUPABASE_URL=http://localhost:8000
SUPABASE_KEY=your_supabase_key
SUPABASE_DB_URL=postgresql://postgres:password@localhost:5435/postgres

# Neo4j Configuration
NEO4J_URL=bolt://localhost:7687
NEO4J_USER=neo4j
NEO4J_PASSWORD=langmem_neo4j_password
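
Inside the application these values are typically read from the environment; a minimal sketch (not necessarily how src/api/main.py loads them):

import os

API_KEY = os.getenv("API_KEY", "langmem_api_key_2025")
OLLAMA_URL = os.getenv("OLLAMA_URL", "http://localhost:11434")
SUPABASE_URL = os.getenv("SUPABASE_URL", "http://localhost:8000")
SUPABASE_KEY = os.environ["SUPABASE_KEY"]          # secrets get no default
SUPABASE_DB_URL = os.environ["SUPABASE_DB_URL"]
NEO4J_URL = os.getenv("NEO4J_URL", "bolt://localhost:7687")
NEO4J_USER = os.getenv("NEO4J_USER", "neo4j")
NEO4J_PASSWORD = os.environ["NEO4J_PASSWORD"]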

Development

Project Structure

langmem-project/
├── src/
│   └── api/
│       └── main.py          # Main API application
├── tests/
│   ├── test_api.py          # API unit tests
│   ├── test_integration.py  # Integration tests
│   └── conftest.py          # Test configuration
├── docker-compose.yml       # Docker services
├── Dockerfile              # API container
├── requirements.txt        # Python dependencies
├── start-dev.sh            # Development startup
├── test.sh                 # Test runner
└── README.md               # This file

Running Tests

# All tests
./test.sh all

# Unit tests only
./test.sh unit

# Integration tests only
./test.sh integration

# Quick tests (no slow tests)
./test.sh quick

# With coverage
./test.sh coverage

Local Development

# Install dependencies
pip install -r requirements.txt

# Run API directly
python src/api/main.py

# Run tests
pytest tests/ -v

Integration with Existing Infrastructure

Ollama Integration

  • Uses your existing Ollama instance on localhost:11434
  • Leverages nomic-embed-text for embeddings
  • Supports any Ollama model for embedding generation
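
Generating an embedding against Ollama's standard HTTP API looks like this (whether LangMem wraps the call exactly this way is an assumption):

import requests

resp = requests.post(
    "http://localhost:11434/api/embeddings",
    json={"model": "nomic-embed-text", "prompt": "Your memory content here"},
)
embedding = resp.json()["embedding"]
print(len(embedding))  # 768 dimensions for nomic-embed-text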

Supabase Integration

  • Connects to your existing Supabase instance
  • Uses pgvector extension for vector storage
  • Leverages existing authentication and database
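
Under the hood, a pgvector similarity query has roughly this shape (the memories table and its columns are illustrative guesses, not LangMem's actual schema):

import psycopg2

conn = psycopg2.connect("postgresql://postgres:password@localhost:5435/postgres")
cur = conn.cursor()

# Placeholder query vector; in practice use a real nomic-embed-text embedding
query_vec = "[" + ",".join(["0"] * 768) + "]"

# <=> is pgvector's cosine-distance operator (smaller = more similar)
cur.execute(
    """
    SELECT id, content, 1 - (embedding <=> %s::vector) AS similarity
    FROM memories
    WHERE user_id = %s
    ORDER BY embedding <=> %s::vector
    LIMIT %s
    """,
    (query_vec, "user123", query_vec, 10),
)
for row in cur.fetchall():
    print(row)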

Docker Network

  • Connects to your existing localai network
  • Seamlessly integrates with other services
  • Maintains network isolation and security

API Documentation

Once running, visit:

  • Swagger UI: http://localhost:8765/docs
  • ReDoc: http://localhost:8765/redoc

These are FastAPI's default interactive documentation routes.

Monitoring

Health Checks

The API provides comprehensive health monitoring:

curl http://localhost:8765/health

Returns status for:

  • Overall API health
  • Ollama connectivity
  • Supabase connection
  • Neo4j database
  • PostgreSQL database
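
A small wait-until-healthy script built on this endpoint:

import time
import requests

for _ in range(30):
    try:
        resp = requests.get("http://localhost:8765/health", timeout=2)
        if resp.status_code == 200:
            print("healthy:", resp.json())
            break
    except requests.ConnectionError:
        pass
    time.sleep(2)
else:
    raise SystemExit("API did not become healthy in time")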

Logs

View service logs:

# API logs
docker-compose logs -f langmem-api

# Neo4j logs
docker-compose logs -f langmem-neo4j

# All services
docker-compose logs -f

Troubleshooting

Common Issues

  1. API not starting: Check if Ollama and Supabase are running
  2. Database connection failed: Verify database credentials in .env
  3. Tests failing: Ensure all services are healthy before running tests
  4. Network issues: Confirm localai network exists and is accessible

Debug Commands

# Check service status
docker-compose ps

# Check network
docker network ls | grep localai

# Test Ollama
curl http://localhost:11434/api/tags

# Test Supabase
curl http://localhost:8000/health

# Check logs
docker-compose logs langmem-api

Production Deployment

For production deployment:

  1. Update environment variables
  2. Use proper secrets management
  3. Configure SSL/TLS
  4. Set up monitoring and logging
  5. Configure backup procedures

Documentation

The LangMem project includes comprehensive documentation served behind basic authentication.

Accessing Documentation

Start the authenticated documentation server:

# Start documentation server on port 8080 (default)
./start-docs-server.sh

# Or specify a custom port
./start-docs-server.sh 8090

Access Credentials:

  • Username: langmem
  • Password: langmem2025

Available Documentation:

  • 📖 Main Docs: System overview and features
  • 🏗️ Architecture: Detailed system architecture
  • 📡 API Reference: Complete API documentation
  • 🛠️ Implementation: Step-by-step setup guide

Direct Server Usage

You can also run the documentation server directly:

python3 docs_server.py [port]

Then visit: http://localhost:8080 (or your specified port)

Your browser will prompt for authentication credentials.

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests
  5. Run the test suite
  6. Submit a pull request

License

MIT License - see LICENSE file for details
