t6_mem0_v2/test-synchronized-delete.py
Claude Code 1998bef6f4 Add MCP HTTP/SSE server and complete n8n integration
Major Changes:
- Implemented MCP HTTP/SSE transport server for n8n and web clients
- Created mcp_server/http_server.py with FastAPI for JSON-RPC 2.0 over HTTP
- Added health check endpoint (/health) for container monitoring
- Refactored mcp-server/ to mcp_server/ (Python module structure)
- Updated Dockerfile.mcp to run HTTP server with health checks
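The core of a JSON-RPC 2.0 server is a small dispatch layer: parse the request envelope, route `method` to a tool, and wrap the result (or a `-32601` error for unknown methods). A minimal sketch of that shape in plain Python follows; the tool names and handler bodies here are illustrative stand-ins, not the actual `mcp_server/http_server.py` implementation:

```python
import json

# Illustrative tool registry -- the real server exposes the MCP memory tools.
TOOLS = {
    "add_memory": lambda params: {"status": "added", "user_id": params.get("user_id")},
    "search_memory": lambda params: {"results": []},
}

def handle_jsonrpc(raw: str) -> str:
    """Dispatch one JSON-RPC 2.0 request string and return the response string."""
    req = json.loads(raw)
    method = req.get("method")
    if method not in TOOLS:
        # Standard JSON-RPC error code for an unknown method
        resp = {"jsonrpc": "2.0", "id": req.get("id"),
                "error": {"code": -32601, "message": "Method not found"}}
    else:
        resp = {"jsonrpc": "2.0", "id": req.get("id"),
                "result": TOOLS[method](req.get("params", {}))}
    return json.dumps(resp)
```

In the HTTP server this dispatcher would sit behind a FastAPI POST route, with the same envelope returned as the response body.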

MCP Server Features:
- 7 memory tools exposed via MCP (including add, search, get, update, delete)
- HTTP/SSE transport on port 8765 for n8n integration
- stdio transport for Claude Code integration
- JSON-RPC 2.0 protocol implementation
- CORS support for web clients
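For the SSE transport, each message to the client is a `text/event-stream` frame: an `event:` line, a `data:` line carrying JSON, and a blank-line terminator. A small sketch of that framing (a standalone helper, not the server's actual code):

```python
import json

def format_sse_event(payload: dict, event: str = "message") -> str:
    """Render one Server-Sent Events frame: event name, JSON data line,
    and the blank-line terminator that ends the frame."""
    return f"event: {event}\ndata: {json.dumps(payload)}\n\n"
```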

n8n Integration:
- Successfully tested with AI Agent workflows
- MCP Client Tool configuration documented
- Working webhook endpoint tested and verified
- System prompt optimized for automatic user_id usage

Documentation:
- Created comprehensive Mintlify documentation site
- Added docs/mcp/introduction.mdx - MCP server overview
- Added docs/mcp/installation.mdx - Installation guide
- Added docs/mcp/tools.mdx - Complete tool reference
- Added docs/examples/n8n.mdx - n8n integration guide
- Added docs/examples/claude-code.mdx - Claude Code setup
- Updated README.md with MCP HTTP server info
- Updated roadmap to mark Phase 1 as complete

Bug Fixes:
- Fixed synchronized delete operations across Supabase and Neo4j
- Updated memory_service.py with proper error handling
- Fixed Neo4j connection issues in delete operations

Configuration:
- Added MCP_HOST and MCP_PORT environment variables
- Updated .env.example with MCP server configuration
- Updated docker-compose.yml with MCP container health checks
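A docker-compose fragment consistent with the settings above might look like the following sketch; the service name, intervals, and use of `curl` in the healthcheck are assumptions, not the repository's actual docker-compose.yml:

```yaml
services:
  mcp-server:            # hypothetical service name
    build:
      dockerfile: Dockerfile.mcp
    environment:
      MCP_HOST: 0.0.0.0
      MCP_PORT: 8765
    ports:
      - "8765:8765"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8765/health"]
      interval: 30s
      timeout: 5s
      retries: 3
```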

Testing:
- Added test scripts for MCP HTTP endpoint verification
- Created test workflows in n8n
- Verified all 7 memory tools working correctly
- Tested synchronized operations across both stores

Version: 1.0.0
Status: Phase 1 Complete - Production Ready

🤖 Generated with Claude Code

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-15 13:56:41 +02:00


#!/usr/bin/env python3
"""Test synchronized deletion across Supabase and Neo4j."""
import asyncio

from mem0 import Memory
from neo4j import GraphDatabase

from config import mem0_config, settings
from memory_cleanup import MemoryCleanup


def check_store_counts(memory, driver):
    """Check counts in both stores."""
    # Supabase
    result = memory.get_all(user_id="sync_test_user")
    supabase_count = len(result.get('results', []) if isinstance(result, dict) else result)
    # Neo4j
    with driver.session() as session:
        result = session.run("MATCH (n {user_id: 'sync_test_user'}) RETURN count(n) as count")
        neo4j_count = result.single()['count']
    return supabase_count, neo4j_count


async def test_synchronized_delete():
    """Test that delete_all removes data from both stores."""
    print("=" * 60)
    print("Synchronized Deletion Test")
    print("=" * 60)

    # Initialize
    memory = Memory.from_config(mem0_config)
    cleanup = MemoryCleanup(memory)
    driver = GraphDatabase.driver(
        settings.neo4j_uri,
        auth=(settings.neo4j_user, settings.neo4j_password)
    )

    # Step 1: Add test memories
    print("\n[1/5] Adding test memories...")
    memory.add(
        messages=[
            {"role": "user", "content": "I love Python and TypeScript"},
            {"role": "assistant", "content": "Noted!"}
        ],
        user_id="sync_test_user"
    )
    print("✓ Test memories added")

    # Step 2: Check both stores (should have data)
    print("\n[2/5] Checking stores BEFORE deletion...")
    supabase_before, neo4j_before = check_store_counts(memory, driver)
    print(f"  Supabase: {supabase_before} memories")
    print(f"  Neo4j: {neo4j_before} nodes")

    # Step 3: Perform synchronized deletion
    print("\n[3/5] Performing synchronized deletion...")
    result = cleanup.delete_all_synchronized(user_id="sync_test_user")
    print("✓ Deletion result:")
    print(f"  Supabase: {'✓' if result['supabase_success'] else '✗'}")
    print(f"  Neo4j: {result['neo4j_nodes_deleted']} nodes deleted")

    # Step 4: Check both stores (should be empty)
    print("\n[4/5] Checking stores AFTER deletion...")
    supabase_after, neo4j_after = check_store_counts(memory, driver)
    print(f"  Supabase: {supabase_after} memories")
    print(f"  Neo4j: {neo4j_after} nodes")

    # Step 5: Cleanup
    print("\n[5/5] Cleanup...")
    cleanup.close()
    driver.close()
    print("✓ Test complete")

    print("\n" + "=" * 60)
    if supabase_after == 0 and neo4j_after == 0:
        print("✅ SUCCESS: Both stores are empty - synchronized deletion works!")
    else:
        print("⚠️ WARNING: Stores not empty after deletion")
        print(f"  Supabase: {supabase_after}, Neo4j: {neo4j_after}")
    print("=" * 60)


if __name__ == "__main__":
    asyncio.run(test_synchronized_delete())