Integrate self-hosted Supabase with mem0 system

- Configure mem0 to use self-hosted Supabase instead of Qdrant for vector storage
- Update docker-compose to connect containers to localai network
- Install vecs library for Supabase pgvector integration
- Create comprehensive test suite for Supabase + mem0 integration
- Update documentation to reflect Supabase configuration
- All containers now connected to shared localai network
- Successful vector storage and retrieval tests completed
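The switch from Qdrant to Supabase can be sketched as a config fragment. This is a hedged sketch: the provider name, config keys, and connection string below are assumptions following mem0's `vector_store` configuration pattern; the credentials are placeholders for a local self-hosted Supabase stack.

```python
# Hedged sketch: pointing mem0's vector storage at self-hosted Supabase
# (pgvector via the vecs library) instead of Qdrant.  Keys and values are
# illustrative -- adjust to your deployment.
config = {
    "vector_store": {
        "provider": "supabase",
        "config": {
            # placeholder connection string for a local Supabase Postgres
            "connection_string": "postgresql://postgres:postgres@localhost:54322/postgres",
            "collection_name": "mem0_memories",
        },
    },
}
```

A config like this would typically be handed to the memory layer at construction time (e.g. via something like `Memory.from_config(config)`).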

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
This commit is contained in:
Docker Config Backup
2025-07-31 06:57:10 +02:00
parent 724c553a2e
commit 41cd78207a
36 changed files with 2533 additions and 405 deletions


@@ -0,0 +1,114 @@
---
title: What is Mem0?
icon: "brain"
iconType: "solid"
---
<Snippet file="async-memory-add.mdx" />
Mem0 is a persistent memory layer designed for modern AI agents. Agents can use it to:
- Recall relevant past interactions
- Store important user preferences and factual context
- Learn from successes and failures
It gives AI agents memory so they can remember, learn, and evolve across interactions. Mem0 integrates easily into your agent stack and scales from prototypes to production systems.
## Stateless vs. Stateful Agents
Most current agents are stateless: they process a query, generate a response, and forget everything. Even with huge context windows, everything resets the next session.
Stateful agents, powered by Mem0, are different. They retain context, recall what matters, and behave more intelligently over time.
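The difference can be made concrete with a toy sketch in plain Python (illustrative only; mem0 replaces the naive dict store below with LLM-based extraction and vector search):

```python
class StatelessAgent:
    """Processes each query in isolation and forgets everything afterward."""

    def respond(self, user_id: str, query: str) -> str:
        return f"Answering '{query}' with no memory of {user_id}"


class StatefulAgent:
    """Accumulates per-user context and recalls it on every call."""

    def __init__(self) -> None:
        self.memories: dict[str, list[str]] = {}  # user_id -> past interactions

    def respond(self, user_id: str, query: str) -> str:
        recalled = list(self.memories.setdefault(user_id, []))  # snapshot of past context
        self.memories[user_id].append(query)  # remember this interaction for next time
        return f"Answering '{query}' while recalling {len(recalled)} past interaction(s)"
```

The stateless agent starts from scratch on every call; the stateful one gets more context, and therefore more useful, with each interaction.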
<Frame>
<img src="../images/stateless-vs-stateful-agent.png" />
</Frame>
## Where Memory Fits in the Agent Stack
Mem0 sits alongside your retriever, planner, and LLM. Unlike retrieval-based systems (like RAG), Mem0 tracks past interactions, stores long-term knowledge, and evolves the agent's behavior.
<Frame>
<img src="../images/memory-agent-stack.png" />
</Frame>
Memory is not about pushing more tokens into a prompt but about intelligently retaining the context that matters. The distinction shows up across several dimensions:
| Capability | Context Window | Mem0 Memory |
|------------------|------------------------|-----------------------------|
| Retention | Temporary | Persistent |
| Cost | Grows with input size | Optimized (only what matters) |
| Recall | Token proximity | Relevance + intent-based |
| Personalization | None | Deep, evolving profile |
| Behavior | Reactive | Adaptive |
## Memory vs. RAG: Complementary Tools
RAG (Retrieval-Augmented Generation) is great for fetching facts from documents. But it's stateless. It doesn't know who the user is, what they've asked before, or what failed last time.
Mem0 provides continuity. It stores decisions, preferences, and context—not just knowledge.
| Aspect | RAG | Mem0 Memory |
|--------------------|-------------------------------|-------------------------------|
| Statefulness | Stateless | Stateful |
| Recall Type | Document lookup | Evolving user context |
| Use Case | Ground answers in data | Guide behavior across time |
Together, they're stronger: RAG informs the LLM; Mem0 shapes its memory.
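One common way the two meet is at prompt-assembly time: retrieved documents ground the answer while memories personalize it. A minimal sketch (function and field names are illustrative, not part of mem0's API):

```python
def build_prompt(query: str, rag_docs: list[str], memories: list[str]) -> str:
    """Combine RAG snippets and user memories into a single LLM prompt."""
    doc_block = "\n".join(f"- {d}" for d in rag_docs)        # grounding facts
    mem_block = "\n".join(f"- {m}" for m in memories)        # user continuity
    return (
        f"Relevant documents:\n{doc_block}\n\n"
        f"What we remember about this user:\n{mem_block}\n\n"
        f"Question: {query}"
    )
```

In practice the `rag_docs` would come from your retriever and the `memories` from a memory search scoped to the current user.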
## Types of Memory in Mem0
Mem0 supports different kinds of memory to mimic how humans store information:
- **Working Memory**: short-term session awareness
- **Factual Memory**: long-term structured knowledge (e.g., preferences, settings)
- **Episodic Memory**: records specific past conversations
- **Semantic Memory**: builds general knowledge over time
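The four memory types above can be pictured as differently tagged records. A toy record shape (not mem0's internal schema):

```python
from dataclasses import dataclass


@dataclass
class MemoryRecord:
    kind: str      # "working" | "factual" | "episodic" | "semantic"
    content: str


# Example records, one per memory type
records = [
    MemoryRecord("working", "User is currently debugging a Docker network issue"),
    MemoryRecord("factual", "User prefers responses in German"),
    MemoryRecord("episodic", "Yesterday the user asked about pgvector indexes"),
    MemoryRecord("semantic", "User tends to deploy services with docker-compose"),
]

# Recall can then filter by kind, e.g. only long-term factual knowledge
factual = [r.content for r in records if r.kind == "factual"]
```

The point of the taxonomy is that each kind has a different lifetime and retrieval pattern, even if they share one store.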
## Why Developers Choose Mem0
Mem0 isn't a wrapper around a vector store. It's a full memory engine with:
- **LLM-based extraction**: Intelligently decides what to remember
- **Filtering & decay**: Avoids memory bloat, forgets irrelevant info
- **Cost reduction**: Cuts compute costs by injecting only the relevant memories into the prompt
- **Dashboards & APIs**: Observability, fine-grained control
- **Cloud and OSS**: Use our platform version or our open-source SDK version
You plug Mem0 into your agent framework; it doesn't replace your LLM or workflows. Instead, it adds a smart memory layer on top.
## Core Capabilities
- **Reduced token usage and faster responses**: sub-50 ms lookups
- **Multiple memory types**: semantic, episodic, procedural, and factual support
- **Multimodal support**: handle both text and images
- **Graph memory**: connect insights and entities across sessions
- **Host your way**: either a managed service or a self-hosted version
## Getting Started
Mem0 offers two powerful ways to leverage our technology: our [managed platform](/platform/overview) and our [open source solution](/open-source/overview).
<CardGroup cols={3}>
<Card title="Quickstart" icon="rocket" href="/quickstart">
Integrate Mem0 in a few lines of code
</Card>
<Card title="Playground" icon="play" href="https://app.mem0.ai/playground">
Mem0 in action
</Card>
<Card title="Examples" icon="lightbulb" href="/examples">
See what you can build with Mem0
</Card>
</CardGroup>
## Need help?
If you have any questions, please feel free to reach out to us using one of the following methods:
<Snippet file="get-help.mdx"/>