Support Custom Prompt for Memory Action Decision (#2371)

New file: docs/features/custom-fact-extraction-prompt.mdx (+169 lines)

---
title: Custom Fact Extraction Prompt
description: 'Enhance your product experience by adding a custom fact extraction prompt tailored to your needs'
icon: "pencil"
iconType: "solid"
---

## Introduction to Custom Fact Extraction Prompt

A custom fact extraction prompt lets you tailor the behavior of your Mem0 instance to a specific use case or domain.
By defining one, you control what information is extracted from the user's message.

To create an effective custom fact extraction prompt:

1. Be specific about the information to extract.
2. Provide few-shot examples to guide the LLM.
3. Ensure the examples follow the format shown below.

Example of a custom fact extraction prompt:

<CodeGroup>
```python Python
custom_fact_extraction_prompt = """
Please only extract entities containing customer support information, order details, and user information.
Here are some few shot examples:

Input: Hi.
Output: {{"facts" : []}}

Input: The weather is nice today.
Output: {{"facts" : []}}

Input: My order #12345 hasn't arrived yet.
Output: {{"facts" : ["Order #12345 not received"]}}

Input: I'm John Doe, and I'd like to return the shoes I bought last week.
Output: {{"facts" : ["Customer name: John Doe", "Wants to return shoes", "Purchase made last week"]}}

Input: I ordered a red shirt, size medium, but received a blue one instead.
Output: {{"facts" : ["Ordered red shirt, size medium", "Received blue shirt instead"]}}

Return the facts and customer information in a json format as shown above.
"""
```

```typescript TypeScript
const customPrompt = `
Please only extract entities containing customer support information, order details, and user information.
Here are some few shot examples:

Input: Hi.
Output: {"facts" : []}

Input: The weather is nice today.
Output: {"facts" : []}

Input: My order #12345 hasn't arrived yet.
Output: {"facts" : ["Order #12345 not received"]}

Input: I am John Doe, and I would like to return the shoes I bought last week.
Output: {"facts" : ["Customer name: John Doe", "Wants to return shoes", "Purchase made last week"]}

Input: I ordered a red shirt, size medium, but received a blue one instead.
Output: {"facts" : ["Ordered red shirt, size medium", "Received blue shirt instead"]}

Return the facts and customer information in a json format as shown above.
`;
```
</CodeGroup>

Here we initialize the custom fact extraction prompt in the config:

<CodeGroup>
```python Python
from mem0 import Memory

config = {
    "llm": {
        "provider": "openai",
        "config": {
            "model": "gpt-4o",
            "temperature": 0.2,
            "max_tokens": 2000,
        }
    },
    "custom_fact_extraction_prompt": custom_fact_extraction_prompt,
    "version": "v1.1"
}

m = Memory.from_config(config_dict=config, user_id="alice")
```

```typescript TypeScript
import { Memory } from 'mem0ai/oss';

const config = {
  version: 'v1.1',
  llm: {
    provider: 'openai',
    config: {
      apiKey: process.env.OPENAI_API_KEY || '',
      model: 'gpt-4-turbo-preview',
      temperature: 0.2,
      maxTokens: 1500,
    },
  },
  customPrompt: customPrompt,
};

const memory = new Memory(config);
```
</CodeGroup>

### Example 1

In this example, we add a memory of a user ordering a laptop. As the output shows, the custom prompt extracts the relevant information from the user's message.

<CodeGroup>
```python Python
m.add("Yesterday, I ordered a laptop, the order id is 12345", user_id="alice")
```

```typescript TypeScript
await memory.add('Yesterday, I ordered a laptop, the order id is 12345', { userId: "user123" });
```

```json Output
{
  "results": [
    {
      "memory": "Ordered a laptop",
      "event": "ADD"
    },
    {
      "memory": "Order ID: 12345",
      "event": "ADD"
    },
    {
      "memory": "Order placed yesterday",
      "event": "ADD"
    }
  ],
  "relations": []
}
```
</CodeGroup>
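
If you want to double-check what the custom prompt actually stored, a quick lookup can help. The snippet below is a minimal sketch, not part of the original example: it assumes the `m` instance and `user_id` from above, Mem0's standard `get_all` call, and the v1.1 response shape with a `results` list.

```python Python
# Minimal sketch (assumed usage): list the memories stored for this user after
# the add() call above, using the v1.1 response format with a "results" list.
stored = m.get_all(user_id="alice")

for item in stored["results"]:
    print(item["memory"])
# Expected to print something like:
#   Ordered a laptop
#   Order ID: 12345
#   Order placed yesterday
```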

### Example 2

In this example, we add a memory of a user who likes going on hikes. Because this message is unrelated to the use case defined in the custom prompt, no memory is added.

<CodeGroup>
```python Python
m.add("I like going to hikes", user_id="alice")
```

```typescript TypeScript
await memory.add('I like going to hikes', { userId: "user123" });
```

```json Output
{
  "results": [],
  "relations": []
}
```
</CodeGroup>

The custom fact extraction prompt processes both the user and assistant messages to extract relevant information according to the defined format.
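
For instance, passing a whole exchange instead of a single string makes both roles available to the extraction step. The snippet below is a minimal sketch of that usage, assuming the message-list form of `add` and the `m` instance configured above; the exact facts extracted depend on your prompt and model.

```python Python
# Minimal sketch (assumed usage, not from the original docs): pass a full
# user/assistant exchange so the custom fact extraction prompt sees both turns.
messages = [
    {"role": "user", "content": "My order #12345 hasn't arrived yet."},
    {"role": "assistant", "content": "Sorry about that! I've escalated order #12345 to our shipping team."},
]

# Facts are extracted from both turns according to the custom prompt,
# e.g. the missing order and the escalation noted by the assistant.
m.add(messages, user_id="alice")
```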