Add Claude N8N toolkit with Docker mock API server
- Added comprehensive N8N development tools collection
- Added Docker-containerized mock API server for testing
- Added complete documentation and setup guides
- Added mock API server with health checks and data endpoints
- Tools include workflow analyzers, debuggers, and controllers

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
claude_n8n/.gitignore (new file, 41 lines, vendored)
@@ -0,0 +1,41 @@
# Python cache files
__pycache__/
*.pyc
*.pyo
*.pyd
.Python

# Virtual environments
venv/
env/
ENV/

# IDE files
.vscode/
.idea/
*.swp
*.swo

# System files
.DS_Store
Thumbs.db

# Logs
*.log
logs/

# Docker
.docker/

# Temporary files
*.tmp
*.bak

# Environment files
.env
.env.local

# Personal credentials (but keep examples)
**/config/*
!**/config/*.example
!**/config/*.sample
@@ -0,0 +1,342 @@
# Automating N8N Workflow Development with Claude Code CLI

Comprehensive integration patterns and practical implementation approaches for self-hosted n8n instances enable sophisticated workflow automation through Claude's AI-powered CLI. **This research reveals proven architecture patterns, API strategies, and development workflows that transform manual n8n workflow creation into automated, iterative development processes**. The integration combines n8n's 400+ node ecosystem with Claude's natural language processing capabilities, creating powerful automation development workflows suitable for enterprise environments.

The convergence of n8n's REST API capabilities with Claude Code CLI's intelligent automation presents unprecedented opportunities for workflow development at scale. Self-hosted n8n instances provide the control and customization needed for complex development scenarios, while Claude Code CLI offers the intelligence to interpret requirements and generate sophisticated automation solutions.

## N8N REST API integration for self-hosted development

**API architecture and authentication**

Self-hosted n8n instances expose a comprehensive REST API exclusively for paid plans, accessible through the built-in Swagger UI at `http(s)://{base_url}:{port}/rest/swagger`. **API key authentication provides secure programmatic access** through the `X-N8N-API-KEY` header, with keys generated via the web UI at Settings > n8n API. Enterprise instances support scoped API keys for granular resource access, while standard keys provide full account permissions.

```bash
# Core workflow management endpoints
POST /rest/workflows               # Create new workflows
PUT  /rest/workflows/{id}          # Update existing workflows
GET  /rest/workflows               # List all workflows
GET  /rest/workflows/{id}          # Get specific workflow
POST /rest/workflows/{id}/execute  # Execute with test data
```
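
As a minimal illustration of this authentication scheme, the snippet below lists workflows from a local instance with the `requests` library; the base URL and key are placeholders:

```python
import requests

N8N_URL = "http://localhost:5678"  # placeholder base URL
API_KEY = "your-api-key"           # generated under Settings > n8n API

# Every call authenticates via the X-N8N-API-KEY header
response = requests.get(
    f"{N8N_URL}/rest/workflows",
    headers={"X-N8N-API-KEY": API_KEY},
    timeout=10,
)
response.raise_for_status()

# n8n typically wraps list results in a "data" key
for workflow in response.json().get("data", []):
    print(workflow.get("id"), workflow.get("name"))
```
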
**Workflow definition structure and manipulation**

N8N workflows utilize a standardized JSON format optimized for programmatic manipulation. **Each workflow contains nodes, connections, settings, and metadata in a predictable structure** that enables automated generation and modification. The format supports dynamic parameter setting through expressions (`{{ $json.fieldName }}`), credential references, and environment-specific configurations.

```json
{
  "name": "API Generated Workflow",
  "nodes": [
    {
      "parameters": {
        "url": "{{ $json.dynamicEndpoint }}",
        "authentication": "predefinedCredentialType"
      },
      "name": "HTTP Request",
      "type": "n8n-nodes-base.httpRequest",
      "typeVersion": 1,
      "position": [460, 300],
      "id": "unique-node-id"
    }
  ],
  "connections": {
    "StartNode": {
      "main": [[{
        "node": "HTTP Request",
        "type": "main",
        "index": 0
      }]]
    }
  }
}
```

**Execution monitoring and error handling**

The API provides comprehensive execution management through `/rest/executions` endpoints, returning detailed execution data including node-level results, error information, and performance metrics. **Execution statuses include waiting, running, succeeded, cancelled, and failed**, with full error stack traces and debugging information available for failed workflows.
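
A short sketch of pulling recent executions for triage; field and status names follow the JSON used elsewhere in this document and may differ between n8n versions:

```python
import requests

N8N_URL = "http://localhost:5678"  # placeholder base URL
API_KEY = "your-api-key"           # placeholder

# Fetch the most recent executions and surface the failed ones
resp = requests.get(
    f"{N8N_URL}/rest/executions",
    headers={"X-N8N-API-KEY": API_KEY},
    params={"limit": 20},
    timeout=10,
)
resp.raise_for_status()

for execution in resp.json().get("data", []):
    if execution.get("status") == "error":
        print(f"Execution {execution['id']} of workflow "
              f"{execution.get('workflowId')} failed")
```
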
## Claude Code CLI integration patterns for workflow automation

**Natural language to workflow translation**

Claude Code CLI transforms natural language requirements into functional n8n workflows through its advanced code generation capabilities. **The CLI supports both interactive conversational mode and headless automation** via `claude -p "prompt"` commands, enabling integration into larger automation pipelines. Built-in tools including WebFetchTool and BatchTool provide native capabilities for API interaction and parallel processing.

```bash
# Generate n8n workflow from requirements
claude -p "Create an n8n workflow that monitors GitHub issues,
  processes them through OpenAI for sentiment analysis, and
  sends results to Slack with proper error handling"

# Iterative refinement
claude -p "Optimize the workflow for rate limiting and add
  retry logic for failed API calls"
```

**Template-driven development workflow**

Claude Code CLI excels at template-based workflow generation, creating reusable patterns for common automation scenarios. **Configuration files enable project-specific workflow templates** with customizable parameters, output paths, and dependency management.

```javascript
// .claude/config.json
{
  "workflows": {
    "api-integration": {
      "templatePath": "./templates/n8n-api-workflow.json",
      "outputPath": "workflows/{name}-integration.json"
    },
    "monitoring": {
      "templatePath": "./templates/n8n-monitoring.json",
      "outputPath": "workflows/{name}-monitor.json"
    }
  }
}
```

**Iterative development and testing integration**

The CLI supports sophisticated iterative development workflows through test-driven development patterns and continuous improvement loops. **Claude can analyze workflow execution results, identify optimization opportunities, and generate improved versions** based on performance data and error patterns.
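
One way to wire such a loop together is to drive the CLI from a script. The `claude -p` invocation matches the headless mode shown above; the helper below and its prompt are otherwise illustrative, and the model is assumed to emit only the updated JSON:

```python
import json
import subprocess


def refine_workflow(workflow_path: str, feedback: str) -> dict:
    """Ask Claude Code (headless mode) to revise a workflow JSON file."""
    prompt = (
        f"Improve the n8n workflow in {workflow_path} based on this "
        f"execution feedback: {feedback}. Output only the updated JSON."
    )
    # -p runs Claude Code non-interactively and prints the result to stdout
    result = subprocess.run(
        ["claude", "-p", prompt],
        capture_output=True, text=True, check=True,
    )
    # Will raise if the model wrapped the JSON in prose; real code should
    # validate the output and retry
    return json.loads(result.stdout)
```
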
## Programmatic workflow development and automation approaches

**API-driven workflow lifecycle management**

Automated workflow development encompasses the complete lifecycle from initial creation through deployment and monitoring. **The n8n API enables programmatic workflow creation, parameter modification, and execution management** while maintaining version control integration and environment-specific configurations.

```python
# Workflow automation example (assumes the `requests` package is installed;
# process_template, execute_workflow and generate_improvements are
# application-specific helpers that are not shown here)
import requests


class N8NWorkflowAutomator:
    def __init__(self, api_key, base_url):
        self.api_key = api_key
        self.base_url = base_url.rstrip('/')

    def api_call(self, method, path, payload=None):
        # Every request authenticates via the X-N8N-API-KEY header
        response = requests.request(
            method,
            f"{self.base_url}/rest{path}",
            json=payload,
            headers={"X-N8N-API-KEY": self.api_key},
        )
        response.raise_for_status()
        return response.json()

    def create_workflow_from_template(self, template, parameters):
        # Generate workflow JSON with dynamic parameters
        workflow_json = self.process_template(template, parameters)

        # Create via API
        response = self.api_call('POST', '/workflows', workflow_json)
        return response['id']

    def test_and_refine(self, workflow_id, test_data):
        # Execute with test data
        execution = self.execute_workflow(workflow_id, test_data)

        # Analyze results and suggest improvements
        if execution['status'] == 'failed':
            return self.generate_improvements(execution['error'])
        return execution
```

**Testing methodologies and frameworks**

N8N supports comprehensive testing approaches including unit testing for individual nodes, integration testing for complete workflows, and data replay capabilities for debugging. **The platform's "Test Step" functionality combined with pinned data enables repeatable testing** without triggering external services.

Key testing strategies include:
- **Node-level validation** with mock data inputs (see the sketch after this list)
- **Data flow verification** between connected nodes
- **Error scenario testing** with invalid inputs
- **Performance testing** under various load conditions
- **Integration testing** with external services
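
To make node-level validation concrete, here is a minimal sketch that assumes the transformation logic of a Code node has been mirrored in a plain Python function; both `filter_recent_items` and the sample payload are illustrative, not part of the toolkit:

```python
from datetime import datetime, timedelta, timezone


def filter_recent_items(items, max_age_minutes=30):
    """Mirror of a hypothetical n8n Code node: keep only recent items."""
    cutoff = datetime.now(timezone.utc) - timedelta(minutes=max_age_minutes)
    return [
        item for item in items
        if datetime.fromisoformat(item["createdAt"]) >= cutoff
    ]


def test_filter_recent_items():
    now = datetime.now(timezone.utc)
    items = [
        {"id": 1, "createdAt": now.isoformat()},                         # fresh
        {"id": 2, "createdAt": (now - timedelta(hours=2)).isoformat()},  # stale
    ]
    assert [item["id"] for item in filter_recent_items(items)] == [1]
```

Run with `pytest`, such tests validate the node's logic without touching a live n8n instance.
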
**CI/CD integration patterns**

Modern workflow development requires robust CI/CD integration supporting automated testing, deployment, and monitoring. **N8N's API-first architecture enables seamless integration with GitHub Actions, Jenkins, and other CI/CD platforms** for automated workflow lifecycle management.

```yaml
# GitHub Actions workflow deployment
name: N8N Workflow Deployment
on:
  push:
    paths: ['workflows/*.json']
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Deploy workflows
        run: |
          for workflow in workflows/*.json; do
            curl -X POST ${{ secrets.N8N_URL }}/api/v1/workflows \
              -H "Content-Type: application/json" \
              -H "X-N8N-API-KEY: ${{ secrets.N8N_API_KEY }}" \
              -d @$workflow
          done
```

## Self-hosted configuration for development automation

**Docker-based development environment**

Self-hosted n8n instances provide complete control over the development environment, enabling custom configurations, security policies, and integration patterns. **PostgreSQL database backend is recommended for development** due to superior performance and concurrent access support compared to SQLite.

```yaml
# Production-ready Docker Compose configuration
version: '3.8'
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: n8n_dev
      POSTGRES_USER: n8n_user
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U n8n_user -d n8n_dev']

  redis:
    # Required backend for queue mode (EXECUTIONS_MODE=queue)
    image: redis:7

  n8n:
    image: docker.n8n.io/n8nio/n8n
    environment:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_DATABASE=n8n_dev
      - DB_POSTGRESDB_USER=n8n_user
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
    ports:
      - "5678:5678"
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_started

volumes:
  postgres_data:
```

**Environment separation and security**

Development automation requires proper environment isolation with separate configurations for development, staging, and production instances. **Environment-specific credentials, database schemas, and security policies** prevent cross-contamination and enable safe testing procedures.

Essential security configurations include:
- **Strong encryption keys** for credential storage
- **Environment variable management** for sensitive data
- **CORS configuration** appropriate for each environment
- **API access controls** with scoped permissions
- **Network isolation** between environments

**Performance optimization and scaling**

Self-hosted instances support queue mode with Redis-backed job processing for improved performance and scalability. **Worker instances can be scaled horizontally** to handle increased workflow execution demands while maintaining response time targets.
## Integration architecture and automation patterns

**Feedback loop implementation for iterative improvement**

Successful workflow automation requires continuous improvement through data-driven feedback loops. **Claude Code CLI can analyze execution logs, performance metrics, and error patterns** to suggest optimizations and generate improved workflow versions automatically.

The four-stage feedback process includes:
1. **Data Collection**: Execution metrics, performance data, error logs
2. **Analysis**: Pattern recognition and bottleneck identification
3. **Decision Making**: Optimization strategies based on analysis
4. **Implementation**: Automated workflow updates and testing (sketched below)
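
Expressed as code, the loop reduces to four pluggable stages; all four callables here are hypothetical placeholders for project-specific logic:

```python
def feedback_cycle(collect, analyze, decide, implement, iterations=3):
    """Run the four-stage loop: collect -> analyze -> decide -> implement."""
    for _ in range(iterations):
        metrics = collect()          # 1. Data Collection: metrics, errors, timings
        findings = analyze(metrics)  # 2. Analysis: bottlenecks and failure patterns
        plan = decide(findings)      # 3. Decision Making: pick an optimization
        if plan is None:             # nothing left to improve
            break
        implement(plan)              # 4. Implementation: apply update and re-test
```
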
**Test data management and execution analysis**

Comprehensive test data management ensures reliable workflow development and validation. **Synthetic data generation, production data masking, and automated cleanup processes** provide consistent testing environments while maintaining data privacy and compliance requirements.
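
A toy synthetic-data generator in this spirit, shaped after the `test_data.json` fixture that appears later in this commit (the `value` field is an illustrative addition):

```python
import random
import time


def make_synthetic_items(count=3):
    """Return mock API data that varies slightly on every call."""
    return {
        "message": "Hello from mock API",
        "timestamp": int(time.time() * 1000),  # epoch milliseconds
        "items": [
            {"id": i, "name": f"Item {i}", "value": random.randint(0, 100)}
            for i in range(1, count + 1)
        ],
    }
```
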
Advanced analysis capabilities include:
- **Automated result correlation** with historical trends
- **Performance benchmarking** across workflow versions
- **Anomaly detection** for unusual execution patterns
- **Predictive failure identification** based on execution history

**Monitoring and observability integration**

Production-ready workflow automation requires comprehensive monitoring spanning metrics, logs, and traces. **N8N provides built-in health check endpoints** (`/healthz`, `/metrics`) that integrate with monitoring platforms like Prometheus and Grafana for complete observability.
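
A lightweight liveness probe against the health endpoint might look like the following; `/healthz` is the documented health check, while exposing `/metrics` typically has to be enabled in the instance configuration first:

```python
import requests

N8N_URL = "http://localhost:5678"  # placeholder base URL


def n8n_is_healthy(timeout: float = 5.0) -> bool:
    """Return True if the n8n instance answers its health check."""
    try:
        response = requests.get(f"{N8N_URL}/healthz", timeout=timeout)
        return response.status_code == 200
    except requests.RequestException:
        return False


if __name__ == "__main__":
    print("n8n healthy:", n8n_is_healthy())
```
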
## Practical implementation examples and code samples

**Complete workflow automation pipeline**

This example demonstrates end-to-end workflow automation combining Claude Code CLI with n8n API integration:

```bash
# 1. Generate workflow from natural language requirements
claude -p "Create an n8n workflow that:
  - Monitors RSS feeds every 15 minutes
  - Filters articles by keywords using AI
  - Posts filtered articles to Slack with sentiment scores
  - Logs all activities to PostgreSQL database
  - Handles rate limiting and error recovery"

# 2. Test and refine the generated workflow
claude -p "Test the RSS monitoring workflow with sample data
  and optimize for performance and error handling"

# 3. Deploy to development environment
curl -X POST http://localhost:5678/api/v1/workflows \
  -H "Content-Type: application/json" \
  -H "X-N8N-API-KEY: $DEV_API_KEY" \
  -d @generated-workflow.json

# 4. Monitor and analyze execution results
claude -p "Analyze the workflow execution logs and suggest
  improvements for better reliability and performance"
```

**Advanced integration architecture**

The following architecture pattern enables sophisticated workflow development automation:

```javascript
// Illustrative architecture: ClaudeCodeClient, N8NAPIClient, and
// IterativeFeedbackLoop are hypothetical wrapper classes; helpers such as
// testWorkflow and analyzeResults are assumed to be implemented elsewhere.
class N8NClaudeIntegration {
  constructor(claudeConfig, n8nConfig) {
    this.claude = new ClaudeCodeClient(claudeConfig);
    this.n8n = new N8NAPIClient(n8nConfig);
    this.feedbackLoop = new IterativeFeedbackLoop();
  }

  async createWorkflowFromRequirements(requirements) {
    // Generate initial workflow using Claude
    const workflowJSON = await this.claude.generateWorkflow(requirements);

    // Deploy to development environment
    const workflowId = await this.n8n.createWorkflow(workflowJSON);

    // Test with sample data
    const results = await this.testWorkflow(workflowId);

    // Iterate based on results
    return await this.refineWorkflow(workflowId, results);
  }

  async refineWorkflow(workflowId, executionResults) {
    const analysis = await this.analyzeResults(executionResults);

    if (analysis.needsImprovement) {
      const improvements = await this.claude.suggestImprovements(analysis);
      const updatedWorkflow = await this.claude.applyImprovements(improvements);
      await this.n8n.updateWorkflow(workflowId, updatedWorkflow);
    }

    return workflowId;
  }
}
```

## Implementation recommendations and best practices

**Development workflow optimization**

Successful automation requires careful attention to development practices and architectural patterns. **Start with simple CLI-API integration patterns before implementing complex workflows**, establish monitoring infrastructure from the beginning, and design for failure with comprehensive error handling.

Key recommendations include:
- **Modular workflow design** with reusable components
- **Comprehensive testing** at node, workflow, and integration levels
- **Version control integration** for workflow lifecycle management
- **Environment-specific configurations** for reliable deployments
- **Continuous monitoring** with automated alerting and response

**Security and compliance considerations**

Enterprise workflow automation demands robust security practices throughout the development lifecycle. **Never hardcode sensitive credentials in workflows**; use environment variables, external secret management systems, and proper access controls for all integrations.

Essential security practices:
- **Strong encryption keys** for n8n credential storage
- **API key rotation** and access management
- **Network isolation** between development and production
- **Audit logging** for all workflow modifications
- **Compliance validation** for sensitive data processing

## Conclusion

The integration of Claude Code CLI with self-hosted n8n instances creates a powerful platform for automated workflow development that scales from simple automation tasks to complex enterprise integration scenarios. **This combination leverages AI-powered natural language processing with robust workflow execution capabilities**, enabling developers to create, test, and deploy sophisticated automation solutions through conversational interfaces.

The architecture patterns and implementation strategies outlined in this research provide a comprehensive foundation for building production-ready workflow automation systems. **Success depends on proper environment configuration, iterative development practices, and comprehensive monitoring**, with emphasis on security, scalability, and maintainability throughout the development lifecycle.

By following these proven patterns and leveraging the documented API capabilities, development teams can achieve significant productivity improvements while maintaining the flexibility and control required for complex automation scenarios. The future of workflow development lies in this intelligent automation approach that combines human creativity with AI-powered implementation capabilities.
claude_n8n/idea.md (new file, 14 lines)
@@ -0,0 +1,14 @@
- This is the basic idea of the tool for Claude Code which should help me with N8N flow development
- Claude should remember it for future use (also in other instances)
- there are a research document and API credential files in this folder
- Basic idea of the workflow is:
1. I tell Claude what to do in the n8n flow, e.g. when there is some bug or I need to add new functionality
2. Claude will download the flow via API.
3. Claude will analyze the flow and the logs of the flow's output, errors and so on.
4. Claude changes something in the flow and uploads it back via the API.
5. One of the tools should be the REST API that will return mock data for the flow. In every run there will be slightly different data. Returned data should be stored in a text file so I can edit it manually.
6. Claude stops execution of all other flows
7. Claude starts the newly edited flow. It should be set up as Inactive and there should be a manual trigger (study the n8n documentation for how to do it; maybe use some webhook node or similar)
8. Claude analyzes the logs (need to find out in the documentation how to do it) - best is maybe to listen to the n8n Docker logs via docker compose
- if it is OK, Claude tells me.
- If there are errors, Claude should repeat from step 2.
claude_n8n/tools/__init__.py (new file, 41 lines)
@@ -0,0 +1,41 @@
"""
N8N Workflow Development Tools

A comprehensive toolkit for automated N8N workflow development, testing, and improvement.
"""

from .n8n_client import N8NClient, N8NConfig
from .workflow_analyzer import WorkflowAnalyzer, AnalysisResult
from .execution_monitor import ExecutionMonitor, ExecutionEvent, ExecutionStatus, create_simple_monitor
from .workflow_improver import WorkflowImprover, TestCase, ImprovementResult
from .n8n_assistant import N8NAssistant
from .mock_api_server import MockAPIServer, create_mock_api_server
from .workflow_controller import WorkflowController, create_workflow_controller
from .docker_log_monitor import DockerLogMonitor, create_docker_log_monitor
from .manual_trigger_manager import ManualTriggerManager, create_manual_trigger_manager

__version__ = "1.0.0"
__author__ = "Claude Code CLI"

__all__ = [
    "N8NClient",
    "N8NConfig",
    "WorkflowAnalyzer",
    "AnalysisResult",
    "ExecutionMonitor",
    "ExecutionEvent",
    "ExecutionStatus",
    "create_simple_monitor",
    "WorkflowImprover",
    "TestCase",
    "ImprovementResult",
    "N8NAssistant",
    "MockAPIServer",
    "create_mock_api_server",
    "WorkflowController",
    "create_workflow_controller",
    "DockerLogMonitor",
    "create_docker_log_monitor",
    "ManualTriggerManager",
    "create_manual_trigger_manager"
]
claude_n8n/tools/api_data/matrix_messages.json (new file, 98 lines)
@@ -0,0 +1,98 @@
[
  {
    "chunk": [
      {
        "type": "m.room.message",
        "room_id": "!xZkScMybPseErYMJDz:matrix.klas.chat",
        "sender": "@klas:matrix.klas.chat",
        "content": {
          "body": "The hybrid deduplication system is now working perfectly. We've successfully implemented content-based analysis that eliminates dependency on N8N workflow variables.",
          "m.mentions": {},
          "msgtype": "m.text"
        },
        "origin_server_ts": 1750017000000,
        "unsigned": {
          "membership": "join",
          "age": 1000
        },
        "event_id": "$memory_1_recent_implementation_success",
        "user_id": "@klas:matrix.klas.chat",
        "age": 1000
      },
      {
        "type": "m.room.message",
        "room_id": "!xZkScMybPseErYMJDz:matrix.klas.chat",
        "sender": "@developer:matrix.klas.chat",
        "content": {
          "body": "Key improvements include age-based filtering (30+ minutes), system message detection, and enhanced duplicate detection using content fingerprinting. The solution addresses the core issue where 10-message chunks were being reprocessed.",
          "m.mentions": {},
          "msgtype": "m.text"
        },
        "origin_server_ts": 1750017060000,
        "unsigned": {
          "membership": "join",
          "age": 2000
        },
        "event_id": "$memory_2_technical_details",
        "user_id": "@developer:matrix.klas.chat",
        "age": 2000
      },
      {
        "type": "m.room.message",
        "room_id": "!xZkScMybPseErYMJDz:matrix.klas.chat",
        "sender": "@ai_assistant:matrix.klas.chat",
        "content": {
          "body": "Memory retention has been significantly improved. The false duplicate detection that was causing 0.2-minute memory lifespans has been resolved through sophisticated content analysis and multiple validation layers.",
          "m.mentions": {},
          "msgtype": "m.text"
        },
        "origin_server_ts": 1750017120000,
        "unsigned": {
          "membership": "join",
          "age": 3000
        },
        "event_id": "$memory_3_retention_improvement",
        "user_id": "@ai_assistant:matrix.klas.chat",
        "age": 3000
      },
      {
        "type": "m.room.message",
        "room_id": "!xZkScMybPseErYMJDz:matrix.klas.chat",
        "sender": "@system_monitor:matrix.klas.chat",
        "content": {
          "body": "Test results: 2/2 scenarios passed. Valid recent messages are processed correctly, while old messages (1106+ minutes) are properly filtered. The enhanced deduplication is fully operational with robust duplicate detection.",
          "m.mentions": {},
          "msgtype": "m.text"
        },
        "origin_server_ts": 1750017180000,
        "unsigned": {
          "membership": "join",
          "age": 4000
        },
        "event_id": "$memory_4_test_results",
        "user_id": "@system_monitor:matrix.klas.chat",
        "age": 4000
      },
      {
        "type": "m.room.message",
        "room_id": "!xZkScMybPseErYMJDz:matrix.klas.chat",
        "sender": "@project_lead:matrix.klas.chat",
        "content": {
          "body": "Next phase: Monitor memory creation and consolidation patterns. The hybrid solution combines deterministic deduplication with AI-driven memory management for optimal performance and accuracy.",
          "m.mentions": {},
          "msgtype": "m.text"
        },
        "origin_server_ts": 1750017240000,
        "unsigned": {
          "membership": "join",
          "age": 5000
        },
        "event_id": "$memory_5_next_phase",
        "user_id": "@project_lead:matrix.klas.chat",
        "age": 5000
      }
    ],
    "start": "t500-17000_0_0_0_0_0_0_0_0_0",
    "end": "t505-17005_0_0_0_0_0_0_0_0_0"
  }
]
claude_n8n/tools/api_data/test_data.json (new file, 18 lines)
@@ -0,0 +1,18 @@
{
  "message": "Hello from mock API",
  "timestamp": 1749928362092,
  "items": [
    {
      "id": 1,
      "name": "Item 1"
    },
    {
      "id": 2,
      "name": "Item 2"
    },
    {
      "id": 3,
      "name": "Item 3"
    }
  ]
}
claude_n8n/tools/docker_log_monitor.py (new file, 426 lines)
@@ -0,0 +1,426 @@
"""
Docker Log Monitor for N8N

This module provides functionality to monitor N8N Docker container logs
in real-time to catch errors and analyze workflow execution.
"""

import subprocess
import threading
import json
import re
import logging
from datetime import datetime
from typing import Dict, List, Any, Optional, Callable
from pathlib import Path
import time

logger = logging.getLogger(__name__)


class DockerLogMonitor:
    """Monitor N8N Docker container logs for error detection and analysis."""

    def __init__(self, container_name: str = "n8n", compose_file: Optional[str] = None):
        """
        Initialize the Docker log monitor.

        Args:
            container_name: Name of the N8N Docker container
            compose_file: Path to docker-compose file (optional)
        """
        self.container_name = container_name
        self.compose_file = compose_file
        self.is_monitoring = False
        self.monitor_thread = None
        self.log_callbacks = []
        self.error_patterns = [
            r"ERROR",
            r"FATAL",
            r"Exception",
            r"Error:",
            r"Failed",
            r"Workflow execution.*failed",
            r"Node.*failed",
            r"Timeout",
            r"Connection.*failed",
            r"Authentication.*failed"
        ]
        self.log_buffer = []
        self.max_buffer_size = 1000

    def add_log_callback(self, callback: Callable[[Dict[str, Any]], None]):
        """
        Add a callback function to be called when new logs are received.

        Args:
            callback: Function to call with log entry data
        """
        self.log_callbacks.append(callback)

    def start_monitoring(self, tail_lines: int = 100, follow: bool = True) -> bool:
        """
        Start monitoring Docker logs.

        Args:
            tail_lines: Number of existing log lines to retrieve
            follow: Whether to follow new logs in real-time

        Returns:
            True if monitoring started successfully
        """
        if self.is_monitoring:
            logger.warning("Log monitoring is already running")
            return False

        try:
            self.is_monitoring = True
            self.monitor_thread = threading.Thread(
                target=self._monitor_logs,
                args=(tail_lines, follow),
                daemon=True
            )
            self.monitor_thread.start()
            logger.info(f"Started monitoring logs for container: {self.container_name}")
            return True

        except Exception as e:
            logger.error(f"Failed to start log monitoring: {e}")
            self.is_monitoring = False
            return False

    def stop_monitoring(self):
        """Stop monitoring Docker logs."""
        self.is_monitoring = False
        if self.monitor_thread and self.monitor_thread.is_alive():
            self.monitor_thread.join(timeout=5)
        logger.info("Stopped log monitoring")

    def _monitor_logs(self, tail_lines: int, follow: bool):
        """Internal method to monitor Docker logs."""
        try:
            # Build docker logs command
            if self.compose_file:
                cmd = [
                    "docker-compose", "-f", self.compose_file,
                    "logs", "--tail", str(tail_lines)
                ]
                if follow:
                    cmd.append("-f")
                cmd.append(self.container_name)
            else:
                cmd = [
                    "docker", "logs", "--tail", str(tail_lines)
                ]
                if follow:
                    cmd.append("-f")
                cmd.append(self.container_name)

            # Start subprocess
            process = subprocess.Popen(
                cmd,
                stdout=subprocess.PIPE,
                stderr=subprocess.STDOUT,
                universal_newlines=True,
                bufsize=1
            )

            # Read logs line by line
            while self.is_monitoring and process.poll() is None:
                line = process.stdout.readline()
                if line:
                    self._process_log_line(line.strip())

            # Process any remaining output
            if process.poll() is not None:
                remaining_output = process.stdout.read()
                if remaining_output:
                    for line in remaining_output.split('\n'):
                        if line.strip():
                            self._process_log_line(line.strip())

        except Exception as e:
            logger.error(f"Error in log monitoring: {e}")
        finally:
            self.is_monitoring = False

    def _process_log_line(self, line: str):
        """Process a single log line."""
        try:
            # Parse timestamp and message
            log_entry = self._parse_log_line(line)

            # Add to buffer
            self.log_buffer.append(log_entry)
            if len(self.log_buffer) > self.max_buffer_size:
                self.log_buffer.pop(0)

            # Check for errors
            if self._is_error_line(line):
                log_entry['is_error'] = True
                logger.warning(f"Error detected in N8N logs: {line}")

            # Call registered callbacks
            for callback in self.log_callbacks:
                try:
                    callback(log_entry)
                except Exception as e:
                    logger.error(f"Error in log callback: {e}")

        except Exception as e:
            logger.error(f"Error processing log line: {e}")

    def _parse_log_line(self, line: str) -> Dict[str, Any]:
        """Parse a log line into structured data."""
        log_entry = {
            'timestamp': datetime.now().isoformat(),
            'raw_line': line,
            'message': line,
            'level': 'INFO',
            'is_error': False,
            'workflow_id': None,
            'execution_id': None,
            'node_name': None
        }

        # Try to extract timestamp from log line
        timestamp_match = re.search(r'(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})', line)
        if timestamp_match:
            log_entry['timestamp'] = timestamp_match.group(1)

        # Extract log level
        level_match = re.search(r'\b(DEBUG|INFO|WARN|ERROR|FATAL)\b', line, re.IGNORECASE)
        if level_match:
            log_entry['level'] = level_match.group(1).upper()

        # Try to extract workflow ID
        workflow_id_match = re.search(r'workflow[_\s]*id[:\s]*([a-zA-Z0-9-]+)', line, re.IGNORECASE)
        if workflow_id_match:
            log_entry['workflow_id'] = workflow_id_match.group(1)

        # Try to extract execution ID
        execution_id_match = re.search(r'execution[_\s]*id[:\s]*([a-zA-Z0-9-]+)', line, re.IGNORECASE)
        if execution_id_match:
            log_entry['execution_id'] = execution_id_match.group(1)

        # Try to extract node name (optionally quoted)
        node_match = re.search(r'node[:\s]*(["\']?)([^"\'\s]+)\1', line, re.IGNORECASE)
        if node_match:
            log_entry['node_name'] = node_match.group(2)

        return log_entry

    def _is_error_line(self, line: str) -> bool:
        """Check if a log line contains an error."""
        for pattern in self.error_patterns:
            if re.search(pattern, line, re.IGNORECASE):
                return True
        return False

    def get_recent_logs(self, count: int = 50,
                        error_only: bool = False) -> List[Dict[str, Any]]:
        """
        Get recent logs from buffer.

        Args:
            count: Number of log entries to return
            error_only: If True, return only error logs

        Returns:
            List of log entries
        """
        logs = self.log_buffer[-count:] if count > 0 else self.log_buffer

        if error_only:
            logs = [log for log in logs if log.get('is_error', False)]

        return logs

    def get_logs_for_workflow(self, workflow_id: str,
                              since_minutes: int = 60) -> List[Dict[str, Any]]:
        """
        Get logs for a specific workflow.

        Args:
            workflow_id: ID of the workflow
            since_minutes: Look for logs in the last N minutes

        Returns:
            List of log entries for the workflow
        """
        cutoff_time = datetime.now().timestamp() - (since_minutes * 60)

        workflow_logs = []
        for log in self.log_buffer:
            if log.get('workflow_id') == workflow_id:
                try:
                    log_time = datetime.fromisoformat(log['timestamp'].replace('Z', '+00:00')).timestamp()
                    if log_time >= cutoff_time:
                        workflow_logs.append(log)
                except Exception:
                    # If timestamp parsing fails, include the log anyway
                    workflow_logs.append(log)

        return workflow_logs

    def get_error_summary(self, since_minutes: int = 60) -> Dict[str, Any]:
        """
        Get a summary of errors in the specified time period.

        Args:
            since_minutes: Look for errors in the last N minutes

        Returns:
            Error summary with counts and patterns
        """
        cutoff_time = datetime.now().timestamp() - (since_minutes * 60)

        errors = []
        for log in self.log_buffer:
            if log.get('is_error', False):
                try:
                    log_time = datetime.fromisoformat(log['timestamp'].replace('Z', '+00:00')).timestamp()
                    if log_time >= cutoff_time:
                        errors.append(log)
                except Exception:
                    # Include errors with unparseable timestamps as well
                    errors.append(log)

        # Analyze error patterns
        error_patterns = {}
        workflow_errors = {}
        node_errors = {}

        for error in errors:
            # Count by message pattern
            message = error.get('message', '')
            for pattern in self.error_patterns:
                if re.search(pattern, message, re.IGNORECASE):
                    error_patterns[pattern] = error_patterns.get(pattern, 0) + 1

            # Count by workflow
            workflow_id = error.get('workflow_id')
            if workflow_id:
                workflow_errors[workflow_id] = workflow_errors.get(workflow_id, 0) + 1

            # Count by node
            node_name = error.get('node_name')
            if node_name:
                node_errors[node_name] = node_errors.get(node_name, 0) + 1

        return {
            'total_errors': len(errors),
            'time_period_minutes': since_minutes,
            'error_patterns': error_patterns,
            'workflow_errors': workflow_errors,
            'node_errors': node_errors,
            'recent_errors': errors[-10:] if errors else []  # Last 10 errors
        }

    def save_logs_to_file(self, filepath: str,
                          error_only: bool = False,
                          workflow_id: Optional[str] = None) -> str:
        """
        Save logs to a file.

        Args:
            filepath: Path to save the logs
            error_only: If True, save only error logs
            workflow_id: If specified, save only logs for this workflow

        Returns:
            Path to the saved file
        """
        logs_to_save = self.log_buffer.copy()

        # Filter by workflow if specified
        if workflow_id:
            logs_to_save = [log for log in logs_to_save if log.get('workflow_id') == workflow_id]

        # Filter by error status if specified
        if error_only:
            logs_to_save = [log for log in logs_to_save if log.get('is_error', False)]

        # Create directory if it doesn't exist
        Path(filepath).parent.mkdir(parents=True, exist_ok=True)

        # Save logs
        with open(filepath, 'w') as f:
            for log in logs_to_save:
                f.write(f"{json.dumps(log)}\n")

        logger.info(f"Saved {len(logs_to_save)} log entries to: {filepath}")
        return filepath

    def is_container_running(self) -> bool:
        """
        Check if the N8N container is running.

        Returns:
            True if container is running
        """
        try:
            if self.compose_file:
                result = subprocess.run(
                    ["docker-compose", "-f", self.compose_file, "ps", "-q", self.container_name],
                    capture_output=True, text=True, timeout=10
                )
            else:
                result = subprocess.run(
                    ["docker", "ps", "-q", "-f", f"name={self.container_name}"],
                    capture_output=True, text=True, timeout=10
                )

            return bool(result.stdout.strip())

        except Exception as e:
            logger.error(f"Error checking container status: {e}")
            return False

    def get_container_info(self) -> Dict[str, Any]:
        """
        Get information about the N8N container.

        Returns:
            Container information
        """
        try:
            if self.compose_file:
                result = subprocess.run(
                    ["docker-compose", "-f", self.compose_file, "ps", self.container_name],
                    capture_output=True, text=True, timeout=10
                )
            else:
                result = subprocess.run(
                    ["docker", "inspect", self.container_name],
                    capture_output=True, text=True, timeout=10
                )

            # docker inspect returns JSON; docker-compose ps output is plain text
            if result.returncode == 0 and not self.compose_file:
                container_data = json.loads(result.stdout)[0]
                return {
                    'name': self.container_name,
                    'status': container_data['State']['Status'],
                    'running': container_data['State']['Running'],
                    'started_at': container_data['State']['StartedAt'],
                    'image': container_data['Config']['Image'],
                    'ports': container_data['NetworkSettings']['Ports']
                }

            return {
                'name': self.container_name,
                'status': 'unknown',
                'info': result.stdout if result.returncode == 0 else result.stderr
            }

        except Exception as e:
            logger.error(f"Error getting container info: {e}")
            return {
                'name': self.container_name,
                'status': 'error',
                'error': str(e)
            }


def create_docker_log_monitor(container_name: str = "n8n",
                              compose_file: Optional[str] = None):
    """Create a Docker log monitor instance."""
    return DockerLogMonitor(container_name, compose_file)
claude_n8n/tools/enhanced_workflow_controller.py (new file, 332 lines)
@@ -0,0 +1,332 @@
"""
Enhanced Workflow Controller for N8N

This module provides enhanced functionality to control N8N workflows with better
error handling and alternative methods for workflow management.
"""

import logging
import subprocess
import time
from typing import List, Dict, Any, Optional
from .n8n_client import N8NClient

logger = logging.getLogger(__name__)


class EnhancedWorkflowController:
    """Enhanced controller for managing N8N workflow states with multiple approaches."""

    def __init__(self, client: Optional[N8NClient] = None):
        """
        Initialize the enhanced workflow controller.

        Args:
            client: N8N client instance. If None, creates a new one.
        """
        self.client = client or N8NClient()
        self._original_states = {}

    def force_refresh_workflow(self, workflow_id: str) -> Dict[str, Any]:
        """
        Force refresh a workflow by downloading the latest version.

        Args:
            workflow_id: ID of the workflow to refresh

        Returns:
            Fresh workflow data
        """
        try:
            # Record a timestamp so callers can see when the refresh happened
            cache_buster = int(time.time())

            workflow = self.client.get_workflow(workflow_id)

            logger.info(f"Force refreshed workflow {workflow_id} at {cache_buster}")

            return {
                'success': True,
                'workflow': workflow,
                'refreshed_at': cache_buster,
                'last_updated': workflow.get('updatedAt')
            }

        except Exception as e:
            logger.error(f"Failed to force refresh workflow {workflow_id}: {e}")
            return {
                'success': False,
                'error': str(e),
                'workflow_id': workflow_id
            }

    def stop_workflows_via_docker(self, exclude_ids: Optional[List[str]] = None) -> Dict[str, Any]:
        """
        Alternative method to stop workflows by restarting N8N container.
        This forces all workflows to inactive state.

        Args:
            exclude_ids: Workflows to reactivate after restart

        Returns:
            Result of the operation
        """
        exclude_ids = exclude_ids or []

        try:
            print("⚠️ WARNING: This will restart the N8N container")
            print("All workflows will be stopped, then excluded ones reactivated")

            # Store current active workflows
            workflows = self.client.list_workflows()
            currently_active = []

            for wf in workflows:
                if wf.get('active', False):
                    currently_active.append({
                        'id': wf.get('id'),
                        'name': wf.get('name', 'Unknown')
                    })

            print(f"Currently active workflows: {len(currently_active)}")

            # Restart container (this stops all workflows)
            print("🔄 Restarting N8N container...")
            result = subprocess.run(
                ["docker", "restart", "n8n-n8n-1"],  # container name as created by this project's compose setup
                capture_output=True,
                text=True,
                timeout=60
            )

            if result.returncode == 0:
                print("✅ Container restarted successfully")

                # Wait for N8N to start
                print("⏱️ Waiting for N8N to start...")
                time.sleep(20)

                # Reactivate excluded workflows
                reactivated = []
                failed_reactivation = []

                for workflow_id in exclude_ids:
                    try:
                        workflow = self.client.get_workflow(workflow_id)
                        updated_workflow = {**workflow, 'active': True}
                        self.client.update_workflow(workflow_id, updated_workflow)

                        reactivated.append({
                            'id': workflow_id,
                            'name': workflow.get('name', 'Unknown')
                        })

                    except Exception as e:
                        failed_reactivation.append({
                            'id': workflow_id,
                            'error': str(e)
                        })

                return {
                    'success': True,
                    'method': 'docker_restart',
                    'previously_active': currently_active,
                    'reactivated': reactivated,
                    'failed_reactivation': failed_reactivation,
                    'message': 'Container restarted, workflows reset'
                }

            else:
                return {
                    'success': False,
                    'error': f"Docker restart failed: {result.stderr}",
                    'method': 'docker_restart'
                }

        except Exception as e:
            return {
                'success': False,
                'error': str(e),
                'method': 'docker_restart'
            }

    def isolate_workflow_for_debugging(self, debug_workflow_id: str) -> Dict[str, Any]:
        """
        Isolate a specific workflow for debugging by stopping all others.
        Uses multiple approaches for maximum effectiveness.

        Args:
            debug_workflow_id: ID of the workflow to keep active for debugging

        Returns:
            Result of isolation process
        """
        print(f"🔧 ISOLATING WORKFLOW FOR DEBUGGING: {debug_workflow_id}")
        print("=" * 60)

        # Step 1: Get current state
        try:
            workflows = self.client.list_workflows()
            active_workflows = [w for w in workflows if w.get('active', False)]

            print(f"Currently active workflows: {len(active_workflows)}")
            for wf in active_workflows:
                name = wf.get('name', 'Unknown')
                wf_id = wf.get('id')
                marker = "🎯 DEBUG" if wf_id == debug_workflow_id else "🔴 TO STOP"
                print(f"  {marker} {name} ({wf_id})")

        except Exception as e:
            print(f"❌ Failed to get workflow list: {e}")
            return {'success': False, 'error': str(e)}

        # Step 2: Try API method first
        print("\n🔄 Attempting API-based workflow stopping...")
        api_result = self._stop_workflows_api(exclude_ids=[debug_workflow_id])

        if api_result['stopped_count'] > 0:
            print(f"✅ API method successful: {api_result['stopped_count']} workflows stopped")
            return {
                'success': True,
                'method': 'api',
                'result': api_result,
                'isolated_workflow': debug_workflow_id
            }

        # Step 3: If API fails, offer docker method
        print("⚠️ API method failed or incomplete")
        print(f"Stopped: {api_result['stopped_count']}, Failed: {api_result['failed_count']}")

        if api_result['failed_count'] > 0:
            print("\n💡 Alternative: Use Docker restart method?")
            print("This will restart N8N container and stop ALL workflows,")
            print(f"then reactivate only the debug workflow: {debug_workflow_id}")

            return {
                'success': False,
                'method': 'api_failed',
                'api_result': api_result,
                'suggestion': 'use_docker_restart',
                'docker_command': f"controller.stop_workflows_via_docker(['{debug_workflow_id}'])"
            }

        return {
            'success': True,
            'method': 'api_partial',
            'result': api_result,
            'isolated_workflow': debug_workflow_id
        }

    def _stop_workflows_api(self, exclude_ids: Optional[List[str]] = None) -> Dict[str, Any]:
        """Internal API-based workflow stopping."""
        exclude_ids = exclude_ids or []
        workflows = self.client.list_workflows()

        stopped = []
        failed = []
        skipped = []

        for workflow in workflows:
            workflow_id = workflow.get('id')
            workflow_name = workflow.get('name', 'Unknown')
            is_active = workflow.get('active', False)

            if workflow_id in exclude_ids:
                skipped.append({
                    'id': workflow_id,
                    'name': workflow_name,
                    'reason': 'excluded'
                })
                continue

            if not is_active:
                skipped.append({
                    'id': workflow_id,
                    'name': workflow_name,
                    'reason': 'already_inactive'
                })
                continue

            # Store original state
            self._original_states[workflow_id] = {
                'active': is_active,
                'name': workflow_name
            }

            try:
                updated_workflow = {**workflow, 'active': False}
                self.client.update_workflow(workflow_id, updated_workflow)

                stopped.append({
                    'id': workflow_id,
                    'name': workflow_name,
                    'was_active': is_active
                })
                logger.info(f"Stopped workflow: {workflow_name} ({workflow_id})")

            except Exception as e:
                failed.append({
                    'id': workflow_id,
                    'name': workflow_name,
                    'error': str(e)
                })
                logger.error(f"Failed to stop workflow {workflow_name}: {e}")

        return {
            'stopped': stopped,
            'failed': failed,
            'skipped': skipped,
            'stopped_count': len(stopped),
            'failed_count': len(failed),
            'skipped_count': len(skipped)
        }

    def verify_workflow_isolation(self, debug_workflow_id: str) -> Dict[str, Any]:
        """
        Verify that only the debug workflow is active.

        Args:
            debug_workflow_id: ID of the workflow that should be active

        Returns:
            Verification result
        """
        try:
            workflows = self.client.list_workflows()
            active_workflows = [w for w in workflows if w.get('active', False)]

            debug_workflow_active = False
            other_active_workflows = []

            for wf in active_workflows:
                wf_id = wf.get('id')
                wf_name = wf.get('name', 'Unknown')

                if wf_id == debug_workflow_id:
                    debug_workflow_active = True
                else:
                    other_active_workflows.append({
                        'id': wf_id,
                        'name': wf_name
                    })

            is_isolated = debug_workflow_active and len(other_active_workflows) == 0

            return {
                'success': True,
                'is_isolated': is_isolated,
                'debug_workflow_active': debug_workflow_active,
                'other_active_count': len(other_active_workflows),
                'other_active_workflows': other_active_workflows,
                'total_active': len(active_workflows)
            }

        except Exception as e:
            return {
                'success': False,
                'error': str(e)
            }


def create_enhanced_workflow_controller():
    """Create an enhanced workflow controller instance."""
    return EnhancedWorkflowController()
claude_n8n/tools/execution_monitor.py (new file, 405 lines)
@@ -0,0 +1,405 @@
#!/usr/bin/env python3
"""
Execution Monitor - Tools for monitoring N8N workflow executions
Provides real-time monitoring, alerting, and execution management
"""

import time
import json
from typing import Dict, List, Optional, Callable, Any
from datetime import datetime, timedelta
from dataclasses import dataclass
from enum import Enum
from threading import Thread, Event
import logging


class ExecutionStatus(Enum):
    """Execution status enumeration"""
    RUNNING = "running"
    SUCCESS = "success"
    ERROR = "error"
    CANCELLED = "cancelled"
    WAITING = "waiting"


@dataclass
class ExecutionEvent:
    """Represents an execution event"""
    execution_id: str
    workflow_id: str
    status: ExecutionStatus
    timestamp: datetime
    duration: Optional[float] = None
    error_message: Optional[str] = None
    node_data: Optional[Dict] = None


class ExecutionMonitor:
    """Monitors N8N workflow executions and provides real-time insights"""

    def __init__(self, n8n_client, poll_interval: int = 5):
        """Initialize execution monitor"""
        self.client = n8n_client
        self.poll_interval = poll_interval
        self.monitoring = False
        self.stop_event = Event()
        self.callbacks = {
            'on_success': [],
            'on_error': [],
            'on_start': [],
            'on_complete': []
        }
        self.tracked_executions = {}
        self.logger = self._setup_logger()

    def _setup_logger(self) -> logging.Logger:
        """Setup logging for the monitor"""
        logger = logging.getLogger('N8NExecutionMonitor')
        logger.setLevel(logging.INFO)

        if not logger.handlers:
            handler = logging.StreamHandler()
            formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
            handler.setFormatter(formatter)
            logger.addHandler(handler)

        return logger

    def add_callback(self, event_type: str, callback: Callable[[ExecutionEvent], None]):
        """Add callback for execution events"""
        if event_type in self.callbacks:
            self.callbacks[event_type].append(callback)
        else:
            raise ValueError(f"Invalid event type: {event_type}")

    def start_monitoring(self, workflow_ids: Optional[List[str]] = None):
        """Start monitoring workflow executions"""
        if self.monitoring:
            self.logger.warning("Monitoring already running")
            return

        self.monitoring = True
        self.stop_event.clear()

        monitor_thread = Thread(target=self._monitor_loop, args=(workflow_ids,))
        monitor_thread.daemon = True
        monitor_thread.start()

        self.logger.info("Execution monitoring started")

    def stop_monitoring(self):
        """Stop monitoring workflow executions"""
        if not self.monitoring:
            return

        self.monitoring = False
        self.stop_event.set()
        self.logger.info("Execution monitoring stopped")

    def _monitor_loop(self, workflow_ids: Optional[List[str]]):
        """Main monitoring loop"""
        while self.monitoring and not self.stop_event.is_set():
            try:
                self._check_executions(workflow_ids)
                time.sleep(self.poll_interval)
            except Exception as e:
                self.logger.error(f"Error in monitoring loop: {e}")
                time.sleep(self.poll_interval)

    def _check_executions(self, workflow_ids: Optional[List[str]]):
        """Check for new or updated executions"""
        try:
            # Get recent executions
            executions = self.client.get_executions(limit=50)

            for execution in executions:
                execution_id = execution.get('id')
                workflow_id = execution.get('workflowId')

                # Filter by workflow IDs if specified
                if workflow_ids and workflow_id not in workflow_ids:
                    continue

                # Check if this is a new or updated execution
                if self._is_execution_updated(execution):
                    event = self._create_execution_event(execution)
                    self._handle_execution_event(event)

                # Update tracked executions
                self.tracked_executions[execution_id] = {
                    'status': execution.get('status'),
                    'last_updated': datetime.now()
                }

        except Exception as e:
            self.logger.error(f"Error checking executions: {e}")

    def _is_execution_updated(self, execution: Dict) -> bool:
        """Check if execution is new or has been updated"""
        execution_id = execution.get('id')
        current_status = execution.get('status')

        if execution_id not in self.tracked_executions:
            return True

        tracked_status = self.tracked_executions[execution_id]['status']
        return current_status != tracked_status

    def _create_execution_event(self, execution: Dict) -> ExecutionEvent:
        """Create ExecutionEvent from execution data"""
        execution_id = execution.get('id')
        workflow_id = execution.get('workflowId')
        status = ExecutionStatus(execution.get('status'))

        # Calculate duration if available
        duration = None
        start_time = execution.get('startedAt')
        finish_time = execution.get('finishedAt')

        if start_time and finish_time:
            start_dt = datetime.fromisoformat(start_time.replace('Z', '+00:00'))
            finish_dt = datetime.fromisoformat(finish_time.replace('Z', '+00:00'))
            duration = (finish_dt - start_dt).total_seconds()

        # Extract error message if available
        error_message = None
        if status == ExecutionStatus.ERROR:
            data = execution.get('data', {})
            if 'resultData' in data and 'error' in data['resultData']:
                error_message = data['resultData']['error'].get('message')

        return ExecutionEvent(
            execution_id=execution_id,
            workflow_id=workflow_id,
            status=status,
            timestamp=datetime.now(),
            duration=duration,
            error_message=error_message,
            node_data=execution.get('data')
        )

    def _handle_execution_event(self, event: ExecutionEvent):
        """Handle execution event and trigger callbacks"""
        self.logger.info(f"Execution {event.execution_id} status: {event.status.value}")

        # Trigger appropriate callbacks
        if event.status == ExecutionStatus.SUCCESS:
            for callback in self.callbacks['on_success']:
                try:
                    callback(event)
                except Exception as e:
                    self.logger.error(f"Error in success callback: {e}")

        elif event.status == ExecutionStatus.ERROR:
            for callback in self.callbacks['on_error']:
                try:
                    callback(event)
                except Exception as e:
                    self.logger.error(f"Error in error callback: {e}")

        elif event.status == ExecutionStatus.RUNNING:
            for callback in self.callbacks['on_start']:
                try:
                    callback(event)
                except Exception as e:
                    self.logger.error(f"Error in start callback: {e}")

        # Always trigger complete callbacks for finished executions
        if event.status in [ExecutionStatus.SUCCESS, ExecutionStatus.ERROR, ExecutionStatus.CANCELLED]:
            for callback in self.callbacks['on_complete']:
                try:
                    callback(event)
                except Exception as e:
                    self.logger.error(f"Error in complete callback: {e}")

    def execute_and_monitor(self, workflow_id: str, test_data: Optional[Dict] = None, timeout: int = 300) -> ExecutionEvent:
        """Execute workflow and monitor until completion"""
        try:
            # Start execution
            result = self.client.execute_workflow(workflow_id, test_data)
            execution_id = result.get('id')

            if not execution_id:
                raise Exception("Failed to get execution ID from workflow execution")

            self.logger.info(f"Started execution {execution_id} for workflow {workflow_id}")

            # Monitor execution until completion
            start_time = time.time()
            while time.time() - start_time < timeout:
                execution = self.client.get_execution(execution_id)
                status = execution.get('status')

                if status in ['success', 'error', 'cancelled']:
                    event = self._create_execution_event(execution)
                    self.logger.info(f"Execution {execution_id} completed with status: {status}")
                    return event

                time.sleep(2)  # Check every 2 seconds

            # Timeout reached
            raise TimeoutError(f"Execution {execution_id} did not complete within {timeout} seconds")

        except Exception as e:
            self.logger.error(f"Error executing and monitoring workflow: {e}")
            raise

    def get_execution_summary(self, hours: int = 24) -> Dict:
        """Get execution summary for the specified time period"""
        try:
            # Get executions from the specified time period
            executions = self.client.get_executions(limit=200)

            # Filter by time period
            cutoff_time = datetime.now() - timedelta(hours=hours)
            recent_executions = []

            for execution in executions:
                started_at = execution.get('startedAt')
                if started_at:
                    exec_time = datetime.fromisoformat(started_at.replace('Z', '+00:00'))
                    if exec_time > cutoff_time:
|
||||
recent_executions.append(execution)
|
||||
|
||||
# Calculate summary statistics
|
||||
total = len(recent_executions)
|
||||
successful = len([e for e in recent_executions if e.get('status') == 'success'])
|
||||
failed = len([e for e in recent_executions if e.get('status') == 'error'])
|
||||
running = len([e for e in recent_executions if e.get('status') == 'running'])
|
||||
|
||||
# Calculate average duration
|
||||
durations = []
|
||||
for execution in recent_executions:
|
||||
start_time = execution.get('startedAt')
|
||||
finish_time = execution.get('finishedAt')
|
||||
if start_time and finish_time:
|
||||
start_dt = datetime.fromisoformat(start_time.replace('Z', '+00:00'))
|
||||
finish_dt = datetime.fromisoformat(finish_time.replace('Z', '+00:00'))
|
||||
durations.append((finish_dt - start_dt).total_seconds())
|
||||
|
||||
avg_duration = sum(durations) / len(durations) if durations else 0
|
||||
|
||||
# Group by workflow
|
||||
workflow_stats = {}
|
||||
for execution in recent_executions:
|
||||
workflow_id = execution.get('workflowId')
|
||||
if workflow_id not in workflow_stats:
|
||||
workflow_stats[workflow_id] = {'total': 0, 'success': 0, 'error': 0}
|
||||
|
||||
workflow_stats[workflow_id]['total'] += 1
|
||||
status = execution.get('status')
|
||||
if status == 'success':
|
||||
workflow_stats[workflow_id]['success'] += 1
|
||||
elif status == 'error':
|
||||
workflow_stats[workflow_id]['error'] += 1
|
||||
|
||||
return {
|
||||
'time_period_hours': hours,
|
||||
'total_executions': total,
|
||||
'successful_executions': successful,
|
||||
'failed_executions': failed,
|
||||
'running_executions': running,
|
||||
'success_rate': (successful / total * 100) if total > 0 else 0,
|
||||
'average_duration_seconds': avg_duration,
|
||||
'workflow_statistics': workflow_stats,
|
||||
'executions': recent_executions
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
self.logger.error(f"Error generating execution summary: {e}")
|
||||
raise
|
||||
|
||||
def wait_for_execution_completion(self, execution_id: str, timeout: int = 300) -> ExecutionEvent:
|
||||
"""Wait for a specific execution to complete"""
|
||||
start_time = time.time()
|
||||
|
||||
while time.time() - start_time < timeout:
|
||||
execution = self.client.get_execution(execution_id)
|
||||
status = execution.get('status')
|
||||
|
||||
if status in ['success', 'error', 'cancelled']:
|
||||
return self._create_execution_event(execution)
|
||||
|
||||
time.sleep(2)
|
||||
|
||||
raise TimeoutError(f"Execution {execution_id} did not complete within {timeout} seconds")
|
||||
|
||||
def cancel_execution(self, execution_id: str) -> bool:
|
||||
"""Cancel a running execution"""
|
||||
try:
|
||||
# N8N API might have a cancel endpoint - this would need to be implemented
|
||||
# based on the actual API capabilities
|
||||
self.logger.warning(f"Cancel execution {execution_id} - implement based on N8N API")
|
||||
return True
|
||||
except Exception as e:
|
||||
self.logger.error(f"Error cancelling execution {execution_id}: {e}")
|
||||
return False
|
||||
|
||||
def get_execution_logs(self, execution_id: str) -> Dict:
|
||||
"""Get detailed logs for an execution"""
|
||||
try:
|
||||
execution = self.client.get_execution(execution_id)
|
||||
|
||||
logs = {
|
||||
'execution_id': execution_id,
|
||||
'status': execution.get('status'),
|
||||
'workflow_id': execution.get('workflowId'),
|
||||
'started_at': execution.get('startedAt'),
|
||||
'finished_at': execution.get('finishedAt'),
|
||||
'node_logs': [],
|
||||
'errors': []
|
||||
}
|
||||
|
||||
# Extract node-level logs if available
|
||||
data = execution.get('data', {})
|
||||
if 'resultData' in data:
|
||||
result_data = data['resultData']
|
||||
|
||||
# Extract errors
|
||||
if 'error' in result_data:
|
||||
logs['errors'].append(result_data['error'])
|
||||
|
||||
# Extract node execution data
|
||||
if 'runData' in result_data:
|
||||
for node_name, node_runs in result_data['runData'].items():
|
||||
for run in node_runs:
|
||||
logs['node_logs'].append({
|
||||
'node': node_name,
|
||||
'start_time': run.get('startTime'),
|
||||
'execution_time': run.get('executionTime'),
|
||||
'data': run.get('data', {}),
|
||||
'error': run.get('error')
|
||||
})
|
||||
|
||||
return logs
|
||||
|
||||
except Exception as e:
|
||||
self.logger.error(f"Error getting execution logs: {e}")
|
||||
raise
|
||||
|
||||
|
||||
# Utility functions for common monitoring scenarios
|
||||
|
||||
def create_simple_monitor(n8n_client) -> ExecutionMonitor:
|
||||
"""Create a simple execution monitor with basic logging"""
|
||||
monitor = ExecutionMonitor(n8n_client)
|
||||
|
||||
def log_success(event: ExecutionEvent):
|
||||
print(f"✅ Execution {event.execution_id} completed successfully in {event.duration:.2f}s")
|
||||
|
||||
def log_error(event: ExecutionEvent):
|
||||
print(f"❌ Execution {event.execution_id} failed: {event.error_message}")
|
||||
|
||||
def log_start(event: ExecutionEvent):
|
||||
print(f"🚀 Execution {event.execution_id} started")
|
||||
|
||||
monitor.add_callback('on_success', log_success)
|
||||
monitor.add_callback('on_error', log_error)
|
||||
monitor.add_callback('on_start', log_start)
|
||||
|
||||
return monitor
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
print("Execution Monitor initialized successfully.")
|
||||
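A minimal wiring-up sketch for the monitor above (not part of the committed file): it assumes the toolkit's `n8n_client` module exposes the `N8NClient` referenced elsewhere in this commit, and the workflow ID and test payload are placeholders.

```python
from n8n_client import N8NClient  # assumed module name from this toolkit

client = N8NClient()
monitor = create_simple_monitor(client)

# Run one workflow to completion and inspect the resulting event
event = monitor.execute_and_monitor("my-workflow-id", test_data={"foo": "bar"}, timeout=120)
if event.error_message:
    print(f"Failed: {event.error_message}")

# Or watch all executions in the background and report on the last 24 hours
monitor.start_monitoring()
summary = monitor.get_execution_summary(hours=24)
print(f"Success rate: {summary['success_rate']:.1f}%")
monitor.stop_monitoring()
```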
280
claude_n8n/tools/improved_n8n_client.py
Normal file
@@ -0,0 +1,280 @@
#!/usr/bin/env python3
"""
Improved N8N API Client - Enhanced error logging and execution analysis
"""

import json
import requests
import time
from typing import Dict, List, Optional, Any
from dataclasses import dataclass
from datetime import datetime


@dataclass
class ExecutionError:
    """Detailed execution error information"""
    execution_id: str
    node_name: str
    error_message: str
    error_type: str
    stack_trace: Optional[List[str]] = None
    timestamp: Optional[str] = None


class ImprovedN8NClient:
    """Enhanced N8N client with better error handling and logging"""

    def __init__(self, config_path: str = "n8n_api_credentials.json"):
        """Initialize enhanced N8N client"""
        self.config = self._load_config(config_path)
        self.session = requests.Session()
        self.session.headers.update(self.config['headers'])

    def _load_config(self, config_path: str) -> Dict:
        """Load N8N configuration from JSON file"""
        with open(config_path, 'r') as f:
            return json.load(f)

    def _make_request(self, method: str, endpoint: str, data: Optional[Dict] = None) -> Dict:
        """Make authenticated request to N8N API with enhanced error handling"""
        url = f"{self.config['api_url'].rstrip('/')}/{endpoint.lstrip('/')}"

        try:
            if method.upper() == 'GET':
                response = self.session.get(url, params=data)
            elif method.upper() == 'POST':
                response = self.session.post(url, json=data)
            elif method.upper() == 'PUT':
                response = self.session.put(url, json=data)
            elif method.upper() == 'DELETE':
                response = self.session.delete(url)
            else:
                raise ValueError(f"Unsupported HTTP method: {method}")

            response.raise_for_status()
            return response.json() if response.content else {}

        except requests.exceptions.RequestException as e:
            print(f"API Request failed: {method} {url}")
            print(f"Error: {e}")
            if hasattr(e, 'response') and e.response is not None:
                print(f"Response status: {e.response.status_code}")
                print(f"Response text: {e.response.text[:500]}")
            raise

    def get_execution_with_logs(self, execution_id: str) -> Dict:
        """Get execution with detailed logging information"""
        try:
            # Get basic execution data
            execution = self._make_request('GET', f'/executions/{execution_id}')

            # Try to get additional log data if available
            # Some N8N instances may have additional endpoints for logs
            try:
                logs = self._make_request('GET', f'/executions/{execution_id}/logs')
                execution['detailed_logs'] = logs
            except Exception:
                # Logs endpoint may not exist, continue without it
                pass

            return execution

        except Exception as e:
            print(f"Error getting execution logs: {e}")
            raise

    def analyze_execution_errors(self, execution_id: str) -> List[ExecutionError]:
        """Analyze execution and extract detailed error information"""
        errors = []

        try:
            execution = self.get_execution_with_logs(execution_id)

            # Check execution status
            if execution.get('status') != 'error':
                return errors  # No errors to analyze

            # Extract errors from execution data
            if 'data' in execution:
                data = execution['data']

                # Check for global errors
                if 'resultData' in data:
                    result_data = data['resultData']

                    # Global execution error
                    if 'error' in result_data:
                        global_error = result_data['error']
                        error = ExecutionError(
                            execution_id=execution_id,
                            node_name="GLOBAL",
                            error_message=str(global_error.get('message', global_error)),
                            error_type=global_error.get('type', 'unknown'),
                            timestamp=execution.get('stoppedAt')
                        )

                        # Extract stack trace if available
                        if 'stack' in global_error:
                            error.stack_trace = global_error['stack'].split('\n')
                        elif 'stackTrace' in global_error:
                            error.stack_trace = global_error['stackTrace']

                        errors.append(error)

                    # Node-specific errors
                    if 'runData' in result_data:
                        for node_name, node_runs in result_data['runData'].items():
                            for run_index, run in enumerate(node_runs):
                                if 'error' in run:
                                    node_error = run['error']

                                    error = ExecutionError(
                                        execution_id=execution_id,
                                        node_name=node_name,
                                        error_message=str(node_error.get('message', node_error)),
                                        error_type=node_error.get('type', 'unknown'),
                                        timestamp=run.get('startTime')
                                    )

                                    # Extract stack trace
                                    if 'stack' in node_error:
                                        error.stack_trace = node_error['stack'].split('\n')
                                    elif 'stackTrace' in node_error:
                                        error.stack_trace = node_error['stackTrace']

                                    errors.append(error)

            return errors

        except Exception as e:
            print(f"Error analyzing execution errors: {e}")
            return errors

    def get_recent_errors(self, workflow_id: str, limit: int = 10) -> List[ExecutionError]:
        """Get recent errors for a workflow"""
        all_errors = []

        try:
            # Get recent executions
            executions = self._make_request('GET', '/executions', {
                'workflowId': workflow_id,
                'limit': limit,
                'includeData': True
            })

            execution_list = executions.get('data', executions) if isinstance(executions, dict) else executions

            for execution in execution_list:
                if execution.get('status') == 'error':
                    exec_id = execution.get('id')
                    if exec_id:
                        errors = self.analyze_execution_errors(exec_id)
                        all_errors.extend(errors)

            return all_errors

        except Exception as e:
            print(f"Error getting recent errors: {e}")
            return all_errors

    def find_template_errors(self, workflow_id: str) -> List[ExecutionError]:
        """Find specific template-related errors"""
        all_errors = self.get_recent_errors(workflow_id, limit=20)

        template_errors = []
        for error in all_errors:
            error_msg = error.error_message.lower()
            if any(keyword in error_msg for keyword in ['template', 'single', 'brace', 'f-string']):
                template_errors.append(error)

        return template_errors

    def execute_workflow_with_monitoring(self, workflow_id: str, test_data: Optional[Dict] = None, timeout: int = 300) -> Dict:
        """Execute workflow and monitor for detailed error information"""
        try:
            # Start execution
            result = self._make_request('POST', f'/workflows/{workflow_id}/execute', test_data)

            # Extract execution ID
            exec_id = None
            if 'data' in result and 'id' in result['data']:
                exec_id = result['data']['id']
            elif 'id' in result:
                exec_id = result['id']

            if not exec_id:
                print("Warning: Could not get execution ID")
                return result

            print(f"Started execution {exec_id}, monitoring...")

            # Monitor execution
            start_time = time.time()
            while time.time() - start_time < timeout:
                execution = self.get_execution_with_logs(exec_id)
                status = execution.get('status')

                if status in ['success', 'error', 'cancelled']:
                    if status == 'error':
                        print(f"Execution {exec_id} failed, analyzing errors...")
                        errors = self.analyze_execution_errors(exec_id)
                        execution['analyzed_errors'] = errors

                        # Print detailed error information
                        for error in errors:
                            print(f"\nError in {error.node_name}:")
                            print(f"  Message: {error.error_message}")
                            print(f"  Type: {error.error_type}")
                            if error.stack_trace:
                                print(f"  Stack trace: {error.stack_trace[:3]}")  # First 3 lines

                    return execution

                time.sleep(2)

            raise TimeoutError(f"Execution {exec_id} did not complete within {timeout} seconds")

        except Exception as e:
            print(f"Error in monitored execution: {e}")
            raise

    # Include essential methods from original client
    def list_workflows(self) -> List[Dict]:
        """Get list of all workflows"""
        response = self._make_request('GET', '/workflows')
        return response.get('data', response) if isinstance(response, dict) else response

    def get_workflow(self, workflow_id: str) -> Dict:
        """Get specific workflow by ID"""
        return self._make_request('GET', f'/workflows/{workflow_id}')

    def update_workflow(self, workflow_id: str, workflow_data: Dict) -> Dict:
        """Update existing workflow"""
        return self._make_request('PUT', f'/workflows/{workflow_id}', workflow_data)

    def get_executions(self, workflow_id: Optional[str] = None, limit: int = 20) -> Dict:
        """Get workflow executions"""
        params = {"limit": limit, "includeData": True}
        if workflow_id:
            params["workflowId"] = workflow_id
        return self._make_request('GET', '/executions', params)


if __name__ == "__main__":
    # Test the improved client
    try:
        client = ImprovedN8NClient()
        print("Enhanced N8N client initialized successfully")

        # Test with Matrix workflow
        template_errors = client.find_template_errors('w6Sz5trluur5qdMj')
        if template_errors:
            print(f"\nFound {len(template_errors)} template errors:")
            for error in template_errors:
                print(f"- {error.node_name}: {error.error_message}")
        else:
            print("No template errors found")

    except Exception as e:
        print(f"Failed to initialize enhanced client: {e}")
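A short usage sketch for the enhanced client (illustrative only): the workflow ID is a placeholder, and `n8n_api_credentials.json` must provide the `api_url` and `headers` keys that `_load_config` expects.

```python
# Usage sketch: the workflow ID is a placeholder
client = ImprovedN8NClient("n8n_api_credentials.json")

# Surface recent failures for a workflow, node by node
for err in client.get_recent_errors("my-workflow-id", limit=5):
    print(f"{err.node_name}: {err.error_type} - {err.error_message}")

# Or execute and have failures analyzed automatically
result = client.execute_workflow_with_monitoring("my-workflow-id", timeout=120)
for err in result.get('analyzed_errors', []):
    print(err.node_name, err.error_message)
```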
456
claude_n8n/tools/manual_trigger_manager.py
Normal file
@@ -0,0 +1,456 @@
"""
Manual Trigger Manager for N8N Workflows

This module provides functionality to manage manual triggers for N8N workflows,
including webhook-based triggers and manual execution capabilities.
"""

import json
import uuid
import logging
from typing import Dict, List, Any, Optional
from .n8n_client import N8NClient

logger = logging.getLogger(__name__)


class ManualTriggerManager:
    """Manager for creating and handling manual triggers in N8N workflows."""

    def __init__(self, client: Optional[N8NClient] = None):
        """
        Initialize the manual trigger manager.

        Args:
            client: N8N client instance. If None, creates a new one.
        """
        self.client = client or N8NClient()

    def add_manual_trigger_to_workflow(self, workflow_id: str,
                                       trigger_type: str = "manual") -> Dict[str, Any]:
        """
        Add a manual trigger to an existing workflow.

        Args:
            workflow_id: ID of the workflow to modify
            trigger_type: Type of trigger ('manual', 'webhook', 'http')

        Returns:
            Result of the operation including trigger details
        """
        try:
            workflow = self.client.get_workflow(workflow_id)
            workflow_name = workflow.get('name', 'Unknown')

            # Check if workflow already has the specified trigger type
            nodes = workflow.get('nodes', [])
            existing_trigger = self._find_trigger_node(nodes, trigger_type)

            if existing_trigger:
                return {
                    'success': True,
                    'message': f"Workflow {workflow_name} already has a {trigger_type} trigger",
                    'trigger_node': existing_trigger,
                    'added_new_trigger': False
                }

            # Create the appropriate trigger node
            trigger_node = self._create_trigger_node(trigger_type, workflow_id)

            # Add trigger node to workflow
            updated_nodes = [trigger_node] + nodes

            # Update connections to connect trigger to first existing node
            connections = workflow.get('connections', {})
            if nodes and not self._has_trigger_connections(connections):
                first_node_name = nodes[0].get('name')
                if first_node_name:
                    trigger_name = trigger_node['name']
                    if trigger_name not in connections:
                        connections[trigger_name] = {}
                    connections[trigger_name]['main'] = [[{
                        'node': first_node_name,
                        'type': 'main',
                        'index': 0
                    }]]

            # Update workflow
            updated_workflow = {
                **workflow,
                'nodes': updated_nodes,
                'connections': connections
            }

            result = self.client.update_workflow(workflow_id, updated_workflow)

            logger.info(f"Added {trigger_type} trigger to workflow: {workflow_name} ({workflow_id})")

            return {
                'success': True,
                'message': f"Successfully added {trigger_type} trigger to {workflow_name}",
                'trigger_node': trigger_node,
                'workflow': result,
                'added_new_trigger': True
            }

        except Exception as e:
            error_msg = f"Failed to add {trigger_type} trigger to workflow {workflow_id}: {e}"
            logger.error(error_msg)
            return {
                'success': False,
                'error': error_msg,
                'workflow_id': workflow_id
            }

    def create_webhook_trigger_workflow(self, workflow_name: str,
                                        webhook_path: Optional[str] = None) -> Dict[str, Any]:
        """
        Create a new workflow with a webhook trigger.

        Args:
            workflow_name: Name for the new workflow
            webhook_path: Custom webhook path (if None, generates random)

        Returns:
            Created workflow with webhook details
        """
        try:
            if not webhook_path:
                webhook_path = f"test-webhook-{str(uuid.uuid4())[:8]}"

            # Create webhook trigger node
            webhook_node = {
                "id": str(uuid.uuid4()),
                "name": "Webhook Trigger",
                "type": "n8n-nodes-base.webhook",
                "typeVersion": 1,
                "position": [100, 100],
                "webhookId": str(uuid.uuid4()),
                "parameters": {
                    "httpMethod": "POST",
                    "path": webhook_path,
                    "responseMode": "responseNode",
                    "options": {
                        "noResponseBody": False
                    }
                }
            }

            # Create a simple response node
            response_node = {
                "id": str(uuid.uuid4()),
                "name": "Response",
                "type": "n8n-nodes-base.respondToWebhook",
                "typeVersion": 1,
                "position": [300, 100],
                "parameters": {
                    "options": {}
                }
            }

            # Create workflow structure
            workflow_data = {
                "name": workflow_name,
                "active": False,  # Start inactive for testing
                "nodes": [webhook_node, response_node],
                "connections": {
                    "Webhook Trigger": {
                        "main": [[{
                            "node": "Response",
                            "type": "main",
                            "index": 0
                        }]]
                    }
                },
                "settings": {
                    "timezone": "UTC"
                }
            }

            # Create the workflow
            created_workflow = self.client.create_workflow(workflow_data)

            # Get N8N base URL for webhook URL construction
            webhook_url = f"{self.client.base_url.replace('/api/v1', '')}/webhook-test/{webhook_path}"

            logger.info(f"Created webhook trigger workflow: {workflow_name}")

            return {
                'success': True,
                'workflow': created_workflow,
                'webhook_url': webhook_url,
                'webhook_path': webhook_path,
                'test_command': f"curl -X POST {webhook_url} -H 'Content-Type: application/json' -d '{{}}'",
                'message': f"Created workflow {workflow_name} with webhook trigger"
            }

        except Exception as e:
            error_msg = f"Failed to create webhook trigger workflow: {e}"
            logger.error(error_msg)
            return {
                'success': False,
                'error': error_msg
            }

    def execute_workflow_manually(self, workflow_id: str,
                                  input_data: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
        """
        Execute a workflow manually with optional input data.

        Args:
            workflow_id: ID of the workflow to execute
            input_data: Optional input data for the execution

        Returns:
            Execution result
        """
        try:
            # Execute workflow
            if input_data:
                execution = self.client.execute_workflow(workflow_id, input_data)
            else:
                execution = self.client.execute_workflow(workflow_id)

            logger.info(f"Manually executed workflow: {workflow_id}")

            return {
                'success': True,
                'execution': execution,
                'execution_id': execution.get('id'),
                'message': f"Successfully executed workflow {workflow_id}"
            }

        except Exception as e:
            error_msg = f"Failed to execute workflow {workflow_id} manually: {e}"
            logger.error(error_msg)
            return {
                'success': False,
                'error': error_msg,
                'workflow_id': workflow_id
            }

    def trigger_webhook_workflow(self, webhook_url: str,
                                 data: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
        """
        Trigger a webhook-enabled workflow.

        Args:
            webhook_url: URL of the webhook
            data: Data to send to webhook

        Returns:
            Webhook response
        """
        import requests

        try:
            if data is None:
                data = {"test": True, "timestamp": "2024-01-01T00:00:00Z"}

            response = requests.post(
                webhook_url,
                json=data,
                headers={'Content-Type': 'application/json'},
                timeout=30
            )

            logger.info(f"Triggered webhook: {webhook_url}")

            return {
                'success': True,
                'status_code': response.status_code,
                'response_data': response.json() if response.headers.get('content-type', '').startswith('application/json') else response.text,
                'webhook_url': webhook_url,
                'sent_data': data
            }

        except Exception as e:
            error_msg = f"Failed to trigger webhook {webhook_url}: {e}"
            logger.error(error_msg)
            return {
                'success': False,
                'error': error_msg,
                'webhook_url': webhook_url
            }

    def setup_test_workflow(self, workflow_id: str,
                            test_data: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
        """
        Set up a workflow for testing by ensuring it has a manual trigger and is inactive.

        Args:
            workflow_id: ID of the workflow to set up
            test_data: Optional test data to associate with the workflow

        Returns:
            Setup result with testing instructions
        """
        try:
            workflow = self.client.get_workflow(workflow_id)
            workflow_name = workflow.get('name', 'Unknown')

            # Ensure workflow is inactive
            if workflow.get('active', False):
                updated_workflow = {**workflow, 'active': False}
                workflow = self.client.update_workflow(workflow_id, updated_workflow)
                logger.info(f"Set workflow {workflow_name} to inactive for testing")

            # Add manual trigger if not present
            trigger_result = self.add_manual_trigger_to_workflow(workflow_id, 'manual')

            # Save test data if provided
            test_info = {}
            if test_data:
                from .mock_data_generator import MockDataGenerator
                data_gen = MockDataGenerator()
                test_file = data_gen.save_mock_data([test_data], f"test_data_{workflow_id}")
                test_info['test_data_file'] = test_file

            return {
                'success': True,
                'workflow_name': workflow_name,
                'workflow_id': workflow_id,
                'is_inactive': True,
                'has_manual_trigger': True,
                'trigger_setup': trigger_result,
                'test_info': test_info,
                'testing_instructions': {
                    'manual_execution': f"Use execute_workflow_manually('{workflow_id}') to test",
                    'monitor_logs': "Use DockerLogMonitor to watch execution logs",
                    'check_results': f"Use client.get_executions(workflow_id='{workflow_id}') to see results"
                }
            }

        except Exception as e:
            error_msg = f"Failed to set up workflow for testing {workflow_id}: {e}"
            logger.error(error_msg)
            return {
                'success': False,
                'error': error_msg,
                'workflow_id': workflow_id
            }

    def _find_trigger_node(self, nodes: List[Dict[str, Any]],
                           trigger_type: str) -> Optional[Dict[str, Any]]:
        """Find a trigger node of a specific type in the nodes list."""
        trigger_types = {
            'manual': 'n8n-nodes-base.manualTrigger',
            'webhook': 'n8n-nodes-base.webhook',
            'http': 'n8n-nodes-base.httpRequest'
        }

        target_type = trigger_types.get(trigger_type)
        if not target_type:
            return None

        for node in nodes:
            if node.get('type') == target_type:
                return node

        return None

    def _create_trigger_node(self, trigger_type: str, workflow_id: str) -> Dict[str, Any]:
        """Create a trigger node of the specified type."""
        node_id = str(uuid.uuid4())

        if trigger_type == 'manual':
            return {
                "id": node_id,
                "name": "Manual Trigger",
                "type": "n8n-nodes-base.manualTrigger",
                "typeVersion": 1,
                "position": [50, 100],
                "parameters": {}
            }

        elif trigger_type == 'webhook':
            webhook_path = f"test-{workflow_id[:8]}-{str(uuid.uuid4())[:8]}"
            return {
                "id": node_id,
                "name": "Webhook Trigger",
                "type": "n8n-nodes-base.webhook",
                "typeVersion": 1,
                "position": [50, 100],
                "webhookId": str(uuid.uuid4()),
                "parameters": {
                    "httpMethod": "POST",
                    "path": webhook_path,
                    "responseMode": "responseNode",
                    "options": {
                        "noResponseBody": False
                    }
                }
            }

        elif trigger_type == 'http':
            return {
                "id": node_id,
                "name": "HTTP Request",
                "type": "n8n-nodes-base.httpRequest",
                "typeVersion": 4,
                "position": [50, 100],
                "parameters": {
                    "method": "GET",
                    "url": "https://httpbin.org/json",
                    "options": {}
                }
            }

        else:
            raise ValueError(f"Unsupported trigger type: {trigger_type}")

    def _has_trigger_connections(self, connections: Dict[str, Any]) -> bool:
        """Check if connections already include trigger nodes."""
        trigger_names = ['Manual Trigger', 'Webhook Trigger', 'HTTP Request']
        return any(name in connections for name in trigger_names)

    def get_workflow_triggers(self, workflow_id: str) -> List[Dict[str, Any]]:
        """
        Get all trigger nodes in a workflow.

        Args:
            workflow_id: ID of the workflow

        Returns:
            List of trigger nodes
        """
        try:
            workflow = self.client.get_workflow(workflow_id)
            nodes = workflow.get('nodes', [])

            trigger_types = [
                'n8n-nodes-base.manualTrigger',
                'n8n-nodes-base.webhook',
                'n8n-nodes-base.httpRequest',
                'n8n-nodes-base.cron',
                'n8n-nodes-base.interval'
            ]

            triggers = []
            for node in nodes:
                if node.get('type') in trigger_types:
                    trigger_info = {
                        'id': node.get('id'),
                        'name': node.get('name'),
                        'type': node.get('type'),
                        'parameters': node.get('parameters', {}),
                        'position': node.get('position', [0, 0])
                    }

                    # Add webhook-specific info
                    if node.get('type') == 'n8n-nodes-base.webhook':
                        webhook_path = node.get('parameters', {}).get('path', '')
                        webhook_url = f"{self.client.base_url.replace('/api/v1', '')}/webhook-test/{webhook_path}"
                        trigger_info['webhook_url'] = webhook_url
                        trigger_info['webhook_path'] = webhook_path

                    triggers.append(trigger_info)

            return triggers

        except Exception as e:
            logger.error(f"Failed to get workflow triggers for {workflow_id}: {e}")
            return []


def create_manual_trigger_manager():
    """Create a manual trigger manager instance."""
    return ManualTriggerManager()
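A hypothetical end-to-end sketch for the trigger manager: create a disposable webhook workflow, POST to it once, then list its triggers. Names and payloads are illustrative, and it assumes the n8n instance serves test webhooks for the newly created (inactive) workflow.

```python
manager = create_manual_trigger_manager()

created = manager.create_webhook_trigger_workflow("Smoke Test Workflow")
if created['success']:
    # Fire one test payload at the freshly created webhook
    response = manager.trigger_webhook_workflow(created['webhook_url'], data={"hello": "n8n"})
    print("Webhook responded with", response.get('status_code'))

    # Inspect what triggers the workflow now has
    wf_id = created['workflow'].get('id')
    for trigger in manager.get_workflow_triggers(wf_id):
        print(trigger['type'], trigger.get('webhook_url', ''))
```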
439
claude_n8n/tools/mock_api_server.py
Normal file
@@ -0,0 +1,439 @@
"""
Mock API Server for N8N Testing

This module provides a REST API server that serves data from text files.
N8N workflows can call this API to get consistent test data stored in files.
"""

import json
import logging
import os
import random
import threading
import time
from datetime import datetime
from pathlib import Path
from typing import Dict, List, Any, Optional, Union

from flask import Flask, jsonify, request, Response
from werkzeug.serving import make_server

logger = logging.getLogger(__name__)


class MockAPIServer:
    """REST API server that serves data from text files."""

    def __init__(self, data_dir: str = "api_data", host: str = "0.0.0.0", port: int = 5000):
        """
        Initialize the mock API server.

        Args:
            data_dir: Directory containing data files
            host: Host to bind the server to
            port: Port to run the server on
        """
        self.data_dir = Path(data_dir)
        self.data_dir.mkdir(exist_ok=True)
        self.host = host
        self.port = port
        self.app = Flask(__name__)
        self.server = None
        self.server_thread = None
        self.is_running = False

        # Setup routes
        self._setup_routes()

        # Create example data files if they don't exist
        self._create_example_data()

    def _setup_routes(self):
        """Setup API routes."""

        @self.app.route('/health', methods=['GET'])
        def health_check():
            """Health check endpoint."""
            return jsonify({
                'status': 'healthy',
                'timestamp': datetime.now().isoformat(),
                'data_dir': str(self.data_dir),
                'available_endpoints': self._get_available_endpoints()
            })

        @self.app.route('/data/<path:filename>', methods=['GET'])
        def get_data(filename):
            """Get data from a specific file."""
            return self._serve_data_file(filename)

        @self.app.route('/data', methods=['GET'])
        def list_data_files():
            """List available data files."""
            files = []
            for file_path in self.data_dir.glob('*.json'):
                files.append({
                    'name': file_path.stem,
                    'filename': file_path.name,
                    'size': file_path.stat().st_size,
                    'modified': datetime.fromtimestamp(file_path.stat().st_mtime).isoformat(),
                    'url': f"http://{self.host}:{self.port}/data/{file_path.name}"
                })

            return jsonify({
                'files': files,
                'count': len(files)
            })

        @self.app.route('/matrix', methods=['GET'])
        def get_matrix_data():
            """Get Matrix chat data (example endpoint)."""
            return self._serve_data_file('matrix_messages.json')

        @self.app.route('/random/<path:filename>', methods=['GET'])
        def get_random_data(filename):
            """Get random item from data file."""
            # Match /data/<filename> behavior: the .json extension is optional
            if not filename.endswith('.json'):
                filename += '.json'
            data = self._load_data_file(filename)
            if not data:
                return jsonify({'error': 'File not found or empty'}), 404

            if isinstance(data, list) and data:
                return jsonify(random.choice(data))
            else:
                return jsonify(data)

        @self.app.route('/paginated/<path:filename>', methods=['GET'])
        def get_paginated_data(filename):
            """Get paginated data from file."""
            page = int(request.args.get('page', 1))
            per_page = int(request.args.get('per_page', 10))

            # Match /data/<filename> behavior: the .json extension is optional
            if not filename.endswith('.json'):
                filename += '.json'
            data = self._load_data_file(filename)
            if not data:
                return jsonify({'error': 'File not found or empty'}), 404

            if isinstance(data, list):
                start_idx = (page - 1) * per_page
                end_idx = start_idx + per_page
                paginated_data = data[start_idx:end_idx]

                return jsonify({
                    'data': paginated_data,
                    'page': page,
                    'per_page': per_page,
                    'total': len(data),
                    'total_pages': (len(data) + per_page - 1) // per_page
                })
            else:
                return jsonify({
                    'data': data,
                    'page': 1,
                    'per_page': 1,
                    'total': 1,
                    'total_pages': 1
                })

        @self.app.route('/upload', methods=['POST'])
        def upload_data():
            """Upload new data to a file."""
            try:
                filename = request.args.get('filename')
                if not filename:
                    return jsonify({'error': 'filename parameter required'}), 400

                data = request.get_json()
                if not data:
                    return jsonify({'error': 'JSON data required'}), 400

                filepath = self.data_dir / f"{filename}.json"
                with open(filepath, 'w') as f:
                    json.dump(data, f, indent=2)

                return jsonify({
                    'message': f'Data uploaded successfully to {filename}.json',
                    'filename': f"{filename}.json",
                    'size': filepath.stat().st_size
                })

            except Exception as e:
                return jsonify({'error': str(e)}), 500

        @self.app.errorhandler(404)
        def not_found(error):
            return jsonify({'error': 'Endpoint not found'}), 404

        @self.app.errorhandler(500)
        def internal_error(error):
            return jsonify({'error': 'Internal server error'}), 500

    def _serve_data_file(self, filename: str) -> Response:
        """Serve data from a specific file."""
        # Add .json extension if not present
        if not filename.endswith('.json'):
            filename += '.json'

        data = self._load_data_file(filename)
        if data is None:
            return jsonify({'error': f'File {filename} not found'}), 404

        # Add some variation to timestamps if present
        varied_data = self._add_timestamp_variation(data)

        return jsonify(varied_data)

    def _load_data_file(self, filename: str) -> Optional[Union[Dict, List]]:
        """Load data from a JSON file."""
        filepath = self.data_dir / filename

        if not filepath.exists():
            logger.warning(f"Data file not found: {filepath}")
            return None

        try:
            with open(filepath, 'r', encoding='utf-8') as f:
                return json.load(f)
        except Exception as e:
            logger.error(f"Error loading data file {filepath}: {e}")
            return None

    def _add_timestamp_variation(self, data: Union[Dict, List]) -> Union[Dict, List]:
        """Add slight variations to timestamps to simulate real data."""
        if isinstance(data, dict):
            return self._vary_dict_timestamps(data)
        elif isinstance(data, list):
            return [self._vary_dict_timestamps(item) if isinstance(item, dict) else item for item in data]
        else:
            return data

    def _vary_dict_timestamps(self, data: Dict) -> Dict:
        """Add variation to timestamps in a dictionary."""
        varied_data = data.copy()

        # Common timestamp fields to vary
        timestamp_fields = ['timestamp', 'origin_server_ts', 'created_at', 'updated_at', 'time', 'date']

        for key, value in varied_data.items():
            if key in timestamp_fields and isinstance(value, (int, float)):
                # Add random variation of ±5 minutes for timestamp fields
                variation = random.randint(-300000, 300000)  # ±5 minutes in milliseconds
                varied_data[key] = value + variation
            elif key == 'age' and isinstance(value, (int, float)):
                # Add small random variation to age fields
                variation = random.randint(-1000, 1000)
                varied_data[key] = max(0, value + variation)
            elif isinstance(value, dict):
                varied_data[key] = self._vary_dict_timestamps(value)
            elif isinstance(value, list):
                varied_data[key] = [
                    self._vary_dict_timestamps(item) if isinstance(item, dict) else item
                    for item in value
                ]

        return varied_data

    def _get_available_endpoints(self) -> List[str]:
        """Get list of available API endpoints."""
        endpoints = [
            '/health',
            '/data',
            '/data/<filename>',
            '/random/<filename>',
            '/paginated/<filename>',
            '/upload',
            '/matrix'
        ]

        # Add endpoints for each data file
        for file_path in self.data_dir.glob('*.json'):
            endpoints.append(f'/data/{file_path.name}')

        return endpoints

    def _create_example_data(self):
        """Create example data files if they don't exist."""
        # Create matrix messages example
        matrix_file = self.data_dir / 'matrix_messages.json'
        if not matrix_file.exists():
            matrix_example = [
                {
                    "chunk": [
                        {
                            "type": "m.room.message",
                            "room_id": "!xZkScMybPseErYMJDz:matrix.klas.chat",
                            "sender": "@signal_37c02a2c-31a2-4937-88f2-3f6be48afcdc:matrix.klas.chat",
                            "content": {
                                "body": "Viděli jsme dopravní nehodu",
                                "m.mentions": {},
                                "msgtype": "m.text"
                            },
                            "origin_server_ts": 1749927752871,
                            "unsigned": {
                                "membership": "join",
                                "age": 668
                            },
                            "event_id": "$k2t8Uj8K9tdtXfuyvnxezL-Gijqb0Bw4rucgZ0rEOgA",
                            "user_id": "@signal_37c02a2c-31a2-4937-88f2-3f6be48afcdc:matrix.klas.chat",
                            "age": 668
                        },
                        {
                            "type": "m.room.message",
                            "room_id": "!xZkScMybPseErYMJDz:matrix.klas.chat",
                            "sender": "@signal_961d5a74-062f-4f22-88bd-e192a5e7d567:matrix.klas.chat",
                            "content": {
                                "body": "BTC fee je: 1 sat/vByte",
                                "m.mentions": {},
                                "msgtype": "m.text"
                            },
                            "origin_server_ts": 1749905152683,
                            "unsigned": {
                                "membership": "join",
                                "age": 22623802
                            },
                            "event_id": "$WYbz0dB8f16PxL9_j0seJbO1tFaSGiiWeDaj8yGEfC8",
                            "user_id": "@signal_961d5a74-062f-4f22-88bd-e192a5e7d567:matrix.klas.chat",
                            "age": 22623802
                        }
                    ],
                    "start": "t404-16991_0_0_0_0_0_0_0_0_0",
                    "end": "t395-16926_0_0_0_0_0_0_0_0_0"
                }
            ]

            with open(matrix_file, 'w', encoding='utf-8') as f:
                json.dump(matrix_example, f, indent=2, ensure_ascii=False)

            logger.info(f"Created example matrix messages file: {matrix_file}")

        # Create simple test data example
        test_file = self.data_dir / 'test_data.json'
        if not test_file.exists():
            test_data = {
                "message": "Hello from mock API",
                "timestamp": int(time.time() * 1000),
                "items": [
                    {"id": 1, "name": "Item 1"},
                    {"id": 2, "name": "Item 2"},
                    {"id": 3, "name": "Item 3"}
                ]
            }

            with open(test_file, 'w') as f:
                json.dump(test_data, f, indent=2)

            logger.info(f"Created example test data file: {test_file}")

    def start(self, debug: bool = False, threaded: bool = True) -> bool:
        """
        Start the API server.

        Args:
            debug: Enable debug mode
            threaded: Run in a separate thread

        Returns:
            True if server started successfully
        """
        try:
            if self.is_running:
                logger.warning("Server is already running")
                return False

            self.server = make_server(self.host, self.port, self.app, threaded=True)

            # Mark as running and log before serving, so non-threaded mode
            # (where serve_forever blocks) still reports startup correctly
            self.is_running = True
            logger.info(f"Mock API server started on http://{self.host}:{self.port}")
            logger.info(f"Health check: http://{self.host}:{self.port}/health")
            logger.info(f"Data files: http://{self.host}:{self.port}/data")

            if threaded:
                self.server_thread = threading.Thread(target=self.server.serve_forever, daemon=True)
                self.server_thread.start()
            else:
                self.server.serve_forever()

            return True

        except Exception as e:
            logger.error(f"Failed to start server: {e}")
            self.is_running = False
            return False

    def stop(self):
        """Stop the API server."""
        if self.server and self.is_running:
            self.server.shutdown()
            if self.server_thread:
                self.server_thread.join(timeout=5)
            self.is_running = False
            logger.info("Mock API server stopped")
        else:
            logger.warning("Server is not running")

    def add_data_file(self, filename: str, data: Union[Dict, List]) -> str:
        """
        Add a new data file.

        Args:
            filename: Name of the file (without .json extension)
            data: Data to store

        Returns:
            Path to the created file
        """
        if not filename.endswith('.json'):
            filename += '.json'

        filepath = self.data_dir / filename

        with open(filepath, 'w', encoding='utf-8') as f:
            json.dump(data, f, indent=2, ensure_ascii=False)

        logger.info(f"Added data file: {filepath}")
        return str(filepath)

    def get_server_info(self) -> Dict[str, Any]:
        """Get information about the server."""
        return {
            'host': self.host,
            'port': self.port,
            'is_running': self.is_running,
            'data_dir': str(self.data_dir),
            'base_url': f"http://{self.host}:{self.port}",
            'health_url': f"http://{self.host}:{self.port}/health",
            'data_files_count': len(list(self.data_dir.glob('*.json'))),
            'available_endpoints': self._get_available_endpoints()
        }


def create_mock_api_server(data_dir: str = "api_data", host: str = "0.0.0.0", port: int = 5000):
    """Create a mock API server instance."""
    return MockAPIServer(data_dir, host, port)


# CLI functionality for standalone usage
if __name__ == "__main__":
    import argparse

    parser = argparse.ArgumentParser(description="Mock API Server for N8N Testing")
    parser.add_argument("--host", default="0.0.0.0", help="Host to bind to")
    parser.add_argument("--port", type=int, default=5000, help="Port to bind to")
    parser.add_argument("--data-dir", default="api_data", help="Directory for data files")
    parser.add_argument("--debug", action="store_true", help="Enable debug mode")

    args = parser.parse_args()

    # Setup logging
    logging.basicConfig(
        level=logging.INFO,
        format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
    )

    # Create and start server
    server = MockAPIServer(args.data_dir, args.host, args.port)

    try:
        print(f"Starting Mock API Server on http://{args.host}:{args.port}")
        print(f"Data directory: {args.data_dir}")
        print("Press Ctrl+C to stop")

        server.start(debug=args.debug, threaded=False)

    except KeyboardInterrupt:
        print("\nShutting down server...")
        server.stop()
        print("Server stopped")
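A sketch of embedding the server in a test harness rather than running it via the CLI entry point above; the port and sample data are illustrative.

```python
server = create_mock_api_server(data_dir="api_data", port=5001)
server.add_data_file("orders", [{"id": 1, "total": 9.99}, {"id": 2, "total": 4.50}])

if server.start(threaded=True):
    info = server.get_server_info()
    print(f"Serving {info['data_files_count']} data files at {info['base_url']}")
    # ... point N8N HTTP Request nodes at <base_url>/data/orders ...
    server.stop()
```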
24
claude_n8n/tools/mock_api_server/Dockerfile
Normal file
@@ -0,0 +1,24 @@
FROM python:3.9-slim

WORKDIR /app

# Install dependencies
RUN pip install flask werkzeug

# Create api_data directory
RUN mkdir -p api_data

# Copy the mock API server script
COPY mock_api_server.py .

# Copy API data files if they exist
COPY api_data/ ./api_data/

# Expose port 5000
EXPOSE 5000

# Set environment variables
ENV PYTHONUNBUFFERED=1

# Run the server
CMD ["python", "mock_api_server.py", "--host", "0.0.0.0", "--port", "5000", "--data-dir", "api_data"]
200
claude_n8n/tools/mock_api_server/README.md
Normal file
@@ -0,0 +1,200 @@
# Mock API Server for N8N Testing

A Docker-containerized REST API server that serves test data from JSON files for N8N workflow development and testing.

## Overview

This mock API server provides a consistent, controllable data source for testing N8N workflows without relying on external APIs. It serves data from JSON files and includes features like pagination, random data selection, and file upload capabilities.

## Files Structure

```
mock_api_server/
├── README.md              # This documentation
├── Dockerfile             # Docker image definition
├── docker-compose.yml     # Docker Compose configuration
├── mock_api_server.py     # Main Python Flask server (symlinked from ../mock_api_server.py)
└── api_data/              # JSON data files directory
    ├── matrix_messages.json   # Matrix chat messages sample data
    └── test_data.json         # Simple test data
```

## Quick Start

### Using Docker Compose (Recommended)

```bash
cd /home/klas/claude_n8n/tools/mock_api_server
docker compose up -d
```

The server will be available at: `http://localhost:5002`

### Using Docker Build

```bash
cd /home/klas/claude_n8n/tools/mock_api_server
docker build -t mock-api-server .
docker run -d -p 5002:5000 -v $(pwd)/api_data:/app/api_data mock-api-server
```

### Using Python Directly

```bash
cd /home/klas/claude_n8n/tools
python mock_api_server.py --host 0.0.0.0 --port 5002 --data-dir mock_api_server/api_data
```

## API Endpoints

### Health Check
- **GET** `/health` - Server status and available endpoints

### Data Access
- **GET** `/data` - List all available data files
- **GET** `/data/<filename>` - Get data from a specific file (`.json` extension optional)
- **GET** `/random/<filename>` - Get a random item from array data
- **GET** `/paginated/<filename>?page=1&per_page=10` - Get paginated data

### Special Endpoints
- **GET** `/matrix` - Alias for `/data/matrix_messages.json`
- **POST** `/upload?filename=<name>` - Upload a new JSON data file

### Query Parameters
- `page` - Page number for pagination (default: 1)
- `per_page` - Items per page (default: 10)
- `filename` - Target filename for upload (without .json extension)

## Example Usage

### Health Check
```bash
curl http://localhost:5002/health
```

### Get Test Data
```bash
curl http://localhost:5002/data/test_data.json
# Returns: {"message": "Hello from mock API", "timestamp": 1234567890, "items": [...]}
```

### Get Random Item
```bash
curl http://localhost:5002/random/test_data
# Returns a random item from array data (or the whole object for non-arrays)
```

### Paginated Data
```bash
curl "http://localhost:5002/paginated/matrix_messages?page=1&per_page=5"
```

### Upload New Data
```bash
curl -X POST "http://localhost:5002/upload?filename=my_data" \
  -H "Content-Type: application/json" \
  -d '{"test": "value", "items": [1,2,3]}'
```

## Data Files

### Adding New Data Files

1. **Via File System:** Add `.json` files to the `api_data/` directory
2. **Via API:** Use the `/upload` endpoint to create new files
3. **Via Container:** Mount additional volumes or copy files into the running container

### Data File Format

Files should contain valid JSON. The server supports:
- **Objects:** `{"key": "value", "items": [...]}`
- **Arrays:** `[{"id": 1}, {"id": 2}]`

### Sample Data Files

#### test_data.json
```json
{
  "message": "Hello from mock API",
  "timestamp": 1234567890,
  "items": [
    {"id": 1, "name": "Item 1"},
    {"id": 2, "name": "Item 2"},
    {"id": 3, "name": "Item 3"}
  ]
}
```

#### matrix_messages.json
Contains sample Matrix chat room messages with a realistic structure for testing chat integrations.

## Configuration

### Environment Variables
- `PYTHONUNBUFFERED=1` - Enable real-time Python output in Docker

### Docker Compose Configuration
- **Host Port:** 5002
- **Container Port:** 5000
- **Volume Mount:** `./api_data:/app/api_data`
- **Restart Policy:** `unless-stopped`

### Health Check
Docker includes automatic health checking via curl against the `/health` endpoint.

## Integration with N8N

### HTTP Request Node Configuration
```
Method: GET
URL: http://host.docker.internal:5002/data/test_data
```

### Webhook Testing
Use the mock API to provide consistent test data for webhook development and testing.

### Data Processing Workflows
Test data transformation nodes with predictable input from the mock API (see the Python sketch below).
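As a quick sanity check of the paginated endpoint from Python (a minimal sketch, assuming the server is reachable on port 5002 as configured above):

```python
import requests

# Page through matrix_messages.json five items at a time
resp = requests.get(
    "http://localhost:5002/paginated/matrix_messages",
    params={"page": 1, "per_page": 5},
    timeout=10,
)
resp.raise_for_status()
page = resp.json()
print(page["total"], "items across", page["total_pages"], "pages")
```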
## Development

### Adding New Endpoints
Edit `mock_api_server.py` and add new Flask routes. The server will automatically restart in development mode.

### Debugging
Check container logs:
```bash
docker compose logs -f mock-api
```

### Stopping the Server
```bash
docker compose down
```

## Troubleshooting

### Port Already in Use
If port 5002 is occupied, edit `docker-compose.yml` and change the host port:
```yaml
ports:
  - "5003:5000"  # Change 5002 to 5003
```

### File Permissions
Ensure the `api_data` directory is writable for file uploads:
```bash
chmod 755 api_data/
```

### Container Not Starting
Check that all required files are present:
```bash
ls -la mock_api_server.py Dockerfile docker-compose.yml api_data/
```

## Related Files

- **Main Server Script:** `/home/klas/claude_n8n/tools/mock_api_server.py`
- **N8N Tools Directory:** `/home/klas/claude_n8n/tools/`
- **Original Development Files:** `/home/klas/mem0/.claude/` (can be removed after migration)
98
claude_n8n/tools/mock_api_server/api_data/matrix_messages.json
Normal file
@@ -0,0 +1,98 @@
[
  {
    "chunk": [
      {
        "type": "m.room.message",
        "room_id": "!xZkScMybPseErYMJDz:matrix.klas.chat",
        "sender": "@klas:matrix.klas.chat",
        "content": {
          "body": "The hybrid deduplication system is now working perfectly. We've successfully implemented content-based analysis that eliminates dependency on N8N workflow variables.",
          "m.mentions": {},
          "msgtype": "m.text"
        },
        "origin_server_ts": 1750017000000,
        "unsigned": {
          "membership": "join",
          "age": 1000
        },
        "event_id": "$memory_1_recent_implementation_success",
        "user_id": "@klas:matrix.klas.chat",
        "age": 1000
      },
      {
        "type": "m.room.message",
        "room_id": "!xZkScMybPseErYMJDz:matrix.klas.chat",
        "sender": "@developer:matrix.klas.chat",
        "content": {
          "body": "Key improvements include age-based filtering (30+ minutes), system message detection, and enhanced duplicate detection using content fingerprinting. The solution addresses the core issue where 10-message chunks were being reprocessed.",
          "m.mentions": {},
          "msgtype": "m.text"
        },
        "origin_server_ts": 1750017060000,
        "unsigned": {
          "membership": "join",
          "age": 2000
        },
        "event_id": "$memory_2_technical_details",
        "user_id": "@developer:matrix.klas.chat",
        "age": 2000
      },
      {
        "type": "m.room.message",
        "room_id": "!xZkScMybPseErYMJDz:matrix.klas.chat",
        "sender": "@ai_assistant:matrix.klas.chat",
        "content": {
          "body": "Memory retention has been significantly improved. The false duplicate detection that was causing 0.2-minute memory lifespans has been resolved through sophisticated content analysis and multiple validation layers.",
          "m.mentions": {},
          "msgtype": "m.text"
        },
        "origin_server_ts": 1750017120000,
        "unsigned": {
          "membership": "join",
          "age": 3000
        },
        "event_id": "$memory_3_retention_improvement",
        "user_id": "@ai_assistant:matrix.klas.chat",
        "age": 3000
      },
      {
        "type": "m.room.message",
        "room_id": "!xZkScMybPseErYMJDz:matrix.klas.chat",
        "sender": "@system_monitor:matrix.klas.chat",
        "content": {
          "body": "Test results: 2/2 scenarios passed. Valid recent messages are processed correctly, while old messages (1106+ minutes) are properly filtered. The enhanced deduplication is fully operational with robust duplicate detection.",
          "m.mentions": {},
          "msgtype": "m.text"
        },
        "origin_server_ts": 1750017180000,
        "unsigned": {
          "membership": "join",
          "age": 4000
        },
        "event_id": "$memory_4_test_results",
        "user_id": "@system_monitor:matrix.klas.chat",
        "age": 4000
      },
      {
        "type": "m.room.message",
        "room_id": "!xZkScMybPseErYMJDz:matrix.klas.chat",
        "sender": "@project_lead:matrix.klas.chat",
        "content": {
          "body": "Next phase: Monitor memory creation and consolidation patterns. The hybrid solution combines deterministic deduplication with AI-driven memory management for optimal performance and accuracy.",
          "m.mentions": {},
          "msgtype": "m.text"
        },
        "origin_server_ts": 1750017240000,
        "unsigned": {
          "membership": "join",
          "age": 5000
        },
        "event_id": "$memory_5_next_phase",
        "user_id": "@project_lead:matrix.klas.chat",
        "age": 5000
      }
    ],
    "start": "t500-17000_0_0_0_0_0_0_0_0_0",
    "end": "t505-17005_0_0_0_0_0_0_0_0_0"
  }
]
18
claude_n8n/tools/mock_api_server/api_data/test_data.json
Normal file
@@ -0,0 +1,18 @@
{
  "message": "Hello from mock API",
  "timestamp": 1749928362092,
  "items": [
    {
      "id": 1,
      "name": "Item 1"
    },
    {
      "id": 2,
      "name": "Item 2"
    },
    {
      "id": 3,
      "name": "Item 3"
    }
  ]
}
21
claude_n8n/tools/mock_api_server/docker-compose.yml
Normal file
@@ -0,0 +1,21 @@
version: '3.8'

services:
  mock-api:
    build: .
    ports:
      - "5002:5000"
    volumes:
      - ./api_data:/app/api_data
    environment:
      - PYTHONUNBUFFERED=1
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

volumes:
  api_data:
439
claude_n8n/tools/mock_api_server/mock_api_server.py
Normal file
@@ -0,0 +1,439 @@
"""
Mock API Server for N8N Testing

This module provides a REST API server that serves data from text files.
N8N workflows can call this API to get consistent test data stored in files.
"""

import json
import logging
import os
import random
import threading
import time
from datetime import datetime
from pathlib import Path
from typing import Dict, List, Any, Optional, Union

from flask import Flask, jsonify, request, Response
from werkzeug.serving import make_server

logger = logging.getLogger(__name__)


class MockAPIServer:
    """REST API server that serves data from text files."""

    def __init__(self, data_dir: str = "api_data", host: str = "0.0.0.0", port: int = 5000):
        """
        Initialize the mock API server.

        Args:
            data_dir: Directory containing data files
            host: Host to bind the server to
            port: Port to run the server on
        """
        self.data_dir = Path(data_dir)
        self.data_dir.mkdir(exist_ok=True)
        self.host = host
        self.port = port
        self.app = Flask(__name__)
        self.server = None
        self.server_thread = None
        self.is_running = False

        # Setup routes
        self._setup_routes()

        # Create example data files if they don't exist
        self._create_example_data()

    def _setup_routes(self):
        """Setup API routes."""

        @self.app.route('/health', methods=['GET'])
        def health_check():
            """Health check endpoint."""
            return jsonify({
                'status': 'healthy',
                'timestamp': datetime.now().isoformat(),
                'data_dir': str(self.data_dir),
                'available_endpoints': self._get_available_endpoints()
            })

        @self.app.route('/data/<path:filename>', methods=['GET'])
        def get_data(filename):
            """Get data from a specific file."""
            return self._serve_data_file(filename)

        @self.app.route('/data', methods=['GET'])
        def list_data_files():
            """List available data files."""
            files = []
            for file_path in self.data_dir.glob('*.json'):
                files.append({
                    'name': file_path.stem,
                    'filename': file_path.name,
                    'size': file_path.stat().st_size,
                    'modified': datetime.fromtimestamp(file_path.stat().st_mtime).isoformat(),
                    'url': f"http://{self.host}:{self.port}/data/{file_path.name}"
                })

            return jsonify({
                'files': files,
                'count': len(files)
            })

        @self.app.route('/matrix', methods=['GET'])
        def get_matrix_data():
            """Get Matrix chat data (example endpoint)."""
            return self._serve_data_file('matrix_messages.json')

        @self.app.route('/random/<path:filename>', methods=['GET'])
        def get_random_data(filename):
            """Get a random item from a data file."""
            data = self._load_data_file(filename)
            if not data:
                return jsonify({'error': 'File not found or empty'}), 404

            if isinstance(data, list) and data:
                return jsonify(random.choice(data))
            else:
                return jsonify(data)

        @self.app.route('/paginated/<path:filename>', methods=['GET'])
        def get_paginated_data(filename):
            """Get paginated data from a file."""
            page = int(request.args.get('page', 1))
            per_page = int(request.args.get('per_page', 10))

            data = self._load_data_file(filename)
            if not data:
                return jsonify({'error': 'File not found or empty'}), 404

            if isinstance(data, list):
                start_idx = (page - 1) * per_page
                end_idx = start_idx + per_page
                paginated_data = data[start_idx:end_idx]

                return jsonify({
                    'data': paginated_data,
                    'page': page,
                    'per_page': per_page,
                    'total': len(data),
                    'total_pages': (len(data) + per_page - 1) // per_page
                })
            else:
                return jsonify({
                    'data': data,
                    'page': 1,
                    'per_page': 1,
                    'total': 1,
                    'total_pages': 1
                })

        @self.app.route('/upload', methods=['POST'])
        def upload_data():
            """Upload new data to a file."""
            try:
                filename = request.args.get('filename')
                if not filename:
                    return jsonify({'error': 'filename parameter required'}), 400

                data = request.get_json()
                if not data:
                    return jsonify({'error': 'JSON data required'}), 400

                filepath = self.data_dir / f"{filename}.json"
                with open(filepath, 'w') as f:
                    json.dump(data, f, indent=2)

                return jsonify({
                    'message': f'Data uploaded successfully to {filename}.json',
                    'filename': f"{filename}.json",
                    'size': filepath.stat().st_size
                })

            except Exception as e:
                return jsonify({'error': str(e)}), 500

        @self.app.errorhandler(404)
        def not_found(error):
            return jsonify({'error': 'Endpoint not found'}), 404

        @self.app.errorhandler(500)
        def internal_error(error):
            return jsonify({'error': 'Internal server error'}), 500

    def _serve_data_file(self, filename: str) -> Response:
        """Serve data from a specific file."""
        # Add .json extension if not present
        if not filename.endswith('.json'):
            filename += '.json'

        data = self._load_data_file(filename)
        if data is None:
            return jsonify({'error': f'File {filename} not found'}), 404

        # Add some variation to timestamps if present
        varied_data = self._add_timestamp_variation(data)

        return jsonify(varied_data)

    def _load_data_file(self, filename: str) -> Optional[Union[Dict, List]]:
        """Load data from a JSON file."""
        filepath = self.data_dir / filename

        if not filepath.exists():
            logger.warning(f"Data file not found: {filepath}")
            return None

        try:
            with open(filepath, 'r', encoding='utf-8') as f:
                return json.load(f)
        except Exception as e:
            logger.error(f"Error loading data file {filepath}: {e}")
            return None

    def _add_timestamp_variation(self, data: Union[Dict, List]) -> Union[Dict, List]:
        """Add slight variations to timestamps to simulate real data."""
        if isinstance(data, dict):
            return self._vary_dict_timestamps(data)
        elif isinstance(data, list):
            return [self._vary_dict_timestamps(item) if isinstance(item, dict) else item for item in data]
        else:
            return data

    def _vary_dict_timestamps(self, data: Dict) -> Dict:
        """Add variation to timestamps in a dictionary."""
        varied_data = data.copy()

        # Common timestamp fields to vary
        timestamp_fields = ['timestamp', 'origin_server_ts', 'created_at', 'updated_at', 'time', 'date']

        for key, value in varied_data.items():
            if key in timestamp_fields and isinstance(value, (int, float)):
                # Add random variation of ±5 minutes for timestamp fields
                variation = random.randint(-300000, 300000)  # ±5 minutes in milliseconds
                varied_data[key] = value + variation
            elif key == 'age' and isinstance(value, (int, float)):
                # Add small random variation to age fields
                variation = random.randint(-1000, 1000)
                varied_data[key] = max(0, value + variation)
            elif isinstance(value, dict):
                varied_data[key] = self._vary_dict_timestamps(value)
            elif isinstance(value, list):
                varied_data[key] = [
                    self._vary_dict_timestamps(item) if isinstance(item, dict) else item
                    for item in value
                ]

        return varied_data

    def _get_available_endpoints(self) -> List[str]:
        """Get list of available API endpoints."""
        endpoints = [
            '/health',
            '/data',
            '/data/<filename>',
            '/random/<filename>',
            '/paginated/<filename>',
            '/upload',
            '/matrix'
        ]

        # Add endpoints for each data file
        for file_path in self.data_dir.glob('*.json'):
            endpoints.append(f'/data/{file_path.name}')

        return endpoints

    def _create_example_data(self):
        """Create example data files if they don't exist."""
        # Create matrix messages example
        matrix_file = self.data_dir / 'matrix_messages.json'
        if not matrix_file.exists():
            matrix_example = [
                {
                    "chunk": [
                        {
                            "type": "m.room.message",
                            "room_id": "!xZkScMybPseErYMJDz:matrix.klas.chat",
                            "sender": "@signal_37c02a2c-31a2-4937-88f2-3f6be48afcdc:matrix.klas.chat",
                            "content": {
                                "body": "Viděli jsme dopravní nehodu",
                                "m.mentions": {},
                                "msgtype": "m.text"
                            },
                            "origin_server_ts": 1749927752871,
                            "unsigned": {
                                "membership": "join",
                                "age": 668
                            },
                            "event_id": "$k2t8Uj8K9tdtXfuyvnxezL-Gijqb0Bw4rucgZ0rEOgA",
                            "user_id": "@signal_37c02a2c-31a2-4937-88f2-3f6be48afcdc:matrix.klas.chat",
                            "age": 668
                        },
                        {
                            "type": "m.room.message",
                            "room_id": "!xZkScMybPseErYMJDz:matrix.klas.chat",
                            "sender": "@signal_961d5a74-062f-4f22-88bd-e192a5e7d567:matrix.klas.chat",
                            "content": {
                                "body": "BTC fee je: 1 sat/vByte",
                                "m.mentions": {},
                                "msgtype": "m.text"
                            },
                            "origin_server_ts": 1749905152683,
                            "unsigned": {
                                "membership": "join",
                                "age": 22623802
                            },
                            "event_id": "$WYbz0dB8f16PxL9_j0seJbO1tFaSGiiWeDaj8yGEfC8",
                            "user_id": "@signal_961d5a74-062f-4f22-88bd-e192a5e7d567:matrix.klas.chat",
                            "age": 22623802
                        }
                    ],
                    "start": "t404-16991_0_0_0_0_0_0_0_0_0",
                    "end": "t395-16926_0_0_0_0_0_0_0_0_0"
                }
            ]

            with open(matrix_file, 'w', encoding='utf-8') as f:
                json.dump(matrix_example, f, indent=2, ensure_ascii=False)

            logger.info(f"Created example matrix messages file: {matrix_file}")

        # Create simple test data example
        test_file = self.data_dir / 'test_data.json'
        if not test_file.exists():
            test_data = {
                "message": "Hello from mock API",
                "timestamp": int(time.time() * 1000),
                "items": [
                    {"id": 1, "name": "Item 1"},
                    {"id": 2, "name": "Item 2"},
                    {"id": 3, "name": "Item 3"}
                ]
            }

            with open(test_file, 'w') as f:
                json.dump(test_data, f, indent=2)

            logger.info(f"Created example test data file: {test_file}")

    def start(self, debug: bool = False, threaded: bool = True) -> bool:
        """
        Start the API server.

        Args:
            debug: Enable debug mode (accepted for CLI symmetry; not used by make_server)
            threaded: Run in a separate thread

        Returns:
            True if server started successfully
        """
        try:
            if self.is_running:
                logger.warning("Server is already running")
                return False

            self.server = make_server(self.host, self.port, self.app, threaded=True)

            # Mark running and log before entering the serve loop, since
            # serve_forever() blocks when threaded=False.
            self.is_running = True
            logger.info(f"Mock API server started on http://{self.host}:{self.port}")
            logger.info(f"Health check: http://{self.host}:{self.port}/health")
            logger.info(f"Data files: http://{self.host}:{self.port}/data")

            if threaded:
                self.server_thread = threading.Thread(target=self.server.serve_forever, daemon=True)
                self.server_thread.start()
            else:
                self.server.serve_forever()

            return True

        except Exception as e:
            logger.error(f"Failed to start server: {e}")
            return False

    def stop(self):
        """Stop the API server."""
        if self.server and self.is_running:
            self.server.shutdown()
            if self.server_thread:
                self.server_thread.join(timeout=5)
            self.is_running = False
            logger.info("Mock API server stopped")
        else:
            logger.warning("Server is not running")

    def add_data_file(self, filename: str, data: Union[Dict, List]) -> str:
        """
        Add a new data file.

        Args:
            filename: Name of the file (without .json extension)
            data: Data to store

        Returns:
            Path to the created file
        """
        if not filename.endswith('.json'):
            filename += '.json'

        filepath = self.data_dir / filename

        with open(filepath, 'w', encoding='utf-8') as f:
            json.dump(data, f, indent=2, ensure_ascii=False)

        logger.info(f"Added data file: {filepath}")
        return str(filepath)

    def get_server_info(self) -> Dict[str, Any]:
        """Get information about the server."""
        return {
            'host': self.host,
            'port': self.port,
            'is_running': self.is_running,
            'data_dir': str(self.data_dir),
            'base_url': f"http://{self.host}:{self.port}",
            'health_url': f"http://{self.host}:{self.port}/health",
            'data_files_count': len(list(self.data_dir.glob('*.json'))),
            'available_endpoints': self._get_available_endpoints()
        }


def create_mock_api_server(data_dir: str = "api_data", host: str = "0.0.0.0", port: int = 5000):
    """Create a mock API server instance."""
    return MockAPIServer(data_dir, host, port)


# CLI functionality for standalone usage
if __name__ == "__main__":
    import argparse

    parser = argparse.ArgumentParser(description="Mock API Server for N8N Testing")
    parser.add_argument("--host", default="0.0.0.0", help="Host to bind to")
    parser.add_argument("--port", type=int, default=5000, help="Port to bind to")
    parser.add_argument("--data-dir", default="api_data", help="Directory for data files")
    parser.add_argument("--debug", action="store_true", help="Enable debug mode")

    args = parser.parse_args()

    # Setup logging
    logging.basicConfig(
        level=logging.INFO,
        format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
    )

    # Create and start server
    server = MockAPIServer(args.data_dir, args.host, args.port)

    try:
        print(f"Starting Mock API Server on http://{args.host}:{args.port}")
        print(f"Data directory: {args.data_dir}")
        print("Press Ctrl+C to stop")

        server.start(debug=args.debug, threaded=False)

    except KeyboardInterrupt:
        print("\nShutting down server...")
        server.stop()
        print("Server stopped")
430
claude_n8n/tools/n8n_assistant.py
Executable file
@@ -0,0 +1,430 @@
#!/usr/bin/env python3
"""
N8N Assistant - Main orchestration script that provides a complete interface
for N8N workflow development, testing, and improvement
"""

import os
import sys
import json
import argparse
from typing import Dict, List, Optional, Any
from datetime import datetime

# Add tools directory to path
sys.path.append(os.path.dirname(os.path.abspath(__file__)))

from n8n_client import N8NClient
from workflow_analyzer import WorkflowAnalyzer
from execution_monitor import ExecutionMonitor, create_simple_monitor
from workflow_improver import WorkflowImprover, TestCase


class N8NAssistant:
    """Main assistant class that orchestrates all N8N workflow operations"""

    def __init__(self, config_path: str = "n8n_api_credentials.json"):
        """Initialize N8N Assistant with all tools"""
        print("🚀 Initializing N8N Assistant...")

        try:
            self.client = N8NClient(config_path)
            self.analyzer = WorkflowAnalyzer()
            self.monitor = create_simple_monitor(self.client)
            self.improver = WorkflowImprover(self.client, self.analyzer, self.monitor)

            print("✅ N8N Assistant initialized successfully!")
        except Exception as e:
            print(f"❌ Failed to initialize N8N Assistant: {e}")
            sys.exit(1)

    def list_workflows(self) -> List[Dict]:
        """List all workflows with basic information"""
        try:
            workflows = self.client.list_workflows()

            print(f"\n📋 Found {len(workflows)} workflows:")
            print("-" * 80)
            print(f"{'ID':<20} {'Name':<30} {'Active':<8} {'Created'}")
            print("-" * 80)

            for workflow in workflows:
                workflow_id = workflow.get('id', 'N/A')[:18]
                name = workflow.get('name', 'Unnamed')[:28]
                active = "Yes" if workflow.get('active') else "No"
                created = workflow.get('createdAt', 'N/A')[:10]

                print(f"{workflow_id:<20} {name:<30} {active:<8} {created}")

            return workflows

        except Exception as e:
            print(f"❌ Error listing workflows: {e}")
            return []

    def analyze_workflow(self, workflow_id: str, include_executions: bool = True) -> Dict:
        """Perform comprehensive workflow analysis"""
        try:
            print(f"🔍 Analyzing workflow {workflow_id}...")

            # Get workflow details
            workflow = self.client.get_workflow(workflow_id)
            print(f"📊 Workflow: {workflow.get('name', 'Unnamed')}")

            # Get recent executions if requested
            executions = []
            if include_executions:
                executions = self.client.get_executions(workflow_id, limit=20)
                print(f"📈 Analyzing {len(executions)} recent executions")

            # Generate comprehensive health report
            health_report = self.analyzer.generate_health_report(workflow, executions)

            # Display results
            self._display_analysis_results(health_report)

            return {
                'workflow': workflow,
                'executions': executions,
                'health_report': health_report
            }

        except Exception as e:
            print(f"❌ Error analyzing workflow: {e}")
            return {}

    def test_workflow(self, workflow_id: str, test_data: Optional[Dict] = None,
                      create_test_suite: bool = False) -> Dict:
        """Test workflow with provided data or generated test suite"""
        try:
            print(f"🧪 Testing workflow {workflow_id}...")

            if create_test_suite:
                # Create comprehensive test suite
                workflow = self.client.get_workflow(workflow_id)
                test_cases = self.improver.create_test_suite(workflow, [test_data] if test_data else [])

                print(f"📝 Created {len(test_cases)} test cases")
                test_results = self.improver.run_test_suite(workflow_id, test_cases)
            else:
                # Single test execution
                print("🚀 Executing workflow with test data...")
                execution_event = self.monitor.execute_and_monitor(workflow_id, test_data)

                test_results = [{
                    'test_name': 'single_execution',
                    'status': execution_event.status.value,
                    'duration': execution_event.duration,
                    'success': execution_event.status.value == 'success',
                    'execution_id': execution_event.execution_id,
                    'error_message': execution_event.error_message
                }]

            # Display test results
            self._display_test_results(test_results)

            return {
                'test_results': test_results,
                # Guard against an empty test suite to avoid division by zero
                'success_rate': (len([r for r in test_results if r.get('passed', r.get('success'))]) / len(test_results) * 100) if test_results else 0
            }

        except Exception as e:
            print(f"❌ Error testing workflow: {e}")
            return {'test_results': [], 'success_rate': 0}

    def improve_workflow(self, workflow_id: str, max_iterations: int = 3) -> Dict:
        """Perform iterative workflow improvement"""
        try:
            print(f"🔧 Starting iterative improvement for workflow {workflow_id}...")
            print(f"📊 Maximum iterations: {max_iterations}")

            # Get workflow and create test suite
            workflow = self.client.get_workflow(workflow_id)
            test_cases = self.improver.create_test_suite(workflow)

            # Perform iterative improvement
            improvement_results = self.improver.iterative_improvement(
                workflow_id, test_cases, max_iterations
            )

            # Display improvement results
            self._display_improvement_results(improvement_results)

            return {
                'improvement_results': improvement_results,
                'total_iterations': len(improvement_results),
                'final_success': improvement_results[-1].success if improvement_results else False
            }

        except Exception as e:
            print(f"❌ Error improving workflow: {e}")
            return {'improvement_results': [], 'total_iterations': 0, 'final_success': False}

    def monitor_workflow(self, workflow_id: str, duration_minutes: int = 60):
        """Monitor workflow executions for specified duration"""
        try:
            print(f"👁️ Starting monitoring for workflow {workflow_id}")
            print(f"⏱️ Duration: {duration_minutes} minutes")
            print("📊 Monitoring started (Ctrl+C to stop)...")

            # Start monitoring
            self.monitor.start_monitoring([workflow_id])

            # Keep monitoring for specified duration
            import time
            time.sleep(duration_minutes * 60)

            # Stop monitoring
            self.monitor.stop_monitoring()

            # Get execution summary
            summary = self.monitor.get_execution_summary(hours=duration_minutes / 60)
            self._display_execution_summary(summary)

        except KeyboardInterrupt:
            print("\n⏹️ Monitoring stopped by user")
            self.monitor.stop_monitoring()
        except Exception as e:
            print(f"❌ Error monitoring workflow: {e}")

    def get_workflow_health(self, workflow_id: str) -> Dict:
        """Get comprehensive workflow health information"""
        try:
            print(f"🏥 Getting health information for workflow {workflow_id}...")

            # Get workflow health statistics
            health_stats = self.client.get_workflow_health(workflow_id)

            # Get recent executions for detailed analysis
            executions = self.client.get_executions(workflow_id, limit=10)

            # Analyze error patterns if there are failures
            error_patterns = []
            if health_stats['error_count'] > 0:
                error_patterns = self.analyzer.find_error_patterns(executions)

            print(f"📊 Health Statistics:")
            print(f"   Total Executions (7 days): {health_stats['total_executions']}")
            print(f"   Success Rate: {health_stats['success_rate']:.1f}%")
            print(f"   Error Count: {health_stats['error_count']}")

            if error_patterns:
                print(f"\n🚨 Error Patterns Found:")
                for pattern in error_patterns[:3]:  # Show top 3 patterns
                    print(f"   • {pattern['pattern']}: {pattern['frequency']} occurrences")

            return {
                'health_stats': health_stats,
                'error_patterns': error_patterns,
                'recent_executions': executions
            }

        except Exception as e:
            print(f"❌ Error getting workflow health: {e}")
            return {}

    def debug_execution(self, execution_id: str) -> Dict:
        """Debug a specific workflow execution"""
        try:
            print(f"🔍 Debugging execution {execution_id}...")

            # Get execution details
            execution = self.client.get_execution(execution_id)

            # Analyze execution logs
            analysis = self.analyzer.analyze_execution_logs(execution)

            # Get detailed logs
            logs = self.monitor.get_execution_logs(execution_id)

            # Display debug information
            print(f"🚀 Execution Status: {analysis['status']}")
            print(f"⏱️ Duration: {analysis['total_duration']:.2f}s")

            if analysis['errors']:
                print(f"\n❌ Errors Found:")
                for error in analysis['errors']:
                    print(f"   • {error.get('message', 'Unknown error')}")

            if analysis['performance_issues']:
                print(f"\n⚠️ Performance Issues:")
                for issue in analysis['performance_issues']:
                    print(f"   • {issue.get('description', 'Unknown issue')}")

            return {
                'execution': execution,
                'analysis': analysis,
                'logs': logs
            }

        except Exception as e:
            print(f"❌ Error debugging execution: {e}")
            return {}

    def _display_analysis_results(self, health_report):
        """Display workflow analysis results"""
        print(f"\n📊 Analysis Results:")
        print(f"   Health Score: {health_report.health_score:.1f}/100")
        print(f"   Issues Found: {len(health_report.issues)}")
        print(f"   Suggestions: {len(health_report.suggestions)}")

        if health_report.issues:
            print(f"\n🚨 Issues Found:")
            for issue in health_report.issues[:5]:  # Show top 5 issues
                severity = issue.get('severity', 'unknown').upper()
                description = issue.get('description', 'No description')
                print(f"   [{severity}] {description}")

        if health_report.suggestions:
            print(f"\n💡 Suggestions:")
            for suggestion in health_report.suggestions[:5]:  # Show top 5 suggestions
                print(f"   • {suggestion}")

        if health_report.error_patterns:
            print(f"\n🔍 Error Patterns:")
            for pattern in health_report.error_patterns[:3]:  # Show top 3 patterns
                print(f"   • {pattern['pattern']}: {pattern['frequency']} occurrences")

    def _display_test_results(self, test_results):
        """Display test execution results"""
        passed = len([r for r in test_results if r.get('passed', r.get('success'))])
        total = len(test_results)

        # Guard against an empty result set to avoid division by zero
        if total == 0:
            print("\n🧪 Test Results: no tests were executed")
            return

        print(f"\n🧪 Test Results: {passed}/{total} passed ({passed/total*100:.1f}%)")

        for result in test_results:
            test_name = result.get('test_name', 'Unknown')
            status = "✅ PASS" if result.get('passed', result.get('success')) else "❌ FAIL"
            duration = result.get('duration') or result.get('execution_time')

            if duration:
                print(f"   {status} {test_name} ({duration:.2f}s)")
            else:
                print(f"   {status} {test_name}")

            if not result.get('passed', result.get('success')) and result.get('error_message'):
                print(f"      Error: {result['error_message']}")

    def _display_improvement_results(self, improvement_results):
        """Display workflow improvement results"""
        if not improvement_results:
            print("🔧 No improvements were made")
            return

        print(f"\n🔧 Improvement Results ({len(improvement_results)} iterations):")

        for result in improvement_results:
            status = "✅ SUCCESS" if result.success else "❌ FAILED"
            print(f"   Iteration {result.iteration}: {status}")

            if result.improvements_made:
                for improvement in result.improvements_made:
                    print(f"      • {improvement}")

            if result.performance_metrics:
                metrics = result.performance_metrics
                if metrics.get('success_rate_improvement', 0) > 0:
                    print(f"      📈 Success rate improved by {metrics['success_rate_improvement']*100:.1f}%")

    def _display_execution_summary(self, summary):
        """Display execution monitoring summary"""
        print(f"\n📊 Execution Summary ({summary['time_period_hours']} hours):")
        print(f"   Total Executions: {summary['total_executions']}")
        print(f"   Success Rate: {summary['success_rate']:.1f}%")
        print(f"   Average Duration: {summary['average_duration_seconds']:.2f}s")

        if summary['workflow_statistics']:
            print(f"\n📈 Workflow Statistics:")
            for workflow_id, stats in summary['workflow_statistics'].items():
                success_rate = (stats['success'] / stats['total'] * 100) if stats['total'] > 0 else 0
                print(f"   {workflow_id[:8]}...: {stats['total']} executions, {success_rate:.1f}% success")


def main():
    """Main CLI interface"""
    parser = argparse.ArgumentParser(description="N8N Workflow Assistant")
    parser.add_argument("--config", default="n8n_api_credentials.json",
                        help="Path to N8N API configuration file")

    subparsers = parser.add_subparsers(dest="command", help="Available commands")

    # List workflows command
    subparsers.add_parser("list", help="List all workflows")

    # Analyze workflow command
    analyze_parser = subparsers.add_parser("analyze", help="Analyze workflow")
    analyze_parser.add_argument("workflow_id", help="Workflow ID to analyze")
    analyze_parser.add_argument("--no-executions", action="store_true",
                                help="Skip execution analysis")

    # Test workflow command
    test_parser = subparsers.add_parser("test", help="Test workflow")
    test_parser.add_argument("workflow_id", help="Workflow ID to test")
    test_parser.add_argument("--data", help="JSON test data file")
    test_parser.add_argument("--suite", action="store_true",
                             help="Create and run comprehensive test suite")

    # Improve workflow command
    improve_parser = subparsers.add_parser("improve", help="Improve workflow")
    improve_parser.add_argument("workflow_id", help="Workflow ID to improve")
    improve_parser.add_argument("--iterations", type=int, default=3,
                                help="Maximum improvement iterations")

    # Monitor workflow command
    monitor_parser = subparsers.add_parser("monitor", help="Monitor workflow")
    monitor_parser.add_argument("workflow_id", help="Workflow ID to monitor")
    monitor_parser.add_argument("--duration", type=int, default=60,
                                help="Monitoring duration in minutes")

    # Health check command
    health_parser = subparsers.add_parser("health", help="Check workflow health")
    health_parser.add_argument("workflow_id", help="Workflow ID to check")

    # Debug execution command
    debug_parser = subparsers.add_parser("debug", help="Debug execution")
    debug_parser.add_argument("execution_id", help="Execution ID to debug")

    args = parser.parse_args()

    if not args.command:
        parser.print_help()
        return

    # Initialize assistant
    assistant = N8NAssistant(args.config)

    # Execute command
    try:
        if args.command == "list":
            assistant.list_workflows()

        elif args.command == "analyze":
            assistant.analyze_workflow(args.workflow_id, not args.no_executions)

        elif args.command == "test":
            test_data = None
            if args.data:
                with open(args.data, 'r') as f:
                    test_data = json.load(f)
            assistant.test_workflow(args.workflow_id, test_data, args.suite)

        elif args.command == "improve":
            assistant.improve_workflow(args.workflow_id, args.iterations)

        elif args.command == "monitor":
            assistant.monitor_workflow(args.workflow_id, args.duration)

        elif args.command == "health":
            assistant.get_workflow_health(args.workflow_id)

        elif args.command == "debug":
            assistant.debug_execution(args.execution_id)

    except KeyboardInterrupt:
        print("\n👋 Operation cancelled by user")
    except Exception as e:
        print(f"❌ Error executing command: {e}")


if __name__ == "__main__":
    main()
188
claude_n8n/tools/n8n_client.py
Normal file
@@ -0,0 +1,188 @@
#!/usr/bin/env python3
"""
N8N API Client - Core utility for interacting with N8N workflows
Provides comprehensive workflow management capabilities for Claude Code CLI
"""

import json
import requests
import time
from typing import Dict, List, Optional, Any
from dataclasses import dataclass
from datetime import datetime


@dataclass
class N8NConfig:
    """Configuration for N8N API connection"""
    api_url: str
    api_key: str
    headers: Dict[str, str]


class N8NClient:
    """Main client for N8N API operations"""

    def __init__(self, config_path: str = "n8n_api_credentials.json"):
        """Initialize N8N client with configuration"""
        self.config = self._load_config(config_path)
        self.session = requests.Session()
        self.session.headers.update(self.config.headers)

    def _load_config(self, config_path: str) -> N8NConfig:
        """Load N8N configuration from JSON file"""
        try:
            with open(config_path, 'r') as f:
                config_data = json.load(f)
            return N8NConfig(
                api_url=config_data['api_url'],
                api_key=config_data['api_key'],
                headers=config_data['headers']
            )
        except Exception as e:
            raise Exception(f"Failed to load N8N configuration: {e}")

    def _make_request(self, method: str, endpoint: str, data: Optional[Dict] = None) -> Dict:
        """Make authenticated request to N8N API"""
        url = f"{self.config.api_url.rstrip('/')}/{endpoint.lstrip('/')}"

        try:
            if method.upper() == 'GET':
                response = self.session.get(url, params=data)
            elif method.upper() == 'POST':
                response = self.session.post(url, json=data)
            elif method.upper() == 'PUT':
                response = self.session.put(url, json=data)
            elif method.upper() == 'DELETE':
                response = self.session.delete(url)
            else:
                raise ValueError(f"Unsupported HTTP method: {method}")

            response.raise_for_status()
            return response.json() if response.content else {}

        except requests.exceptions.RequestException as e:
            raise Exception(f"N8N API request failed: {e}")

    # Workflow Management Methods

    def list_workflows(self) -> List[Dict]:
        """Get list of all workflows"""
        response = self._make_request('GET', '/workflows')
        # N8N API returns workflows in a 'data' property
        return response.get('data', response) if isinstance(response, dict) else response

    def get_workflow(self, workflow_id: str) -> Dict:
        """Get specific workflow by ID"""
        return self._make_request('GET', f'/workflows/{workflow_id}')

    def create_workflow(self, workflow_data: Dict) -> Dict:
        """Create new workflow"""
        return self._make_request('POST', '/workflows', workflow_data)

    def update_workflow(self, workflow_id: str, workflow_data: Dict) -> Dict:
        """Update existing workflow"""
        # Clean payload to only include API-allowed fields
        clean_payload = {
            'name': workflow_data['name'],
            'nodes': workflow_data['nodes'],
            'connections': workflow_data['connections'],
            'settings': workflow_data.get('settings', {}),
            'staticData': workflow_data.get('staticData', {})
        }
        return self._make_request('PUT', f'/workflows/{workflow_id}', clean_payload)

    def delete_workflow(self, workflow_id: str) -> Dict:
        """Delete workflow"""
        return self._make_request('DELETE', f'/workflows/{workflow_id}')

    def activate_workflow(self, workflow_id: str) -> Dict:
        """Activate workflow"""
        return self._make_request('POST', f'/workflows/{workflow_id}/activate')

    def deactivate_workflow(self, workflow_id: str) -> Dict:
        """Deactivate workflow"""
        return self._make_request('POST', f'/workflows/{workflow_id}/deactivate')

    # Execution Methods

    def execute_workflow(self, workflow_id: str, test_data: Optional[Dict] = None) -> Dict:
        """Execute workflow with optional test data"""
        payload = {"data": test_data} if test_data else {}
        return self._make_request('POST', f'/workflows/{workflow_id}/execute', payload)

    def get_executions(self, workflow_id: Optional[str] = None, limit: int = 20) -> List[Dict]:
        """Get workflow executions"""
        params = {"limit": limit}
        if workflow_id:
            params["workflowId"] = workflow_id
        response = self._make_request('GET', '/executions', params)
        # The executions endpoint wraps results in a 'data' property as well
        return response.get('data', response) if isinstance(response, dict) else response

    def get_execution(self, execution_id: str, include_data: bool = True) -> Dict:
        """Get specific execution details with full data"""
        params = {'includeData': 'true'} if include_data else {}
        return self._make_request('GET', f'/executions/{execution_id}', params)

    def delete_execution(self, execution_id: str) -> Dict:
        """Delete execution"""
        return self._make_request('DELETE', f'/executions/{execution_id}')

    # Utility Methods

    def find_workflow_by_name(self, name: str) -> Optional[Dict]:
        """Find workflow by name"""
        workflows = self.list_workflows()
        for workflow in workflows:
            if workflow.get('name') == name:
                return workflow
        return None

    def wait_for_execution(self, execution_id: str, timeout: int = 300, poll_interval: int = 5) -> Dict:
        """Wait for execution to complete"""
        start_time = time.time()

        while time.time() - start_time < timeout:
            execution = self.get_execution(execution_id)
            status = execution.get('status')

            if status in ['success', 'error', 'cancelled']:
                return execution

            time.sleep(poll_interval)

        raise TimeoutError(f"Execution {execution_id} did not complete within {timeout} seconds")

    def get_workflow_health(self, workflow_id: str, days: int = 7) -> Dict:
        """Get workflow health statistics"""
        executions = self.get_executions(workflow_id, limit=100)

        recent_executions = []
        cutoff_time = datetime.now().timestamp() - (days * 24 * 3600)

        for execution in executions:
            if execution.get('startedAt'):
                exec_time = datetime.fromisoformat(execution['startedAt'].replace('Z', '+00:00')).timestamp()
                if exec_time > cutoff_time:
                    recent_executions.append(execution)

        total = len(recent_executions)
        success = len([e for e in recent_executions if e.get('status') == 'success'])
        errors = len([e for e in recent_executions if e.get('status') == 'error'])

        return {
            'total_executions': total,
            'success_count': success,
            'error_count': errors,
            'success_rate': (success / total * 100) if total > 0 else 0,
            'recent_executions': recent_executions
        }


if __name__ == "__main__":
    # Quick test of the client
    try:
        client = N8NClient()
        workflows = client.list_workflows()
        print(f"Connected to N8N successfully. Found {len(workflows)} workflows.")
    except Exception as e:
        print(f"Failed to connect to N8N: {e}")
442
claude_n8n/tools/n8n_debugger.py
Normal file
@@ -0,0 +1,442 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
N8N Workflow Debugger - Real-time error detection and test data injection
|
||||
"""
|
||||
|
||||
import json
|
||||
import requests
|
||||
import time
|
||||
import copy
|
||||
from typing import Dict, List, Optional, Any
|
||||
from datetime import datetime
|
||||
|
||||
|
||||
class N8NDebugger:
|
||||
"""Advanced N8N debugging tools with test injection and real-time monitoring"""
|
||||
|
||||
def __init__(self, config_path: str = "n8n_api_credentials.json"):
|
||||
self.config = self._load_config(config_path)
|
||||
self.session = requests.Session()
|
||||
self.session.headers.update(self.config['headers'])
|
||||
self.api_url = self.config['api_url']
|
||||
|
||||
def _load_config(self, config_path: str) -> Dict:
|
||||
with open(config_path, 'r') as f:
|
||||
return json.load(f)
|
||||
|
||||
def _make_request(self, method: str, endpoint: str, data: Optional[Dict] = None, params: Optional[Dict] = None) -> Dict:
|
||||
"""Enhanced request method with better error reporting"""
|
||||
url = f"{self.api_url.rstrip('/')}/{endpoint.lstrip('/')}"
|
||||
|
||||
try:
|
||||
if method.upper() == 'GET':
|
||||
response = self.session.get(url, params=params)
|
||||
elif method.upper() == 'POST':
|
||||
response = self.session.post(url, json=data, params=params)
|
||||
elif method.upper() == 'PUT':
|
||||
response = self.session.put(url, json=data)
|
||||
else:
|
||||
raise ValueError(f"Unsupported method: {method}")
|
||||
|
||||
response.raise_for_status()
|
||||
return response.json() if response.content else {}
|
||||
|
||||
except requests.exceptions.RequestException as e:
|
||||
print(f"❌ API Error: {method} {url}")
|
||||
print(f" Status: {getattr(e.response, 'status_code', 'Unknown')}")
|
||||
print(f" Error: {e}")
|
||||
if hasattr(e, 'response') and e.response:
|
||||
try:
|
||||
error_data = e.response.json()
|
||||
print(f" Details: {error_data}")
|
||||
except:
|
||||
print(f" Raw response: {e.response.text[:500]}")
|
||||
raise
|
||||
|
||||
def create_test_workflow(self, base_workflow_id: str, test_node_name: str, test_data: Any) -> str:
|
||||
"""Create a minimal test workflow focused on a specific node"""
|
||||
print(f"🔧 Creating test workflow for node: {test_node_name}")
|
||||
|
||||
# Get the original workflow
|
||||
workflow = self._make_request('GET', f'/workflows/{base_workflow_id}')
|
||||
|
||||
# Find the target node and its dependencies
|
||||
target_node = None
|
||||
nodes = workflow.get('nodes', [])
|
||||
|
||||
for node in nodes:
|
||||
if node.get('name') == test_node_name:
|
||||
target_node = node
|
||||
break
|
||||
|
||||
if not target_node:
|
||||
raise ValueError(f"Node '{test_node_name}' not found in workflow")
|
||||
|
||||
# Create a minimal test workflow with just the target node and a manual trigger
|
||||
test_workflow = {
|
||||
'name': f'TEST_{test_node_name}_{int(time.time())}',
|
||||
'nodes': [
|
||||
{
|
||||
'id': 'manual-trigger',
|
||||
'name': 'Manual Trigger',
|
||||
'type': 'n8n-nodes-base.manualTrigger',
|
||||
'position': [100, 100],
|
||||
'parameters': {}
|
||||
},
|
||||
{
|
||||
'id': target_node.get('id', 'test-node'),
|
||||
'name': target_node['name'],
|
||||
'type': target_node['type'],
|
||||
'position': [300, 100],
|
||||
'parameters': target_node.get('parameters', {})
|
||||
}
|
||||
],
|
||||
'connections': {
|
||||
'Manual Trigger': {
|
||||
'main': [
|
||||
[
|
||||
{
|
||||
'node': target_node['name'],
|
||||
'type': 'main',
|
||||
'index': 0
|
||||
}
|
||||
]
|
||||
]
|
||||
}
|
||||
},
|
||||
'active': False,
|
||||
'settings': {},
|
||||
'staticData': {}
|
||||
}
|
||||
|
||||
# Create the test workflow
|
||||
created = self._make_request('POST', '/workflows', test_workflow)
|
||||
test_workflow_id = created.get('id')
|
||||
|
||||
print(f"✅ Created test workflow: {test_workflow_id}")
|
||||
return test_workflow_id
|
||||
|
||||
def inject_test_data_and_execute(self, workflow_id: str, test_data: Dict) -> Dict:
|
||||
"""Execute workflow with specific test data and capture detailed results"""
|
||||
print(f"🚀 Executing workflow with test data: {test_data}")
|
||||
|
||||
try:
|
||||
# Execute with test data
|
||||
execution_result = self._make_request('POST', f'/workflows/{workflow_id}/execute', test_data)
|
||||
|
||||
# Get execution ID
|
||||
exec_id = None
|
||||
if isinstance(execution_result, dict):
|
||||
exec_id = execution_result.get('data', {}).get('id') or execution_result.get('id')
|
||||
|
||||
if not exec_id:
|
||||
print(f"⚠️ No execution ID returned, result: {execution_result}")
|
||||
return execution_result
|
||||
|
||||
print(f"📊 Monitoring execution: {exec_id}")
|
||||
|
||||
# Monitor execution with detailed logging
|
||||
start_time = time.time()
|
||||
while time.time() - start_time < 30: # 30 second timeout
|
||||
exec_details = self._make_request('GET', f'/executions/{exec_id}')
|
||||
status = exec_details.get('status')
|
||||
|
||||
if status in ['success', 'error', 'cancelled']:
|
||||
print(f"🏁 Execution completed with status: {status}")
|
||||
|
||||
if status == 'error':
|
||||
self._analyze_execution_error(exec_details)
|
||||
|
||||
return exec_details
|
||||
|
||||
print(f"⏳ Status: {status}")
|
||||
time.sleep(1)
|
||||
|
||||
print("⏰ Execution timeout")
|
||||
return exec_details
|
||||
|
||||
except Exception as e:
|
||||
print(f"💥 Execution failed: {e}")
|
||||
raise
|
||||
|
||||
def _analyze_execution_error(self, execution_details: Dict):
|
||||
"""Deep analysis of execution errors"""
|
||||
print("\n🔍 DETAILED ERROR ANALYSIS")
|
||||
print("=" * 50)
|
||||
|
||||
if 'data' not in execution_details:
|
||||
print("❌ No execution data available")
|
||||
return
|
||||
|
||||
data = execution_details['data']
|
||||
|
||||
# Check for global errors
|
||||
if 'resultData' in data:
|
||||
result_data = data['resultData']
|
||||
|
||||
if 'error' in result_data:
|
||||
global_error = result_data['error']
|
||||
print(f"🚨 GLOBAL ERROR:")
|
||||
print(f" Type: {global_error.get('type', 'Unknown')}")
|
||||
print(f" Message: {global_error.get('message', global_error)}")
|
||||
|
||||
if 'stack' in global_error:
|
||||
print(f" Stack trace:")
|
||||
for line in str(global_error['stack']).split('\\n')[:5]:
|
||||
print(f" {line}")
|
||||
|
||||
# Analyze node-specific errors
|
||||
if 'runData' in result_data:
|
||||
print(f"\n📋 NODE EXECUTION DETAILS:")
|
||||
|
||||
for node_name, runs in result_data['runData'].items():
|
||||
print(f"\n 📦 Node: {node_name}")
|
||||
|
||||
for i, run in enumerate(runs):
|
||||
print(f" Run {i+1}:")
|
||||
|
||||
if 'error' in run:
|
||||
error = run['error']
|
||||
print(f" 🚨 ERROR: {error}")
|
||||
|
||||
if isinstance(error, dict):
|
||||
if 'message' in error:
|
||||
print(f" Message: {error['message']}")
|
||||
if 'type' in error:
|
||||
print(f" Type: {error['type']}")
|
||||
if 'stack' in error:
|
||||
stack_lines = str(error['stack']).split('\\n')[:3]
|
||||
print(f" Stack: {stack_lines}")
|
||||
|
||||
if 'data' in run:
|
||||
run_data = run['data']
|
||||
if 'main' in run_data and run_data['main']:
|
||||
print(f" ✅ Input data: {len(run_data['main'])} items")
|
||||
for j, item in enumerate(run_data['main'][:2]): # Show first 2 items
|
||||
print(f" Item {j+1}: {str(item)[:100]}...")
|
||||
|
||||
if 'startTime' in run:
|
||||
print(f" ⏱️ Started: {run['startTime']}")
|
||||
if 'executionTime' in run:
|
||||
print(f" ⏱️ Duration: {run['executionTime']}ms")
|
||||
|
||||
def test_information_extractor_with_samples(self, workflow_id: str) -> List[Dict]:
|
||||
"""Test Information Extractor with various problematic data samples"""
|
||||
print("\n🧪 TESTING INFORMATION EXTRACTOR WITH SAMPLE DATA")
|
||||
print("=" * 60)
|
||||
|
||||
# Create test data samples that might cause template errors
|
||||
test_samples = [
|
||||
{
|
||||
"name": "simple_text",
|
||||
"data": {"chunk": "This is a simple test message"}
|
||||
},
|
||||
{
|
||||
"name": "single_quotes",
|
||||
"data": {"chunk": "This message contains single quotes: it's working"}
|
||||
},
|
||||
{
|
||||
"name": "json_like_with_quotes",
|
||||
"data": {"chunk": '{"message": "it\'s a test with quotes"}'}
|
||||
},
|
||||
{
|
||||
"name": "template_like_syntax",
|
||||
"data": {"chunk": "Template syntax: {variable} with quote: that's it"}
|
||||
},
|
||||
{
|
||||
"name": "mixed_quotes_and_braces",
|
||||
"data": {"chunk": "Complex: {item: 'value'} and more {data: 'test'}"}
|
||||
},
|
||||
{
|
||||
"name": "czech_text_with_quotes",
|
||||
"data": {"chunk": "Český text s apostrofy: to je náš systém"}
|
||||
},
|
||||
{
|
||||
"name": "empty_chunk",
|
||||
"data": {"chunk": ""}
|
||||
},
|
||||
{
|
||||
"name": "null_chunk",
|
||||
"data": {"chunk": None}
|
||||
},
|
||||
{
|
||||
"name": "unicode_and_quotes",
|
||||
"data": {"chunk": "Unicode: ěščřžýáíé with quotes: that's nice"}
|
||||
}
|
||||
]
|
||||
|
||||
results = []
|
||||
|
||||
for sample in test_samples:
|
||||
print(f"\n🔬 Testing: {sample['name']}")
|
||||
print(f" Data: {sample['data']}")
|
||||
|
||||
try:
|
||||
# Create a temporary test workflow for this node
|
||||
test_workflow_id = self.create_test_workflow(workflow_id, 'Information Extractor', sample['data'])
|
||||
|
||||
try:
|
||||
# Execute with the test data
|
||||
result = self.inject_test_data_and_execute(test_workflow_id, sample['data'])
|
||||
|
||||
test_result = {
|
||||
'sample': sample['name'],
|
||||
'input_data': sample['data'],
|
||||
'status': result.get('status'),
|
||||
'success': result.get('status') == 'success',
|
||||
'error_details': None
|
||||
}
|
||||
|
||||
if result.get('status') == 'error':
|
||||
test_result['error_details'] = self._extract_error_summary(result)
|
||||
print(f" ❌ FAILED: {test_result['error_details']}")
|
||||
else:
|
||||
print(f" ✅ SUCCESS")
|
||||
|
||||
results.append(test_result)
|
||||
|
||||
finally:
|
||||
# Clean up test workflow
|
||||
try:
|
||||
self._make_request('DELETE', f'/workflows/{test_workflow_id}')
|
||||
except:
|
||||
pass
|
||||
|
||||
except Exception as e:
|
||||
print(f" 💥 EXCEPTION: {e}")
|
||||
results.append({
|
||||
'sample': sample['name'],
|
||||
'input_data': sample['data'],
|
||||
'status': 'exception',
|
||||
'success': False,
|
||||
'error_details': str(e)
|
||||
})
|
||||
|
||||
# Summary
|
||||
print(f"\n📊 TEST RESULTS SUMMARY")
|
||||
print("=" * 30)
|
||||
|
||||
success_count = len([r for r in results if r['success']])
|
||||
total_count = len(results)
|
||||
|
||||
print(f"✅ Successful: {success_count}/{total_count}")
|
||||
print(f"❌ Failed: {total_count - success_count}/{total_count}")
|
||||
|
||||
print(f"\n🚨 FAILED TESTS:")
|
||||
for result in results:
|
||||
if not result['success']:
|
||||
print(f" - {result['sample']}: {result.get('error_details', 'Unknown error')}")
|
||||
|
||||
return results
|
||||
|
||||
def _extract_error_summary(self, execution_details: Dict) -> str:
|
||||
"""Extract a concise error summary"""
|
||||
if 'data' not in execution_details:
|
||||
return "No execution data"
|
||||
|
||||
data = execution_details['data']
|
||||
|
||||
if 'resultData' in data:
|
||||
result_data = data['resultData']
|
||||
|
||||
# Check for global error
|
||||
if 'error' in result_data:
|
||||
error = result_data['error']
|
||||
if isinstance(error, dict):
|
||||
return error.get('message', str(error))
|
||||
return str(error)
|
||||
|
||||
# Check for node errors
|
||||
if 'runData' in result_data:
|
||||
for node_name, runs in result_data['runData'].items():
|
||||
for run in runs:
|
||||
if 'error' in run:
|
||||
error = run['error']
|
||||
if isinstance(error, dict):
|
||||
return f"{node_name}: {error.get('message', str(error))}"
|
||||
return f"{node_name}: {str(error)}"
|
||||
|
||||
return "Unknown error"
|
||||
|
||||
def monitor_workflow_realtime(self, workflow_id: str, duration_seconds: int = 60):
|
||||
"""Monitor workflow executions in real-time and catch errors immediately"""
|
||||
print(f"\n📡 REAL-TIME MONITORING ({duration_seconds}s)")
|
||||
print("=" * 40)
|
||||
|
||||
start_time = time.time()
|
||||
seen_executions = set()
|
||||
|
||||
# Get initial executions
|
||||
try:
|
||||
initial_execs = self._make_request('GET', '/executions', {'workflowId': workflow_id, 'limit': 5})
|
||||
for exec_data in initial_execs.get('data', []):
|
||||
seen_executions.add(exec_data['id'])
|
||||
except:
|
||||
pass
|
||||
|
||||
while time.time() - start_time < duration_seconds:
|
||||
try:
|
||||
# Get recent executions
|
||||
executions = self._make_request('GET', '/executions', {'workflowId': workflow_id, 'limit': 10})
|
||||
|
||||
for exec_data in executions.get('data', []):
|
||||
exec_id = exec_data['id']
|
||||
|
||||
if exec_id not in seen_executions:
|
||||
seen_executions.add(exec_id)
|
||||
status = exec_data.get('status')
|
||||
started_at = exec_data.get('startedAt', '')
|
||||
|
||||
print(f"\n🆕 New execution: {exec_id}")
|
||||
print(f" Started: {started_at}")
|
||||
print(f" Status: {status}")
|
||||
|
||||
if status == 'error':
|
||||
print(f" 🚨 ERROR DETECTED - Getting details...")
|
||||
|
||||
# Get full error details
|
||||
try:
|
||||
details = self._make_request('GET', f'/executions/{exec_id}')
|
||||
self._analyze_execution_error(details)
|
||||
except Exception as e:
|
||||
print(f" Failed to get error details: {e}")
|
||||
|
||||
elif status == 'success':
|
||||
print(f" ✅ Success")
|
||||
|
||||
time.sleep(2)
|
||||
|
||||
except Exception as e:
|
||||
print(f" Monitoring error: {e}")
|
||||
time.sleep(2)
|
||||
|
||||
print(f"\n📊 Monitoring complete. Watched for {duration_seconds} seconds.")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
# Test the debugger
|
||||
debugger = N8NDebugger()
|
||||
|
||||
print("🔧 N8N Debugger initialized")
|
||||
print("Testing Information Extractor with sample data...")
|
||||
|
||||
try:
|
||||
# Test with various data samples
|
||||
results = debugger.test_information_extractor_with_samples('w6Sz5trluur5qdMj')
|
||||
|
||||
print("\n🎯 FINAL RESULTS:")
|
||||
failed_tests = [r for r in results if not r['success']]
|
||||
if failed_tests:
|
||||
print("❌ Template errors found with these data patterns:")
|
||||
for test in failed_tests:
|
||||
print(f" - {test['sample']}: {test['input_data']}")
|
||||
print(f" Error: {test['error_details']}")
|
||||
else:
|
||||
print("✅ All tests passed - no template errors detected")
|
||||
|
||||
except Exception as e:
|
||||
print(f"💥 Debugger failed: {e}")
|
||||
|
||||
# Fall back to real-time monitoring
|
||||
print("\n📡 Falling back to real-time monitoring...")
|
||||
debugger.monitor_workflow_realtime('w6Sz5trluur5qdMj', 30)
|
||||
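The `results` list the debugger returns is plain, JSON-serializable data, which makes it easy to hand off to other tools in the kit. A minimal sketch of that hand-off (the output filename is an assumption for illustration, not part of this commit):

```python
import json

# Hypothetical hand-off: persist the debugger's findings so another tool
# (or a Claude Code session) can inspect them without re-running the tests.
results = debugger.test_information_extractor_with_samples('w6Sz5trluur5qdMj')
with open('debug_results.json', 'w') as f:
    json.dump(results, f, indent=2, ensure_ascii=False)
```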
258
claude_n8n/tools/real_time_error_catcher.py
Normal file
@@ -0,0 +1,258 @@
#!/usr/bin/env python3
"""
Real-time Error Catcher - Monitor Matrix workflow and catch template errors as they happen
"""

import sys
sys.path.append('/home/klas/claude_n8n/tools')
import json
import requests
import time
import threading
from datetime import datetime
from typing import Dict, Optional


class RealTimeErrorCatcher:
    """Catch template errors in real-time by monitoring workflow executions"""

    def __init__(self, config_path: str = "n8n_api_credentials.json"):
        self.config = self._load_config(config_path)
        self.session = requests.Session()
        self.session.headers.update(self.config['headers'])
        self.api_url = self.config['api_url']
        self.monitoring = False
        self.error_found = False
        self.detailed_errors = []

    def _load_config(self, config_path: str) -> Dict:
        with open(config_path, 'r') as f:
            return json.load(f)

    def _make_request(self, method: str, endpoint: str, params: Optional[Dict] = None) -> Dict:
        url = f"{self.api_url.rstrip('/')}/{endpoint.lstrip('/')}"

        try:
            if method.upper() == 'GET':
                response = self.session.get(url, params=params)
            else:
                raise ValueError("Only GET supported in this tool")

            response.raise_for_status()
            return response.json() if response.content else {}

        except Exception as e:
            print(f"API Error: {e}")
            return {}

    def start_monitoring(self, workflow_id: str):
        """Start monitoring workflow executions for errors"""
        self.monitoring = True
        self.error_found = False
        self.detailed_errors = []

        print(f"🎯 Starting real-time error monitoring for Matrix workflow")
        print(f"🔍 Monitoring workflow: {workflow_id}")
        print(f"⏰ Started at: {datetime.now().strftime('%H:%M:%S')}")
        print("=" * 60)

        # Get baseline executions
        seen_executions = set()
        try:
            initial = self._make_request('GET', '/executions', {'workflowId': workflow_id, 'limit': 5})
            for exec_data in initial.get('data', []):
                seen_executions.add(exec_data['id'])
            print(f"📊 Baseline: {len(seen_executions)} existing executions")
        except Exception:
            print("⚠️ Could not get baseline executions")

        # Monitor loop
        consecutive_successes = 0
        while self.monitoring:
            try:
                # Get recent executions
                executions = self._make_request('GET', '/executions', {'workflowId': workflow_id, 'limit': 10})

                new_executions = []
                for exec_data in executions.get('data', []):
                    exec_id = exec_data['id']
                    if exec_id not in seen_executions:
                        seen_executions.add(exec_id)
                        new_executions.append(exec_data)

                # Process new executions
                for exec_data in new_executions:
                    exec_id = exec_data['id']
                    status = exec_data.get('status')
                    started_at = exec_data.get('startedAt', '')

                    timestamp = datetime.now().strftime('%H:%M:%S')
                    print(f"\n🆕 [{timestamp}] New execution: {exec_id}")
                    print(f"   Status: {status}")

                    if status == 'error':
                        print(f"   🚨 ERROR DETECTED! Analyzing...")

                        # Get detailed error information
                        try:
                            details = self._make_request('GET', f'/executions/{exec_id}')
                            error_info = self._deep_analyze_error(details)

                            if error_info:
                                self.detailed_errors.append({
                                    'execution_id': exec_id,
                                    'timestamp': timestamp,
                                    'error_info': error_info
                                })

                                if 'template' in error_info.lower() or 'single' in error_info.lower():
                                    print(f"   💥 TEMPLATE ERROR CONFIRMED!")
                                    print(f"   📝 Error: {error_info}")
                                    self.error_found = True

                                    # Try to get the input data that caused this
                                    input_data = self._extract_input_data(details)
                                    if input_data:
                                        print(f"   📥 Input data that triggered error:")
                                        print(f"      {input_data}")
                                else:
                                    print(f"   📝 Non-template error: {error_info}")
                            else:
                                print(f"   ❓ Could not extract error details")

                        except Exception as e:
                            print(f"   ❌ Failed to analyze error: {e}")

                    elif status == 'success':
                        consecutive_successes += 1
                        print(f"   ✅ Success (consecutive: {consecutive_successes})")

                    else:
                        print(f"   ⏳ Status: {status}")

                # If we found a template error, we can stop or continue monitoring
                if self.error_found:
                    print(f"\n🎯 TEMPLATE ERROR FOUND! Continuing to monitor for patterns...")

                time.sleep(1)  # Check every second

            except KeyboardInterrupt:
                print(f"\n⏹️ Monitoring stopped by user")
                break
            except Exception as e:
                print(f"   ⚠️ Monitoring error: {e}")
                time.sleep(2)

        self.monitoring = False
        print(f"\n📊 MONITORING SUMMARY")
        print("=" * 30)
        print(f"🔍 Total errors detected: {len(self.detailed_errors)}")
        print(f"💥 Template errors found: {len([e for e in self.detailed_errors if 'template' in e['error_info'].lower()])}")

        return self.detailed_errors

    def _deep_analyze_error(self, execution_details: Dict) -> str:
        """Extract detailed error information"""
        if 'data' not in execution_details:
            return "No execution data"

        data = execution_details['data']

        # Check for global errors first
        if 'resultData' in data:
            result_data = data['resultData']

            if 'error' in result_data:
                error = result_data['error']
                if isinstance(error, dict):
                    message = error.get('message', str(error))
                    error_type = error.get('type', '')
                    stack = error.get('stack', '')

                    full_error = f"Type: {error_type}, Message: {message}"
                    if 'template' in message.lower() or 'single' in message.lower():
                        if stack:
                            # Extract relevant stack trace lines
                            stack_lines = str(stack).split('\n')[:3]
                            full_error += f", Stack: {stack_lines}"

                    return full_error
                else:
                    return str(error)

            # Check node-specific errors
            if 'runData' in result_data:
                for node_name, runs in result_data['runData'].items():
                    for run in runs:
                        if 'error' in run:
                            error = run['error']
                            if isinstance(error, dict):
                                message = error.get('message', str(error))
                                return f"Node {node_name}: {message}"
                            else:
                                return f"Node {node_name}: {str(error)}"

        return "Unknown error structure"

    def _extract_input_data(self, execution_details: Dict) -> Optional[str]:
        """Try to extract the input data that caused the error"""
        if 'data' not in execution_details:
            return None

        data = execution_details['data']

        if 'resultData' in data and 'runData' in data['resultData']:
            run_data = data['resultData']['runData']

            # Look for data that would go into Information Extractor
            for node_name in ['Split Out', 'Loop Over Items', 'Code4', 'HTTP Request3']:
                if node_name in run_data:
                    node_runs = run_data[node_name]
                    for run in node_runs:
                        if 'data' in run and 'main' in run['data']:
                            main_data = run['data']['main']
                            if main_data and len(main_data) > 0:
                                # Extract chunk data
                                for item in main_data[:2]:  # First 2 items
                                    if isinstance(item, dict) and 'chunk' in item:
                                        chunk = item['chunk']
                                        if isinstance(chunk, str) and len(chunk) > 0:
                                            return f"chunk: {repr(chunk[:200])}..."

        return None

    def stop_monitoring(self):
        """Stop the monitoring"""
        self.monitoring = False


def manual_trigger_test():
    """Manually trigger some test scenarios"""
    print("🧪 MANUAL TEST SCENARIOS")
    print("This would inject test data if we had manual trigger capability")
    print("For now, we rely on the scheduled executions to trigger errors")


if __name__ == "__main__":
    catcher = RealTimeErrorCatcher()

    print("🔧 Real-time Error Catcher initialized")
    print("🎯 This tool will monitor Matrix workflow executions and catch template errors")
    print("💡 The workflow runs every second, so errors should be detected quickly")

    try:
        # Start monitoring
        errors = catcher.start_monitoring('w6Sz5trluur5qdMj')

        print(f"\n🎯 FINAL RESULTS:")
        if errors:
            print(f"❌ Found {len(errors)} errors:")
            for error in errors:
                print(f"  - {error['timestamp']}: {error['error_info']}")
        else:
            print("✅ No errors detected during monitoring period")

    except KeyboardInterrupt:
        print("\n⏹️ Monitoring interrupted by user")
    except Exception as e:
        print(f"\n💥 Error catcher failed: {e}")
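`start_monitoring` loops until `self.monitoring` is cleared, so a bounded, unattended run needs something outside the loop to call `stop_monitoring`. One way to do that is with the `threading` import the module already pulls in; a sketch under that assumption, not part of the committed file:

```python
import threading

from real_time_error_catcher import RealTimeErrorCatcher  # module path assumed

catcher = RealTimeErrorCatcher()

# Flip the monitoring flag after 60 seconds instead of relying on Ctrl+C;
# the loop notices the cleared flag on its next one-second poll and exits.
timer = threading.Timer(60.0, catcher.stop_monitoring)
timer.start()
errors = catcher.start_monitoring('w6Sz5trluur5qdMj')
timer.cancel()  # no-op if the timer already fired
```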
279
claude_n8n/tools/template_error_reproducer.py
Normal file
@@ -0,0 +1,279 @@
#!/usr/bin/env python3
"""
Template Error Reproducer - Find the exact input causing template errors
"""

import sys
sys.path.append('/home/klas/claude_n8n/tools')
import json
import requests
from typing import Dict, Optional


class TemplateErrorReproducer:
    """Reproduce template errors by analyzing the exact configuration"""

    def __init__(self, config_path: str = "n8n_api_credentials.json"):
        self.config = self._load_config(config_path)
        self.session = requests.Session()
        self.session.headers.update(self.config['headers'])
        self.api_url = self.config['api_url']

    def _load_config(self, config_path: str) -> Dict:
        with open(config_path, 'r') as f:
            return json.load(f)

    def _make_request(self, method: str, endpoint: str, data: Optional[Dict] = None) -> Dict:
        url = f"{self.api_url.rstrip('/')}/{endpoint.lstrip('/')}"

        try:
            if method.upper() == 'GET':
                response = self.session.get(url, params=data)
            elif method.upper() == 'PUT':
                response = self.session.put(url, json=data)
            else:
                raise ValueError(f"Unsupported method: {method}")

            response.raise_for_status()
            return response.json() if response.content else {}

        except Exception as e:
            print(f"API Error: {e}")
            # Note: bool(response) is False for 4xx/5xx, so compare to None explicitly
            if hasattr(e, 'response') and e.response is not None:
                print(f"Response: {e.response.text[:300]}")
            raise

    def analyze_information_extractor_config(self, workflow_id: str):
        """Analyze the exact Information Extractor configuration"""
        print("🔍 ANALYZING INFORMATION EXTRACTOR CONFIGURATION")
        print("=" * 60)

        # Get workflow
        workflow = self._make_request('GET', f'/workflows/{workflow_id}')

        # Find Information Extractor node
        info_extractor = None
        for node in workflow.get('nodes', []):
            if node.get('name') == 'Information Extractor':
                info_extractor = node
                break

        if not info_extractor:
            print("❌ Information Extractor node not found")
            return

        print(f"✅ Found Information Extractor node")

        params = info_extractor.get('parameters', {})

        # Analyze each parameter that could cause template issues
        print(f"\n📋 PARAMETER ANALYSIS:")

        for param_name, param_value in params.items():
            print(f"\n  📝 {param_name}:")

            if isinstance(param_value, str):
                print(f"     Type: string")
                print(f"     Length: {len(param_value)}")
                print(f"     Preview: {repr(param_value[:100])}...")

                # Check for potential template issues
                if '{{' in param_value and '}}' in param_value:
                    print(f"     ⚠️ Contains N8N expressions")
                    self._analyze_n8n_expression(param_value)

                if '"' in param_value and "'" in param_value:
                    print(f"     ⚠️ Contains mixed quotes")

            elif isinstance(param_value, dict):
                print(f"     Type: object with {len(param_value)} properties")
                for key, value in param_value.items():
                    if isinstance(value, str) and len(value) > 50:
                        print(f"     {key}: {type(value)} ({len(value)} chars)")

                        # Check for template issues in nested values
                        if '"' in value and "'" in value:
                            print(f"     ⚠️ Mixed quotes detected")
                            self._find_quote_issues(value, f"{param_name}.{key}")

        return info_extractor

    def _analyze_n8n_expression(self, expression: str):
        """Analyze N8N expressions for potential issues"""
        print(f"     🔧 N8N Expression Analysis:")

        # Extract the expression content
        if '{{' in expression and '}}' in expression:
            start = expression.find('{{') + 2
            end = expression.find('}}')
            expr_content = expression[start:end].strip()

            print(f"     Expression: {expr_content}")

            # Check for problematic patterns
            if 'JSON.stringify' in expr_content:
                print(f"     ⚠️ Uses JSON.stringify - can cause quote escaping issues")

            if '$json.chunk' in expr_content:
                print(f"     🎯 LIKELY ISSUE: JSON.stringify($json.chunk)")
                print(f"     💡 When chunk contains single quotes, this creates:")
                print(f"        JSON.stringify(\"text with 'quotes'\") → \"\\\"text with 'quotes'\\\"\"")
                print(f"     💡 The escaped quotes can break LangChain f-string parsing")

            if '.replace(' in expr_content:
                print(f"     ⚠️ Uses string replacement - check for quote handling")

    def _find_quote_issues(self, text: str, context: str):
        """Find specific quote-related issues"""
        print(f"     🔍 Quote Analysis for {context}:")

        # Look for unescaped quotes in JSON-like strings
        lines = text.split('\n')
        for i, line in enumerate(lines):
            if '"' in line and "'" in line:
                # Check for single quotes inside double-quoted strings
                if line.count('"') >= 2:
                    # Find content between quotes
                    parts = line.split('"')
                    for j in range(1, len(parts), 2):  # Odd indices are inside quotes
                        content = parts[j]
                        if "'" in content and "\\'" not in content:
                            print(f"     Line {i+1}: Unescaped single quote in JSON string")
                            print(f"     Content: {repr(content)}")
                            print(f"     🚨 THIS COULD CAUSE TEMPLATE ERRORS")

    def create_fixed_configuration(self, workflow_id: str):
        """Create a fixed version of the Information Extractor configuration"""
        print(f"\n🔧 CREATING FIXED CONFIGURATION")
        print("=" * 40)

        workflow = self._make_request('GET', f'/workflows/{workflow_id}')

        # Find and fix Information Extractor
        fixed = False
        for node in workflow.get('nodes', []):
            if node.get('name') == 'Information Extractor':
                params = node.get('parameters', {})

                # Fix the text parameter that uses JSON.stringify
                if 'text' in params:
                    current_text = params['text']
                    print(f"Current text param: {repr(current_text)}")

                    if '{{ JSON.stringify($json.chunk) }}' in current_text:
                        # Replace with a safer approach that doesn't use JSON.stringify
                        new_text = '{{ $json.chunk || "" }}'
                        params['text'] = new_text
                        print(f"Fixed text param: {repr(new_text)}")
                        fixed = True

                # Check and fix any other string parameters with quote issues
                def fix_quotes_in_object(obj, path=""):
                    # Recursively escape unescaped single quotes in string values;
                    # mutates dicts in place so the fixes land in the payload.
                    changed = False
                    if isinstance(obj, dict):
                        for key, value in obj.items():
                            child_path = f"{path}.{key}" if path else key
                            if isinstance(value, str):
                                if '"' in value and "'" in value and "\\'" not in value:
                                    fixed_str = value.replace("'", "\\'")
                                    if fixed_str != value:
                                        print(f"Fixed quotes in {child_path}: {repr(value[:50])}... → {repr(fixed_str[:50])}...")
                                        obj[key] = fixed_str
                                        changed = True
                            elif fix_quotes_in_object(value, child_path):
                                changed = True
                    return changed

                if fix_quotes_in_object(params):
                    print("Applied additional quote fixes")
                    fixed = True

                break

        if fixed:
            # Apply the fixes
            update_payload = {
                'name': workflow['name'],
                'nodes': workflow['nodes'],
                'connections': workflow['connections'],
                'settings': workflow.get('settings', {}),
                'staticData': workflow.get('staticData', {})
            }

            try:
                result = self._make_request('PUT', f'/workflows/{workflow_id}', update_payload)
                print(f"✅ Applied fixes to workflow")
                print(f"Updated at: {result.get('updatedAt')}")
                return True
            except Exception as e:
                print(f"❌ Failed to apply fixes: {e}")
                return False
        else:
            print("ℹ️ No fixes needed or no issues found")
            return False

    def test_specific_inputs(self):
        """Test specific inputs that are known to cause template issues"""
        print(f"\n🧪 TESTING SPECIFIC PROBLEMATIC INPUTS")
        print("=" * 45)

        # These are inputs that commonly cause template parsing errors
        problematic_inputs = [
            "Message with single quote: it's working",
            '{"json": "with single quote: it\'s here"}',
            "Template-like: {variable} with quote: that's it",
            "Czech text: to je náš systém",
            "Mixed quotes: \"text with 'inner quotes'\"",
        ]

        for input_text in problematic_inputs:
            print(f"\n🔬 Testing input: {repr(input_text)}")

            # Simulate what happens with JSON.stringify
            try:
                import json as py_json
                json_result = py_json.dumps(input_text)
                print(f"   JSON.stringify result: {repr(json_result)}")

                # Check if this would cause template issues
                if "\\'" in json_result:
                    print(f"   ⚠️ Contains escaped single quotes - potential template issue")
                elif '"' in json_result and "'" in json_result:
                    print(f"   ⚠️ Contains mixed quotes - potential template issue")
                else:
                    print(f"   ✅ Should be safe for templates")

            except Exception as e:
                print(f"   ❌ JSON conversion failed: {e}")


if __name__ == "__main__":
    reproducer = TemplateErrorReproducer()

    print("🔧 Template Error Reproducer")
    print("🎯 This tool will analyze the exact configuration causing template errors")

    try:
        # Analyze current configuration
        config = reproducer.analyze_information_extractor_config('w6Sz5trluur5qdMj')

        # Test problematic inputs
        reproducer.test_specific_inputs()

        # Create fixed configuration
        print("\n" + "=" * 60)
        fixed = reproducer.create_fixed_configuration('w6Sz5trluur5qdMj')

        if fixed:
            print(f"\n✅ FIXES APPLIED!")
            print(f"🎯 The template error should now be resolved")
            print(f"💡 Key fix: Replaced JSON.stringify($json.chunk) with safer alternative")
        else:
            print(f"\n❓ No obvious template issues found in configuration")
            print(f"💡 The error might be caused by specific runtime data")

    except Exception as e:
        print(f"💥 Analysis failed: {e}")
        import traceback
        traceback.print_exc()
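The failure mode this tool hunts for is easiest to see with a concrete value. Python's `json.dumps` mirrors what `JSON.stringify` produces in the n8n expression: the chunk is wrapped in double quotes, while curly braces and single quotes pass through untouched, and a LangChain-style f-string template then reads any leftover `{...}` as a required input variable. A minimal demonstration (the prompt wording is illustrative):

```python
import json

chunk = "Template-like: {variable} with quote: that's it"
stringified = json.dumps(chunk)
print(stringified)
# "Template-like: {variable} with quote: that's it"  <- braces survive verbatim

# Splice it into an f-string-style prompt template, as the Information
# Extractor node effectively does: {variable} now looks like a template
# placeholder, so template parsing fails on payload that is pure text.
prompt = f"Extract entities from: {stringified}"
print(prompt)
```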
438
claude_n8n/tools/workflow_analyzer.py
Normal file
@@ -0,0 +1,438 @@
#!/usr/bin/env python3
"""
Workflow Analyzer - Tools for analyzing N8N workflows and execution results
Provides debugging, error analysis, and performance insights
"""

import json
import re
from typing import Dict, List, Optional, Any, Tuple
from datetime import datetime
from dataclasses import dataclass
from collections import defaultdict


@dataclass
class AnalysisResult:
    """Result of workflow analysis"""
    workflow_id: str
    issues: List[Dict]
    suggestions: List[str]
    performance_metrics: Dict
    error_patterns: List[Dict]
    health_score: float


class WorkflowAnalyzer:
    """Analyzes N8N workflows for issues, performance, and optimization opportunities"""

    def __init__(self):
        self.common_issues = {
            'missing_error_handling': 'Node lacks error handling configuration',
            'hardcoded_credentials': 'Credentials are hardcoded instead of using credential store',
            'inefficient_loops': 'Loop structure may cause performance issues',
            'missing_validation': 'Input validation is missing or insufficient',
            'timeout_issues': 'Request timeout settings may be too aggressive',
            'rate_limit_violations': 'API rate limits not properly handled'
        }

    def analyze_workflow_structure(self, workflow: Dict) -> Dict:
        """Analyze workflow structure for common issues"""
        issues = []
        suggestions = []

        nodes = workflow.get('nodes', [])
        connections = workflow.get('connections', {})

        # Check for common structural issues
        issues.extend(self._check_error_handling(nodes))
        issues.extend(self._check_credential_usage(nodes))
        issues.extend(self._check_node_configurations(nodes))
        issues.extend(self._check_workflow_complexity(nodes, connections))

        # Generate suggestions based on issues
        suggestions.extend(self._generate_suggestions(issues))

        return {
            'issues': issues,
            'suggestions': suggestions,
            'node_count': len(nodes),
            'connection_count': sum(len(conns.get('main', [])) for conns in connections.values()),
            'complexity_score': self._calculate_complexity_score(nodes, connections)
        }

    def analyze_execution_logs(self, execution_data: Dict) -> Dict:
        """Analyze execution logs for errors and performance issues"""
        execution_id = execution_data.get('id')
        status = execution_data.get('status')
        data = execution_data.get('data', {})

        analysis = {
            'execution_id': execution_id,
            'status': status,
            'errors': [],
            'warnings': [],
            'performance_issues': [],
            'node_timings': {},
            'total_duration': 0
        }

        if status == 'error':
            analysis['errors'] = self._extract_errors(data)

        # Analyze node performance
        if 'resultData' in data:
            analysis['node_timings'] = self._analyze_node_timings(data['resultData'])
            analysis['performance_issues'] = self._identify_performance_issues(analysis['node_timings'])

        # Calculate total execution time
        start_time = execution_data.get('startedAt')
        finish_time = execution_data.get('finishedAt')
        if start_time and finish_time:
            start_dt = datetime.fromisoformat(start_time.replace('Z', '+00:00'))
            finish_dt = datetime.fromisoformat(finish_time.replace('Z', '+00:00'))
            analysis['total_duration'] = (finish_dt - start_dt).total_seconds()

        return analysis

    def find_error_patterns(self, executions: List[Dict]) -> List[Dict]:
        """Identify recurring error patterns across multiple executions"""
        error_patterns = defaultdict(int)
        error_details = defaultdict(list)

        for execution in executions:
            if execution.get('status') == 'error':
                errors = self._extract_errors(execution.get('data', {}))
                for error in errors:
                    error_type = self._categorize_error(error)
                    error_patterns[error_type] += 1
                    error_details[error_type].append({
                        'execution_id': execution.get('id'),
                        'timestamp': execution.get('startedAt'),
                        'error': error
                    })

        patterns = []
        for pattern, count in error_patterns.items():
            patterns.append({
                'pattern': pattern,
                'frequency': count,
                'percentage': (count / len(executions)) * 100,
                'examples': error_details[pattern][:3]  # First 3 examples
            })

        return sorted(patterns, key=lambda x: x['frequency'], reverse=True)

    def generate_health_report(self, workflow: Dict, executions: List[Dict]) -> AnalysisResult:
        """Generate comprehensive health report for a workflow"""
        workflow_id = workflow.get('id')

        # Analyze workflow structure
        structure_analysis = self.analyze_workflow_structure(workflow)

        # Analyze recent executions
        execution_analyses = [self.analyze_execution_logs(execution) for execution in executions[-10:]]
        error_patterns = self.find_error_patterns(executions)

        # Calculate performance metrics
        performance_metrics = self._calculate_performance_metrics(execution_analyses)

        # Calculate health score
        health_score = self._calculate_health_score(structure_analysis, execution_analyses, error_patterns)

        # Combine all issues and suggestions
        all_issues = structure_analysis['issues']
        all_suggestions = structure_analysis['suggestions']

        # Add execution-based suggestions
        if error_patterns:
            all_suggestions.extend(self._suggest_error_fixes(error_patterns))

        return AnalysisResult(
            workflow_id=workflow_id,
            issues=all_issues,
            suggestions=all_suggestions,
            performance_metrics=performance_metrics,
            error_patterns=error_patterns,
            health_score=health_score
        )

    def _check_error_handling(self, nodes: List[Dict]) -> List[Dict]:
        """Check for missing error handling in nodes"""
        issues = []

        for node in nodes:
            node_type = node.get('type', '')
            if node_type in ['n8n-nodes-base.httpRequest', 'n8n-nodes-base.webhook']:
                # Check if error handling is configured
                parameters = node.get('parameters', {})
                if not parameters.get('continueOnFail') and not parameters.get('errorHandling'):
                    issues.append({
                        'type': 'missing_error_handling',
                        'node': node.get('name'),
                        'severity': 'medium',
                        'description': f"Node '{node.get('name')}' lacks error handling configuration"
                    })

        return issues

    def _check_credential_usage(self, nodes: List[Dict]) -> List[Dict]:
        """Check for hardcoded credentials"""
        issues = []

        for node in nodes:
            parameters = node.get('parameters', {})
            param_str = json.dumps(parameters)

            # Look for potential hardcoded credentials
            suspicious_patterns = [
                r'password.*["\'].*["\']',
                r'token.*["\'].*["\']',
                r'key.*["\'].*["\']',
                r'secret.*["\'].*["\']'
            ]

            for pattern in suspicious_patterns:
                if re.search(pattern, param_str, re.IGNORECASE):
                    issues.append({
                        'type': 'hardcoded_credentials',
                        'node': node.get('name'),
                        'severity': 'high',
                        'description': f"Node '{node.get('name')}' may contain hardcoded credentials"
                    })
                    break

        return issues

    def _check_node_configurations(self, nodes: List[Dict]) -> List[Dict]:
        """Check node configurations for common issues"""
        issues = []

        for node in nodes:
            node_type = node.get('type', '')
            parameters = node.get('parameters', {})

            # Check HTTP request timeouts
            if node_type == 'n8n-nodes-base.httpRequest':
                timeout = parameters.get('timeout', 300)  # Default 5 minutes
                if timeout < 30:
                    issues.append({
                        'type': 'timeout_issues',
                        'node': node.get('name'),
                        'severity': 'low',
                        'description': f"HTTP timeout ({timeout}s) may be too aggressive"
                    })

            # Check for missing required parameters
            if not parameters:
                issues.append({
                    'type': 'missing_validation',
                    'node': node.get('name'),
                    'severity': 'medium',
                    'description': f"Node '{node.get('name')}' has no parameters configured"
                })

        return issues

    def _check_workflow_complexity(self, nodes: List[Dict], connections: Dict) -> List[Dict]:
        """Check workflow complexity and structure"""
        issues = []

        # Check for overly complex workflows (>20 nodes)
        if len(nodes) > 20:
            issues.append({
                'type': 'workflow_complexity',
                'severity': 'medium',
                'description': f"Workflow has {len(nodes)} nodes, consider breaking into smaller workflows"
            })

        # Check for disconnected nodes
        connected_nodes = set()
        for source, targets in connections.items():
            connected_nodes.add(source)
            for target_list in targets.get('main', []):
                for target in target_list:
                    connected_nodes.add(target.get('node'))

        all_nodes = {node.get('name') for node in nodes}
        disconnected = all_nodes - connected_nodes

        if disconnected:
            issues.append({
                'type': 'disconnected_nodes',
                'severity': 'high',
                'description': f"Disconnected nodes found: {', '.join(disconnected)}"
            })

        return issues

    def _extract_errors(self, execution_data: Dict) -> List[Dict]:
        """Extract error information from execution data"""
        errors = []

        if 'resultData' in execution_data:
            result_data = execution_data['resultData']
            if 'error' in result_data:
                error_info = result_data['error']
                errors.append({
                    'message': error_info.get('message', ''),
                    'stack': error_info.get('stack', ''),
                    'type': error_info.get('name', 'Unknown'),
                    'node': error_info.get('node', 'Unknown')
                })

        return errors

    def _categorize_error(self, error: Dict) -> str:
        """Categorize error by type"""
        message = error.get('message', '').lower()

        if 'timeout' in message:
            return 'timeout_error'
        elif 'connection' in message or 'network' in message:
            return 'connection_error'
        elif 'authentication' in message or 'unauthorized' in message:
            return 'auth_error'
        elif 'rate limit' in message or '429' in message:
            return 'rate_limit_error'
        elif 'validation' in message or 'invalid' in message:
            return 'validation_error'
        else:
            return 'generic_error'

    def _analyze_node_timings(self, result_data: Dict) -> Dict:
        """Analyze timing data for each node"""
        timings = {}

        # Extract timing information from result data
        # This would need to be adapted based on actual N8N execution data structure
        run_data = result_data.get('runData', {})

        for node_name, node_data in run_data.items():
            if isinstance(node_data, list) and node_data:
                node_execution = node_data[0]
                start_time = node_execution.get('startTime')
                execution_time = node_execution.get('executionTime')

                if start_time and execution_time:
                    timings[node_name] = {
                        'start_time': start_time,
                        'execution_time': execution_time,
                        'data_count': len(node_execution.get('data', {}).get('main', []))
                    }

        return timings

    def _identify_performance_issues(self, node_timings: Dict) -> List[Dict]:
        """Identify performance issues from node timing data"""
        issues = []

        for node_name, timing in node_timings.items():
            execution_time = timing.get('execution_time', 0)

            # Flag nodes taking longer than 30 seconds
            if execution_time > 30000:  # milliseconds
                issues.append({
                    'type': 'slow_node',
                    'node': node_name,
                    'execution_time': execution_time,
                    'description': f"Node '{node_name}' took {execution_time/1000:.2f}s to execute"
                })

        return issues

    def _calculate_performance_metrics(self, execution_analyses: List[Dict]) -> Dict:
        """Calculate performance metrics from execution analyses"""
        if not execution_analyses:
            return {}

        durations = [analysis['total_duration'] for analysis in execution_analyses if analysis['total_duration'] > 0]
        error_count = len([analysis for analysis in execution_analyses if analysis['status'] == 'error'])

        return {
            'avg_duration': sum(durations) / len(durations) if durations else 0,
            'max_duration': max(durations) if durations else 0,
            'min_duration': min(durations) if durations else 0,
            'error_rate': (error_count / len(execution_analyses)) * 100,
            'total_executions': len(execution_analyses)
        }

    def _calculate_complexity_score(self, nodes: List[Dict], connections: Dict) -> float:
        """Calculate workflow complexity score (0-100)"""
        node_count = len(nodes)
        connection_count = sum(len(conns.get('main', [])) for conns in connections.values())

        # Simple complexity calculation
        complexity = (node_count * 2) + connection_count

        # Normalize to 0-100 scale
        return min(complexity / 2, 100)

    def _calculate_health_score(self, structure_analysis: Dict, execution_analyses: List[Dict], error_patterns: List[Dict]) -> float:
        """Calculate overall workflow health score (0-100)"""
        score = 100.0

        # Deduct points for structural issues
        high_severity_issues = len([issue for issue in structure_analysis['issues'] if issue.get('severity') == 'high'])
        medium_severity_issues = len([issue for issue in structure_analysis['issues'] if issue.get('severity') == 'medium'])

        score -= (high_severity_issues * 20)
        score -= (medium_severity_issues * 10)

        # Deduct points for execution errors
        if execution_analyses:
            error_rate = len([analysis for analysis in execution_analyses if analysis['status'] == 'error']) / len(execution_analyses)
            score -= (error_rate * 50)

        # Deduct points for recurring error patterns
        for pattern in error_patterns:
            if pattern['frequency'] > 1:
                score -= min(pattern['frequency'] * 5, 30)

        return max(score, 0)

    def _generate_suggestions(self, issues: List[Dict]) -> List[str]:
        """Generate improvement suggestions based on issues"""
        suggestions = []

        for issue in issues:
            issue_type = issue.get('type')

            if issue_type == 'missing_error_handling':
                suggestions.append("Add error handling to HTTP and webhook nodes using 'Continue on Fail' option")
            elif issue_type == 'hardcoded_credentials':
                suggestions.append("Move credentials to N8N credential store for better security")
            elif issue_type == 'timeout_issues':
                suggestions.append("Review and adjust timeout settings based on expected response times")
            elif issue_type == 'workflow_complexity':
                suggestions.append("Consider breaking complex workflow into smaller, manageable sub-workflows")
            elif issue_type == 'disconnected_nodes':
                suggestions.append("Remove unused nodes or connect them to the workflow")

        return list(set(suggestions))  # Remove duplicates

    def _suggest_error_fixes(self, error_patterns: List[Dict]) -> List[str]:
        """Suggest fixes for common error patterns"""
        suggestions = []

        for pattern in error_patterns:
            pattern_type = pattern['pattern']

            if pattern_type == 'timeout_error':
                suggestions.append("Increase timeout settings or implement retry logic for timeout-prone operations")
            elif pattern_type == 'connection_error':
                suggestions.append("Add connection retry logic and check network connectivity")
            elif pattern_type == 'auth_error':
                suggestions.append("Verify and refresh authentication credentials")
            elif pattern_type == 'rate_limit_error':
                suggestions.append("Implement rate limiting and backoff strategies")
            elif pattern_type == 'validation_error':
                suggestions.append("Add input validation and data sanitization steps")

        return suggestions


if __name__ == "__main__":
    # Quick test of the analyzer
    analyzer = WorkflowAnalyzer()
    print("Workflow Analyzer initialized successfully.")
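A typical way to exercise the analyzer end to end is to feed `generate_health_report` a workflow dict and its recent executions as returned by the REST API. A self-contained sketch with stand-in data (real dicts would come from `GET /rest/workflows/{id}` and `GET /rest/executions`):

```python
from workflow_analyzer import WorkflowAnalyzer  # lives in claude_n8n/tools

analyzer = WorkflowAnalyzer()

# Minimal stand-in payloads; an empty workflow scores a clean 100.
workflow = {'id': 'demo', 'nodes': [], 'connections': {}}
executions = []

report = analyzer.generate_health_report(workflow, executions)
print(f"Health score: {report.health_score:.0f}/100")
for issue in report.issues:
    print(f"- [{issue.get('severity')}] {issue.get('description')}")
for suggestion in report.suggestions:
    print(f"* {suggestion}")
```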
445
claude_n8n/tools/workflow_controller.py
Normal file
@@ -0,0 +1,445 @@
"""
Workflow Controller for N8N

This module provides functionality to control N8N workflows - stopping all workflows
and managing workflow activation states for testing purposes.
"""

import logging
from typing import List, Dict, Any, Optional
from .n8n_client import N8NClient

logger = logging.getLogger(__name__)


class WorkflowController:
    """Controller for managing N8N workflow states."""

    def __init__(self, client: Optional[N8NClient] = None):
        """
        Initialize the workflow controller.

        Args:
            client: N8N client instance. If None, creates a new one.
        """
        self.client = client or N8NClient()
        self._original_states = {}  # Store original workflow states for restoration

    def stop_all_workflows(self, exclude_ids: Optional[List[str]] = None) -> Dict[str, Any]:
        """
        Stop (deactivate) all workflows except those in exclude list.

        Args:
            exclude_ids: List of workflow IDs to exclude from stopping

        Returns:
            Summary of stopped workflows
        """
        exclude_ids = exclude_ids or []
        workflows = self.client.list_workflows()

        stopped = []
        failed = []
        skipped = []

        for workflow in workflows:
            workflow_id = workflow.get('id')
            workflow_name = workflow.get('name', 'Unknown')
            is_active = workflow.get('active', False)

            if workflow_id in exclude_ids:
                skipped.append({
                    'id': workflow_id,
                    'name': workflow_name,
                    'reason': 'excluded'
                })
                continue

            if not is_active:
                skipped.append({
                    'id': workflow_id,
                    'name': workflow_name,
                    'reason': 'already_inactive'
                })
                continue

            # Store original state for restoration
            self._original_states[workflow_id] = {
                'active': is_active,
                'name': workflow_name
            }

            try:
                # Deactivate workflow
                updated_workflow = {
                    **workflow,
                    'active': False
                }
                self.client.update_workflow(workflow_id, updated_workflow)

                stopped.append({
                    'id': workflow_id,
                    'name': workflow_name,
                    'was_active': is_active
                })
                logger.info(f"Stopped workflow: {workflow_name} ({workflow_id})")

            except Exception as e:
                failed.append({
                    'id': workflow_id,
                    'name': workflow_name,
                    'error': str(e)
                })
                logger.error(f"Failed to stop workflow {workflow_name}: {e}")

        summary = {
            'stopped': stopped,
            'failed': failed,
            'skipped': skipped,
            'total_processed': len(workflows),
            'stopped_count': len(stopped),
            'failed_count': len(failed),
            'skipped_count': len(skipped)
        }

        logger.info(f"Workflow stop summary: {summary['stopped_count']} stopped, "
                    f"{summary['failed_count']} failed, {summary['skipped_count']} skipped")

        return summary

    def start_workflow(self, workflow_id: str) -> Dict[str, Any]:
        """
        Start (activate) a specific workflow.

        Args:
            workflow_id: ID of the workflow to start

        Returns:
            Result of the operation
        """
        try:
            workflow = self.client.get_workflow(workflow_id)
            if workflow.get('active', False):
                return {
                    'success': True,
                    'message': f"Workflow {workflow.get('name', workflow_id)} is already active",
                    'was_already_active': True
                }

            # Activate workflow
            updated_workflow = {
                **workflow,
                'active': True
            }
            result = self.client.update_workflow(workflow_id, updated_workflow)

            logger.info(f"Started workflow: {workflow.get('name', workflow_id)} ({workflow_id})")

            return {
                'success': True,
                'message': f"Successfully started workflow {workflow.get('name', workflow_id)}",
                'workflow': result,
                'was_already_active': False
            }

        except Exception as e:
            error_msg = f"Failed to start workflow {workflow_id}: {e}"
            logger.error(error_msg)
            return {
                'success': False,
                'error': error_msg,
                'workflow_id': workflow_id
            }

    def stop_workflow(self, workflow_id: str) -> Dict[str, Any]:
        """
        Stop (deactivate) a specific workflow.

        Args:
            workflow_id: ID of the workflow to stop

        Returns:
            Result of the operation
        """
        try:
            workflow = self.client.get_workflow(workflow_id)
            if not workflow.get('active', False):
                return {
                    'success': True,
                    'message': f"Workflow {workflow.get('name', workflow_id)} is already inactive",
                    'was_already_inactive': True
                }

            # Store original state
            self._original_states[workflow_id] = {
                'active': True,
                'name': workflow.get('name', 'Unknown')
            }

            # Deactivate workflow
            updated_workflow = {
                **workflow,
                'active': False
            }
            result = self.client.update_workflow(workflow_id, updated_workflow)

            logger.info(f"Stopped workflow: {workflow.get('name', workflow_id)} ({workflow_id})")

            return {
                'success': True,
                'message': f"Successfully stopped workflow {workflow.get('name', workflow_id)}",
                'workflow': result,
                'was_already_inactive': False
            }

        except Exception as e:
            error_msg = f"Failed to stop workflow {workflow_id}: {e}"
            logger.error(error_msg)
            return {
                'success': False,
                'error': error_msg,
                'workflow_id': workflow_id
            }

    def restore_original_states(self) -> Dict[str, Any]:
        """
        Restore workflows to their original states before stopping.

        Returns:
            Summary of restoration results
        """
        if not self._original_states:
            return {
                'restored': [],
                'failed': [],
                'message': 'No original states to restore'
            }

        restored = []
        failed = []

        for workflow_id, original_state in self._original_states.items():
            try:
                workflow = self.client.get_workflow(workflow_id)

                # Only restore if original state was active
                if original_state['active']:
                    updated_workflow = {
                        **workflow,
                        'active': True
                    }
                    self.client.update_workflow(workflow_id, updated_workflow)

                    restored.append({
                        'id': workflow_id,
                        'name': original_state['name'],
                        'restored_to': 'active'
                    })
                    logger.info(f"Restored workflow: {original_state['name']} ({workflow_id})")

            except Exception as e:
                failed.append({
                    'id': workflow_id,
                    'name': original_state['name'],
                    'error': str(e)
                })
                logger.error(f"Failed to restore workflow {original_state['name']}: {e}")

        # Clear stored states after restoration attempt
        self._original_states.clear()

        summary = {
            'restored': restored,
            'failed': failed,
            'restored_count': len(restored),
            'failed_count': len(failed)
        }

        logger.info(f"Restoration summary: {summary['restored_count']} restored, "
                    f"{summary['failed_count']} failed")

        return summary

    def get_workflow_states(self) -> List[Dict[str, Any]]:
        """
        Get current state of all workflows.

        Returns:
            List of workflow states
        """
        workflows = self.client.list_workflows()
        states = []

        for workflow in workflows:
            states.append({
                'id': workflow.get('id'),
                'name': workflow.get('name', 'Unknown'),
                'active': workflow.get('active', False),
                'created_at': workflow.get('createdAt'),
                'updated_at': workflow.get('updatedAt'),
                'nodes_count': len(workflow.get('nodes', [])),
                'connections_count': len(workflow.get('connections', {}))
            })

        return states

    def set_workflow_inactive_with_manual_trigger(self, workflow_id: str) -> Dict[str, Any]:
        """
        Set a workflow to inactive state but ensure it has a manual trigger.
        This is useful for testing workflows manually.

        Args:
            workflow_id: ID of the workflow to modify

        Returns:
            Result of the operation
        """
        try:
            workflow = self.client.get_workflow(workflow_id)
            workflow_name = workflow.get('name', 'Unknown')

            # Check if workflow has a manual trigger node
            nodes = workflow.get('nodes', [])
            has_manual_trigger = any(
                node.get('type') == 'n8n-nodes-base.manualTrigger'
                for node in nodes
            )

            # Store original state
            self._original_states[workflow_id] = {
                'active': workflow.get('active', False),
                'name': workflow_name
            }

            # Set workflow to inactive
            updated_workflow = {
                **workflow,
                'active': False
            }

            # If no manual trigger exists, add one
            if not has_manual_trigger:
                logger.info(f"Adding manual trigger to workflow: {workflow_name}")

                # Create manual trigger node
                manual_trigger_node = {
                    "id": f"manual_trigger_{workflow_id}",
                    "name": "Manual Trigger",
                    "type": "n8n-nodes-base.manualTrigger",
                    "typeVersion": 1,
                    "position": [100, 100],
                    "parameters": {}
                }

                # Add the manual trigger node
                updated_workflow['nodes'] = [manual_trigger_node] + nodes

                # Update connections to include manual trigger
                # This is a simplified approach - in practice, you might need more sophisticated logic
                if not updated_workflow.get('connections'):
                    updated_workflow['connections'] = {}

                # Connect manual trigger to first node if there are other nodes
                if nodes:
                    first_node_name = nodes[0].get('name')
                    if first_node_name:
                        updated_workflow['connections']['Manual Trigger'] = {
                            'main': [[{'node': first_node_name, 'type': 'main', 'index': 0}]]
                        }

            result = self.client.update_workflow(workflow_id, updated_workflow)

            logger.info(f"Set workflow to inactive with manual trigger: {workflow_name} ({workflow_id})")

            return {
                'success': True,
                'message': f"Successfully set {workflow_name} to inactive with manual trigger",
                'workflow': result,
                'had_manual_trigger': has_manual_trigger,
                'added_manual_trigger': not has_manual_trigger
            }

        except Exception as e:
            error_msg = f"Failed to set workflow to inactive with manual trigger {workflow_id}: {e}"
            logger.error(error_msg)
            return {
                'success': False,
                'error': error_msg,
                'workflow_id': workflow_id
            }

    def get_active_workflows(self) -> List[Dict[str, Any]]:
        """
        Get list of currently active workflows.

        Returns:
            List of active workflows
        """
        workflows = self.client.list_workflows()
        active_workflows = [
            {
                'id': w.get('id'),
                'name': w.get('name', 'Unknown'),
                'nodes_count': len(w.get('nodes', [])),
                'created_at': w.get('createdAt'),
                'updated_at': w.get('updatedAt')
            }
            for w in workflows
            if w.get('active', False)
        ]

        return active_workflows

    def emergency_stop_all(self) -> Dict[str, Any]:
        """
        Emergency stop of all workflows without storing states.
        Use this when you need to quickly stop everything.

        Returns:
            Summary of emergency stop results
        """
        logger.warning("Emergency stop initiated - stopping all workflows")

        workflows = self.client.list_workflows()
        stopped = []
        failed = []

        for workflow in workflows:
            workflow_id = workflow.get('id')
            workflow_name = workflow.get('name', 'Unknown')

            if not workflow.get('active', False):
                continue  # Skip already inactive workflows

            try:
                updated_workflow = {
                    **workflow,
                    'active': False
                }
                self.client.update_workflow(workflow_id, updated_workflow)

                stopped.append({
                    'id': workflow_id,
                    'name': workflow_name
                })

            except Exception as e:
                failed.append({
                    'id': workflow_id,
                    'name': workflow_name,
                    'error': str(e)
                })

        summary = {
            'stopped': stopped,
            'failed': failed,
            'stopped_count': len(stopped),
            'failed_count': len(failed),
            'message': f"Emergency stop completed: {len(stopped)} stopped, {len(failed)} failed"
        }

        logger.info(summary['message'])
        return summary


def create_workflow_controller():
    """Create a workflow controller instance."""
    return WorkflowController()
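The intended test pattern here is a stop/test/restore bracket: quiesce everything except the workflow under test, run the test, then put the instance back the way it was. A sketch of that pattern (the import assumes the package-relative `n8n_client` resolves in your environment):

```python
from workflow_controller import WorkflowController  # claude_n8n/tools package

controller = WorkflowController()
try:
    # Quiesce the instance, keeping only the workflow under test active.
    summary = controller.stop_all_workflows(exclude_ids=['w6Sz5trluur5qdMj'])
    print(f"Stopped {summary['stopped_count']}, skipped {summary['skipped_count']}")
    # ... run tests against the single active workflow here ...
finally:
    # Reactivate whatever was active before, even if the test run failed.
    controller.restore_original_states()
```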
460
claude_n8n/tools/workflow_improver.py
Normal file
@@ -0,0 +1,460 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Workflow Improver - Iterative improvement and testing framework for N8N workflows
|
||||
Implements automated testing, optimization, and iterative refinement capabilities
|
||||
"""
|
||||
|
||||
import json
|
||||
import copy
|
||||
from typing import Dict, List, Optional, Tuple, Any
|
||||
from dataclasses import dataclass
|
||||
from datetime import datetime
|
||||
import logging
|
||||
|
||||
|
||||
@dataclass
|
||||
class TestCase:
|
||||
"""Represents a test case for workflow validation"""
|
||||
name: str
|
||||
input_data: Dict
|
||||
expected_output: Optional[Dict] = None
|
||||
expected_status: str = "success"
|
||||
description: str = ""
|
||||
|
||||
|
||||
@dataclass
|
||||
class ImprovementResult:
|
||||
"""Result of workflow improvement iteration"""
|
||||
iteration: int
|
||||
original_workflow: Dict
|
||||
improved_workflow: Dict
|
||||
test_results: List[Dict]
|
||||
improvements_made: List[str]
|
||||
performance_metrics: Dict
|
||||
success: bool
|
||||
error_message: Optional[str] = None
|
||||
|
||||
|
||||
class WorkflowImprover:
|
||||
"""Implements iterative workflow improvement and testing"""
|
||||
|
||||
def __init__(self, n8n_client, analyzer, monitor):
|
||||
"""Initialize workflow improver"""
|
||||
self.client = n8n_client
|
||||
self.analyzer = analyzer
|
||||
self.monitor = monitor
|
||||
self.logger = self._setup_logger()
|
||||
|
||||
# Improvement strategies
|
||||
self.improvement_strategies = {
|
||||
'add_error_handling': self._add_error_handling,
|
||||
'optimize_timeouts': self._optimize_timeouts,
|
||||
'add_retry_logic': self._add_retry_logic,
|
||||
'improve_validation': self._improve_validation,
|
||||
'optimize_performance': self._optimize_performance,
|
||||
'fix_connections': self._fix_connections
|
||||
}
|
||||
|
||||
def _setup_logger(self) -> logging.Logger:
|
||||
"""Setup logging for the improver"""
|
||||
logger = logging.getLogger('N8NWorkflowImprover')
|
||||
logger.setLevel(logging.INFO)
|
||||
|
||||
if not logger.handlers:
|
||||
handler = logging.StreamHandler()
|
||||
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
|
||||
handler.setFormatter(formatter)
|
||||
logger.addHandler(handler)
|
||||
|
||||
return logger
|
||||
|
||||
def create_test_suite(self, workflow: Dict, sample_data: List[Dict] = None) -> List[TestCase]:
|
||||
"""Create comprehensive test suite for a workflow"""
|
||||
test_cases = []
|
||||
|
||||
# Basic functionality test
|
||||
test_cases.append(TestCase(
|
||||
name="basic_functionality",
|
||||
input_data=sample_data[0] if sample_data else {},
|
||||
expected_status="success",
|
||||
description="Test basic workflow functionality"
|
||||
))
|
||||
|
||||
# Error handling tests
|
||||
test_cases.append(TestCase(
|
||||
name="invalid_input",
|
||||
input_data={"invalid": "data"},
|
||||
expected_status="error",
|
||||
description="Test error handling with invalid input"
|
||||
))
|
||||
|
||||
# Empty data test
|
||||
test_cases.append(TestCase(
|
||||
name="empty_input",
|
||||
input_data={},
|
||||
expected_status="success",
|
||||
description="Test workflow with empty input data"
|
||||
))
|
||||
|
||||
# Large data test (if applicable)
|
||||
if sample_data and len(sample_data) > 1:
|
||||
test_cases.append(TestCase(
|
||||
name="large_dataset",
|
||||
input_data={"batch": sample_data},
|
||||
expected_status="success",
|
||||
description="Test workflow with larger dataset"
|
||||
))
|
||||
|
||||
return test_cases
|
||||
|
||||

    def run_test_suite(self, workflow_id: str, test_cases: List[TestCase]) -> List[Dict]:
        """Run complete test suite against a workflow"""
        results = []

        for test_case in test_cases:
            self.logger.info(f"Running test case: {test_case.name}")

            try:
                # Execute workflow with test data
                execution_event = self.monitor.execute_and_monitor(
                    workflow_id,
                    test_case.input_data,
                    timeout=120
                )

                # Analyze results
                test_result = {
                    'test_name': test_case.name,
                    'description': test_case.description,
                    'input_data': test_case.input_data,
                    'expected_status': test_case.expected_status,
                    'actual_status': execution_event.status.value,
                    'execution_time': execution_event.duration,
                    'passed': execution_event.status.value == test_case.expected_status,
                    'execution_id': execution_event.execution_id,
                    'error_message': execution_event.error_message,
                    'timestamp': datetime.now().isoformat()
                }

                # Validate output if expected output is provided
                if test_case.expected_output and execution_event.node_data:
                    output_match = self._validate_output(
                        execution_event.node_data,
                        test_case.expected_output
                    )
                    test_result['output_validation'] = output_match
                    test_result['passed'] = test_result['passed'] and output_match

                results.append(test_result)

            except Exception as e:
                self.logger.error(f"Test case {test_case.name} failed with exception: {e}")
                results.append({
                    'test_name': test_case.name,
                    'description': test_case.description,
                    'passed': False,
                    'error_message': str(e),
                    'timestamp': datetime.now().isoformat()
                })

        return results
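
    # Each entry returned by run_test_suite is a plain dict, so a caller can
    # summarize a run without extra tooling. A minimal sketch (the workflow id
    # and suite variable are illustrative):
    #
    #   results = improver.run_test_suite("wf_123", suite)
    #   passed = sum(1 for r in results if r.get('passed'))
    #   print(f"{passed}/{len(results)} test cases passed")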

    def iterative_improvement(self, workflow_id: str, test_cases: List[TestCase],
                              max_iterations: int = 5) -> List[ImprovementResult]:
        """Perform iterative improvement on a workflow"""
        results = []
        current_workflow = self.client.get_workflow(workflow_id)

        for iteration in range(max_iterations):
            self.logger.info(f"Starting improvement iteration {iteration + 1}")

            try:
                # Run tests on current workflow
                test_results = self.run_test_suite(workflow_id, test_cases)

                # Analyze workflow for issues
                analysis = self.analyzer.analyze_workflow_structure(current_workflow)

                # Check if workflow is already performing well
                passed_tests = len([r for r in test_results if r.get('passed', False)])
                test_success_rate = passed_tests / len(test_results) if test_results else 0

                if test_success_rate >= 0.9 and len(analysis['issues']) == 0:
                    self.logger.info("Workflow is already performing well, no improvements needed")
                    break

                # Generate improvements
                improved_workflow, improvements_made = self._generate_improvements(
                    current_workflow, analysis, test_results
                )

                if not improvements_made:
                    self.logger.info("No more improvements can be made")
                    break

                # Apply improvements
                self.client.update_workflow(workflow_id, improved_workflow)

                # Run tests again to validate improvements
                new_test_results = self.run_test_suite(workflow_id, test_cases)

                # Calculate performance metrics
                performance_metrics = self._calculate_performance_improvement(
                    test_results, new_test_results
                )

                result = ImprovementResult(
                    iteration=iteration + 1,
                    original_workflow=current_workflow,
                    improved_workflow=improved_workflow,
                    test_results=new_test_results,
                    improvements_made=improvements_made,
                    performance_metrics=performance_metrics,
                    success=True
                )

                results.append(result)
                current_workflow = improved_workflow

                self.logger.info(f"Iteration {iteration + 1} completed with {len(improvements_made)} improvements")

            except Exception as e:
                self.logger.error(f"Error in iteration {iteration + 1}: {e}")
                result = ImprovementResult(
                    iteration=iteration + 1,
                    original_workflow=current_workflow,
                    improved_workflow=current_workflow,
                    test_results=[],
                    improvements_made=[],
                    performance_metrics={},
                    success=False,
                    error_message=str(e)
                )
                results.append(result)
                break

        return results
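
    # The improvement loop stops in one of three ways: the success-rate
    # threshold is met, a pass produces no new improvements, or an exception
    # aborts the run. A caller can replay what changed per iteration, e.g.
    # (workflow id and suite are illustrative):
    #
    #   for r in improver.iterative_improvement("wf_123", suite):
    #       status = "ok" if r.success else f"failed: {r.error_message}"
    #       print(f"iteration {r.iteration} ({status}): {r.improvements_made}")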

    def _generate_improvements(self, workflow: Dict, analysis: Dict,
                               test_results: List[Dict]) -> Tuple[Dict, List[str]]:
        """Generate workflow improvements based on analysis and test results"""
        improved_workflow = copy.deepcopy(workflow)
        improvements_made = []

        # Apply improvements based on structural issues
        for issue in analysis.get('issues', []):
            issue_type = issue.get('type')

            if issue_type in self.improvement_strategies:
                strategy_func = self.improvement_strategies[issue_type]
                workflow_modified, improvement_desc = strategy_func(
                    improved_workflow, issue
                )

                if workflow_modified:
                    improvements_made.append(improvement_desc)

        # Apply improvements based on test failures
        failed_tests = [r for r in test_results if not r.get('passed', False)]
        for test_result in failed_tests:
            improvement = self._improve_based_on_test_failure(
                improved_workflow, test_result
            )
            if improvement:
                improvements_made.append(improvement)

        return improved_workflow, improvements_made
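
    # Each strategy receives one issue dict produced by the analyzer. The
    # shape the handlers below rely on (inferred from their .get() calls) is
    # roughly:
    #
    #   {'type': 'fix_connections',            # key into improvement_strategies
    #    'node': 'Fetch Orders',               # affected node name, if any
    #    'description': 'Disconnected nodes found: Old Node'}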

    def _add_error_handling(self, workflow: Dict, issue: Dict) -> Tuple[bool, str]:
        """Add error handling to nodes"""
        node_name = issue.get('node')
        if not node_name:
            return False, ""

        # Find the node and add error handling. In n8n's workflow JSON these
        # flags live on the node itself, not inside its parameters block.
        for node in workflow.get('nodes', []):
            if node.get('name') == node_name:
                node['continueOnFail'] = True

                # Add retry settings for HTTP request nodes
                node_type = node.get('type', '')
                if 'httpRequest' in node_type:
                    node['retryOnFail'] = True
                    node['maxTries'] = 3
                    node['waitBetweenTries'] = 1000  # milliseconds

                return True, f"Added error handling to node '{node_name}'"

        return False, ""

    def _optimize_timeouts(self, workflow: Dict, issue: Dict) -> Tuple[bool, str]:
        """Optimize timeout settings"""
        node_name = issue.get('node')
        if not node_name:
            return False, ""

        for node in workflow.get('nodes', []):
            if node.get('name') == node_name:
                # setdefault keeps the mutation attached to the node even when
                # no parameters block exists yet
                parameters = node.setdefault('parameters', {})
                current_timeout = parameters.get('timeout', 300)

                # Raise timeouts that are too aggressive
                if current_timeout < 60:
                    parameters['timeout'] = 60
                    return True, f"Increased timeout for node '{node_name}' to 60 seconds"

        return False, ""

    def _add_retry_logic(self, workflow: Dict, issue: Dict) -> Tuple[bool, str]:
        """Add retry logic to nodes"""
        # Placeholder: would add retry nodes or modify existing nodes with retry parameters
        return False, "Retry logic addition not implemented"

    def _improve_validation(self, workflow: Dict, issue: Dict) -> Tuple[bool, str]:
        """Improve input validation"""
        # Placeholder: would add validation nodes or improve existing validation
        return False, "Validation improvement not implemented"

    def _optimize_performance(self, workflow: Dict, issue: Dict) -> Tuple[bool, str]:
        """Optimize workflow performance"""
        # Placeholder: could optimize loops, reduce unnecessary operations, etc.
        return False, "Performance optimization not implemented"

    def _fix_connections(self, workflow: Dict, issue: Dict) -> Tuple[bool, str]:
        """Fix disconnected nodes"""
        description = issue.get('description', '')

        # Extract disconnected node names from the description; splitting on
        # the marker itself avoids breaking on other colons in the text
        marker = "Disconnected nodes found:"
        if marker in description:
            disconnected_nodes = [
                name.strip()
                for name in description.split(marker, 1)[1].split(",")
            ]

            # Remove disconnected nodes
            original_count = len(workflow.get('nodes', []))
            workflow['nodes'] = [
                node for node in workflow.get('nodes', [])
                if node.get('name') not in disconnected_nodes
            ]

            removed_count = original_count - len(workflow['nodes'])
            if removed_count > 0:
                return True, f"Removed {removed_count} disconnected nodes"

        return False, ""

    def _improve_based_on_test_failure(self, workflow: Dict, test_result: Dict) -> Optional[str]:
        """Improve workflow based on specific test failure"""
        test_name = test_result.get('test_name')
        # error_message may be stored as None, so normalize before .lower()
        error_message = test_result.get('error_message') or ''

        if test_name == "invalid_input" and "validation" in error_message.lower():
            # Placeholder: records the intended fix; validation nodes are not
            # actually inserted here yet
            return "Added input validation based on test failure"

        elif "timeout" in error_message.lower():
            # Placeholder: records the intended fix; timeouts are not actually
            # raised here yet
            return "Increased timeouts based on test failure"

        return None

    def _validate_output(self, actual_output: Dict, expected_output: Dict) -> bool:
        """Validate workflow output against expected results"""
        try:
            # Simple validation - check if expected keys exist and values match
            for key, expected_value in expected_output.items():
                if key not in actual_output:
                    return False

                if isinstance(expected_value, dict):
                    # Recurse into nested structures
                    if not self._validate_output(actual_output[key], expected_value):
                        return False
                elif actual_output[key] != expected_value:
                    return False

            return True
        except Exception:
            return False
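
    # Validation is a recursive subset check: every expected key must be
    # present with a matching value, while extra keys in the actual output are
    # ignored. For example (illustrative values):
    #
    #   actual   = {'status': 'ok', 'user': {'id': 7, 'name': 'Ada'}, 'extra': 1}
    #   expected = {'status': 'ok', 'user': {'id': 7}}
    #   self._validate_output(actual, expected)  # -> True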

    def _calculate_performance_improvement(self, old_results: List[Dict],
                                           new_results: List[Dict]) -> Dict:
        """Calculate performance improvement metrics"""
        old_success_rate = len([r for r in old_results if r.get('passed', False)]) / len(old_results) if old_results else 0
        new_success_rate = len([r for r in new_results if r.get('passed', False)]) / len(new_results) if new_results else 0

        # Average only over results that actually report an execution time,
        # so tests that errored out before running don't skew the mean
        old_times = [r['execution_time'] for r in old_results if r.get('execution_time')]
        new_times = [r['execution_time'] for r in new_results if r.get('execution_time')]
        old_avg_time = sum(old_times) / len(old_times) if old_times else 0
        new_avg_time = sum(new_times) / len(new_times) if new_times else 0

        return {
            'success_rate_improvement': new_success_rate - old_success_rate,
            'performance_improvement_percent': ((old_avg_time - new_avg_time) / old_avg_time * 100) if old_avg_time > 0 else 0,
            'old_success_rate': old_success_rate,
            'new_success_rate': new_success_rate,
            'old_avg_execution_time': old_avg_time,
            'new_avg_execution_time': new_avg_time
        }

    def create_test_data_from_execution(self, execution_id: str) -> Dict:
        """Create test data from a successful execution"""
        try:
            execution = self.client.get_execution(execution_id)

            if execution.get('status') != 'success':
                raise ValueError("Can only create test data from successful executions")

            # Extract input data from the execution
            data = execution.get('data', {})
            if 'resultData' in data and 'runData' in data['resultData']:
                run_data = data['resultData']['runData']

                # Find the trigger or start node data
                for node_name, node_runs in run_data.items():
                    if node_runs and 'data' in node_runs[0]:
                        node_data = node_runs[0]['data']
                        # Guard against empty output branches before indexing
                        if node_data.get('main') and node_data['main'][0]:
                            return node_data['main'][0][0]  # First item of first output

            return {}

        except Exception as e:
            self.logger.error(f"Error creating test data from execution: {e}")
            return {}
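
    # The traversal above assumes n8n's execution payload nesting, roughly:
    #
    #   {'data': {'resultData': {'runData': {
    #       'Webhook': [{'data': {'main': [[{'json': {...}}]]}}]
    #   }}}}
    #
    # i.e. runData maps node names to run lists, and each run's 'main' output
    # is a list of branches, each holding a list of items.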

    def benchmark_workflow(self, workflow_id: str, iterations: int = 10) -> Dict:
        """Benchmark workflow performance"""
        results = []

        for i in range(iterations):
            try:
                execution_event = self.monitor.execute_and_monitor(workflow_id, {})
                results.append({
                    'iteration': i + 1,
                    'status': execution_event.status.value,
                    'duration': execution_event.duration,
                    'success': execution_event.status.value == 'success'
                })
            except Exception as e:
                results.append({
                    'iteration': i + 1,
                    'status': 'error',
                    'duration': None,
                    'success': False,
                    'error': str(e)
                })

        successful_runs = [r for r in results if r['success']]
        durations = [r['duration'] for r in successful_runs if r['duration']]

        return {
            'total_iterations': iterations,
            'successful_runs': len(successful_runs),
            'success_rate': len(successful_runs) / iterations * 100,
            'average_duration': sum(durations) / len(durations) if durations else 0,
            'min_duration': min(durations) if durations else 0,
            'max_duration': max(durations) if durations else 0,
            'detailed_results': results
        }


if __name__ == "__main__":
    print("Workflow Improver initialized successfully.")