Agent Daily
Video · Intermediate

The LESSONS.md System That Stops My OpenClaw AI Agents From Repeating Mistakes

By Andrew
View original on YouTube

This content describes a LESSONS.md system designed to prevent AI agents from repeating mistakes by maintaining a persistent knowledge base of errors and corrections. The system captures what went wrong, why it happened, and how to avoid it in the future, creating a learning mechanism that persists across agent interactions. By implementing this approach, developers can significantly reduce recurring errors in OpenClaw AI agents and improve overall reliability.

Key Points

  • Create a LESSONS.md file to document mistakes, root causes, and solutions for your AI agents
  • Record each error with context: what the agent did wrong, why it failed, and the correct approach
  • Make the LESSONS.md file accessible to the agent in its system prompt or knowledge base for reference
  • Update LESSONS.md after each significant error to build a growing knowledge base of corrections
  • Reference relevant lessons before critical tasks to prime the agent with past learnings
  • Use specific, actionable language in lessons rather than vague corrections
  • Organize lessons by category or task type for easier retrieval and application
  • Review LESSONS.md periodically to identify patterns in recurring mistakes
  • Share LESSONS.md across agent instances to prevent the same errors in different deployments
  • Implement a feedback loop where agent errors automatically trigger lesson creation
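The last point, an automatic feedback loop, can be sketched in a few lines of Python. This is a minimal illustration, not code from the video: the names `record_lesson` and `run_with_feedback` are my own, and it simply appends a draft entry in the LESSONS.md format whenever a wrapped task raises an exception, leaving root-cause and solution fields for a human to fill in during triage.

```python
from datetime import date
from pathlib import Path

LESSONS_FILE = Path("LESSONS.md")

def record_lesson(title, category, error, cause, solution, example=""):
    """Append a lesson entry using the LESSONS.md format from this article."""
    entry = (
        f"\n### Lesson: {title}\n"
        f"**Date**: {date.today():%Y-%m-%d}\n"
        f"**Category**: {category}\n"
        f"**Error**: {error}\n"
        f"**Root Cause**: {cause}\n"
        f"**Solution**: {solution}\n"
        f"**Example**: {example}\n"
    )
    with LESSONS_FILE.open("a") as f:
        f.write(entry)

def run_with_feedback(task_name, task_fn):
    """Run an agent task; on failure, auto-draft a lesson for later review."""
    try:
        return task_fn()
    except Exception as exc:
        record_lesson(
            title=f"Failure in {task_name}",
            category=task_name,
            error=str(exc),
            cause="TODO: fill in after triage",
            solution="TODO: fill in after triage",
        )
        raise
```

Auto-drafted entries should still be reviewed by a human before the agent relies on them, since the root cause and solution are the valuable parts and cannot be inferred from the exception alone.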




Artifacts (3)

LESSONS.md Template (markdown)
# LESSONS.md - Agent Learning Log

## Format
```
### Lesson: [Brief Title]
**Date**: YYYY-MM-DD
**Category**: [Task/Feature]
**Error**: What the agent did wrong
**Root Cause**: Why it happened
**Solution**: How to avoid it
**Example**: Specific example of correct behavior
```

## Example Lessons

### Lesson: Always Validate API Responses
**Date**: 2024-01-15
**Category**: API Integration
**Error**: Agent assumed API response was successful without checking status codes
**Root Cause**: Missing error handling in response parsing
**Solution**: Always check response.status_code == 200 before processing data
**Example**: 
```python
if response.status_code == 200:
    data = response.json()
else:
    log_error(f"API failed: {response.status_code}")
```

### Lesson: Confirm User Intent Before Destructive Actions
**Date**: 2024-01-16
**Category**: User Safety
**Error**: Agent deleted files without confirmation
**Root Cause**: No confirmation step implemented
**Solution**: Always ask for explicit confirmation before delete/modify operations
**Example**: Ask "Are you sure you want to delete [filename]? (yes/no)"

System Prompt Integration (markdown)
You are an AI agent assistant. Before performing any task:

1. Review the LESSONS.md file for relevant past mistakes
2. Check if your planned action matches any documented errors
3. Apply the documented solutions proactively
4. If you encounter an error, document it following the LESSONS.md format

---

## LESSONS.md Context
[INSERT LESSONS.MD CONTENT HERE]

---

Use these lessons to inform your decision-making and avoid repeating past mistakes.
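The `[INSERT LESSONS.MD CONTENT HERE]` placeholder above is meant to be filled programmatically at prompt-build time. A minimal sketch, assuming a Python host application (the function name `build_system_prompt` and the fallback text are my own, not from the video):

```python
from pathlib import Path

# Abbreviated version of the system prompt artifact above; the {lessons}
# slot is where the file contents are substituted.
PROMPT_TEMPLATE = """You are an AI agent assistant. Before performing any task:

1. Review the LESSONS.md file for relevant past mistakes
2. Check if your planned action matches any documented errors
3. Apply the documented solutions proactively

---

## LESSONS.md Context
{lessons}

---

Use these lessons to inform your decision-making and avoid repeating past mistakes.
"""

def build_system_prompt(lessons_path="LESSONS.md"):
    """Read LESSONS.md (if present) and splice it into the system prompt."""
    path = Path(lessons_path)
    lessons = path.read_text() if path.exists() else "(no lessons recorded yet)"
    return PROMPT_TEMPLATE.format(lessons=lessons)
```

Rebuilding the prompt on every agent start (rather than pasting the file in once) keeps the agent in sync with newly recorded lessons.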

Lesson Update Script (bash)
#!/bin/bash
# Script to add a new lesson to LESSONS.md

LESSONS_FILE="LESSONS.md"
DATE=$(date +%Y-%m-%d)

echo "=== Add New Lesson ==="
read -p "Lesson Title: " TITLE
read -p "Category: " CATEGORY
read -p "What went wrong: " ERROR
read -p "Root cause: " CAUSE
read -p "Solution: " SOLUTION
read -p "Example (optional): " EXAMPLE

cat >> "$LESSONS_FILE" << EOF

### Lesson: $TITLE
**Date**: $DATE
**Category**: $CATEGORY
**Error**: $ERROR
**Root Cause**: $CAUSE
**Solution**: $SOLUTION
**Example**: $EXAMPLE
EOF

echo "Lesson added to $LESSONS_FILE"