[Release] openclaw/openclaw v2026.4.20-beta.2: openclaw 2026.4.20-beta.2
By steipete
OpenClaw v2026.4.20-beta.2 is a major release featuring UI/UX improvements to the onboarding wizard, strengthened agent prompts, tiered model pricing support, session maintenance optimizations, and numerous provider-specific enhancements for OpenAI, Anthropic, Moonshot Kimi, and other integrations. The release includes 40+ fixes addressing security, transport normalization, thinking mode handling, and plugin dependency management.
Key Points
- Redesigned onboarding wizard with yellow warning banners, section headings, bulleted checklists, and loading spinners for better UX during model catalog loads
- Enhanced default system prompts with clearer completion bias, live-state checks, weak-result recovery, and verification guidance for improved agent reliability
- Implemented tiered model pricing from cached catalogs, with bundled Moonshot Kimi K2.6/K2.5 cost estimates for accurate token-usage reporting
- Enforced session entry caps and age pruning by default, with oversized stores pruned at load time to prevent OOM issues in the gateway before the write path executes
- Split cron runtime execution state into a separate `jobs-state.json` file while keeping `jobs.json` stable for git-tracked job definitions
- Added opt-in start/completion notices during context compaction for better visibility into agent operations
- Defaulted Moonshot Kimi to the k2.6 model, with k2.5 available for compatibility; enabled `thinking.keep = 'all'` on k2.6 with conditional stripping for other models
- Implemented per-group `systemPrompt` config forwarding in BlueBubbles for group-specific behavioral instructions, with wildcard fallback matching
- Added a detached runtime registration contract for plugin executors to manage task lifecycle and cancellation independently
- Fixed 30+ critical issues, including YOLO exec rejection in security=full mode, legacy OpenAI Codex transport normalization, and thinking mode level resolution across GPT models
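The session-maintenance change above (entry caps plus age pruning applied at load time, before any writes) can be sketched as follows. This is a minimal illustration, not OpenClaw's actual code; the names `SessionEntry`, `pruneSessions`, `maxEntries`, and `maxAgeMs` are assumptions.

```typescript
// Hypothetical sketch of cap-and-age pruning applied when a session store
// is loaded, so oversized stores shrink before the write path runs.
interface SessionEntry {
  id: string;
  updatedAt: number; // epoch milliseconds
}

function pruneSessions(
  entries: SessionEntry[],
  maxEntries: number,
  maxAgeMs: number,
  now: number = Date.now(),
): SessionEntry[] {
  return entries
    .filter((e) => now - e.updatedAt <= maxAgeMs) // drop entries past the age cutoff
    .sort((a, b) => b.updatedAt - a.updatedAt) // newest first
    .slice(0, maxEntries); // enforce the entry cap
}
```

Doing this at load time (rather than only on write) is what prevents an already-oversized store from being held fully in memory and causing OOM.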
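The cron-state split means stable job definitions stay in `jobs.json` (clean for git tracking) while volatile runtime state lives in `jobs-state.json`, keyed by job id and merged at load. A minimal sketch, assuming an illustrative schema (field names here are not OpenClaw's actual schema):

```typescript
// Hypothetical sketch: merge git-tracked job definitions (jobs.json)
// with volatile runtime state (jobs-state.json) keyed by job id.
interface JobDefinition {
  id: string;
  schedule: string; // cron expression
  command: string;
}

interface JobRuntimeState {
  lastRunAt?: number;
  lastStatus?: "ok" | "error";
}

function mergeJobState(
  defs: JobDefinition[],
  state: Record<string, JobRuntimeState>,
): Array<JobDefinition & JobRuntimeState> {
  // Jobs with no recorded run simply carry no runtime fields.
  return defs.map((d) => ({ ...d, ...(state[d.id] ?? {}) }));
}
```

Because only `jobs-state.json` changes on every run, `jobs.json` diffs stay meaningful in version control.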
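The per-group `systemPrompt` forwarding with wildcard fallback (described for the BlueBubbles integration) amounts to an exact-match lookup that falls back to a `"*"` entry. The config shape and function name below are illustrative assumptions, not OpenClaw's actual API:

```typescript
// Hypothetical sketch: resolve a group-specific system prompt,
// falling back to the "*" wildcard entry when no exact match exists.
type GroupConfig = Record<string, { systemPrompt?: string }>;

function resolveSystemPrompt(
  groups: GroupConfig,
  groupName: string,
): string | undefined {
  return groups[groupName]?.systemPrompt ?? groups["*"]?.systemPrompt;
}
```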
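The `thinking.keep = 'all'` behavior for k2.6 with conditional stripping elsewhere can be pictured as a per-model policy over message history: reasoning content is preserved for k2.6 and removed for other models. The `Message` shape and function below are illustrative assumptions:

```typescript
// Hypothetical sketch: keep thinking blocks for kimi-k2.6
// (thinking.keep = 'all'), strip them for other models.
interface Message {
  role: string;
  content: string;
  thinking?: string;
}

function applyThinkingPolicy(messages: Message[], model: string): Message[] {
  if (model === "kimi-k2.6") return messages; // keep = 'all'
  // Strip the thinking field for models that don't accept replayed reasoning.
  return messages.map(({ thinking: _omit, ...rest }) => rest);
}
```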